Scientific literature on the topic "DL. Archives"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "DL. Archives".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "DL. Archives"

1

Buyukdemircioglu, M., S. Kocaman, and M. Kada. "DEEP LEARNING FOR 3D BUILDING RECONSTRUCTION: A REVIEW". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 359–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-359-2022.

Full text
Abstract:
Abstract. 3D building reconstruction using Earth Observation (EO) data (aerial and satellite imagery, point clouds, etc.) is an important and active research topic in different fields, such as photogrammetry, remote sensing, computer vision, and Geographic Information Systems (GIS). Nowadays, 3D city models have become an essential part of 3D GIS environments, and they can be used in many applications and analyses in urban areas. Conventional 3D building reconstruction methods depend heavily on the data quality and source, and manual effort is still needed to generate the object models. Several tasks in photogrammetry and remote sensing, such as image segmentation, classification, and 3D reconstruction, have been revolutionized by deep learning (DL) methods. In this study, we provide a review of state-of-the-art machine learning, and in particular DL, methods for 3D building reconstruction for the purpose of city modelling using EO data. This is the first review focusing on object model generation based on DL methods and EO data. A brief overview of recent building reconstruction studies with DL is also given. We investigate different DL architectures, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), as well as combinations of conventional approaches with DL, and report their advantages and disadvantages. An outlook on future developments of 3D building modelling based on DL is also presented.
APA, Harvard, Vancouver, ISO, etc. styles
2

Palma, V. "TOWARDS DEEP LEARNING FOR ARCHITECTURE: A MONUMENT RECOGNITION MOBILE APP". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 551–56. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-551-2019.

Full text
Abstract:
Abstract. In recent years, the diffusion of large image datasets and unprecedented computational power have boosted the development of a class of artificial intelligence (AI) algorithms referred to as deep learning (DL). Among DL methods, convolutional neural networks (CNNs) have proven particularly effective in computer vision, finding applications in many disciplines. This paper introduces a project aimed at studying CNN techniques in the field of architectural heritage, a research stream still to be developed. The first steps and results in the development of a mobile app to recognize monuments are discussed. While AI is just beginning to interact with the built environment through mobile devices, heritage technologies have long been producing and exploring digital models and spatial archives. The interaction between DL algorithms and state-of-the-art information modeling is addressed as an opportunity both to exploit heritage collections and to optimize new object recognition techniques.
3

Comesaña Cebral, L. J., J. Martínez Sánchez, E. Rúa Fernández, and P. Arias Sánchez. "HEURISTIC GENERATION OF MULTISPECTRAL LABELED POINT CLOUD DATASETS FOR DEEP LEARNING MODELS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 571–76. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-571-2022.

Full text
Abstract:
Abstract. Deep Learning (DL) models need sufficiently large datasets for training, especially those that deal with point clouds. Artificial generation of these datasets can complement real ones by improving the learning rate of DL architectures, and Light Detection and Ranging (LiDAR) scanners can also be studied by comparing their performance on artificial point clouds. This work presents a methodology for simulating LiDAR-based artificial point clouds in order to obtain pre-labelled training datasets for DL models. In addition to the geometry design, a spectral simulation is also performed, so that every point in each cloud has its 3D coordinates (x, y, z), a label indicating the category it belongs to (vegetation, traffic sign, road pavement, …), and an intensity estimate based on physical properties such as reflectance.
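The per-point record described in this abstract (coordinates, a category label, and a reflectance-based intensity) can be illustrated with a minimal synthetic generator. The class set, reflectance values, and inverse-square intensity model below are illustrative assumptions, not the paper's actual simulator:

```python
import numpy as np

# Hypothetical class set and per-class reflectance values (illustrative only).
CLASS_NAMES = ("road pavement", "vegetation", "traffic sign")
REFLECTANCE = np.array([0.2, 0.5, 0.8])  # assumed material reflectance per class, in [0, 1]

def synth_cloud(n_points: int, seed: int = 0) -> np.ndarray:
    """Return an (n_points, 5) array with columns x, y, z, label, intensity."""
    rng = np.random.default_rng(seed)
    xyz = rng.uniform(0.0, 10.0, size=(n_points, 3))       # stand-in for real geometry design
    labels = rng.integers(0, len(CLASS_NAMES), size=n_points)
    rho = REFLECTANCE[labels]                               # reflectance depends on the class
    dist = np.linalg.norm(xyz, axis=1) + 1.0                # range from a scanner at the origin
    intensity = rho / dist**2                               # simple inverse-square return model
    return np.column_stack([xyz, labels, intensity])

cloud = synth_cloud(1000)
```

Because labels are assigned at generation time, the resulting array is directly usable as a labelled training sample without manual annotation, which is the point the abstract makes.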
4

Cao, Y., and M. Scaioni. "LABEL-EFFICIENT DEEP LEARNING-BASED SEMANTIC SEGMENTATION OF BUILDING POINT CLOUDS AT LOD3 LEVEL". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 449–56. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-449-2021.

Full text
Abstract:
Abstract. In recent research, fully supervised Deep Learning (DL) techniques and large amounts of pointwise labels are employed to train a segmentation network to be applied to buildings’ point clouds. However, fine-labelled buildings’ point clouds are hard to find and manually annotating pointwise labels is time-consuming and expensive. Consequently, the application of fully supervised DL for semantic segmentation of buildings’ point clouds at LoD3 level is severely limited. To address this issue, we propose a novel label-efficient DL network that obtains per-point semantic labels of LoD3 buildings’ point clouds with limited supervision. In general, it consists of two steps. The first step (Autoencoder – AE) is composed of a Dynamic Graph Convolutional Neural Network-based encoder and a folding-based decoder, designed to extract discriminative global and local features from input point clouds by reconstructing them without any label. The second step is semantic segmentation. By supplying a small amount of task-specific supervision, a segmentation network is proposed for semantically segmenting the encoded features acquired from the pre-trained AE. Experimentally, we evaluate our approach based on the ArCH dataset. Compared to the fully supervised DL methods, we find that our model achieved state-of-the-art results on the unseen scenes, with only 10% of labelled training data from fully supervised methods as input.
5

He, H., K. Gao, W. Tan, L. Wang, S. N. Fatholahi, N. Chen, M. A. Chapman, and J. Li. "IMPACT OF DEEP LEARNING-BASED SUPER-RESOLUTION ON BUILDING FOOTPRINT EXTRACTION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 31–37. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-31-2022.

Full text
Abstract:
Abstract. Automated building footprint extraction from High Spatial Resolution (HSR) remote sensing images plays an important role in urban planning and management, and in hazard and disease control. However, HSR images are not always available in practice. In such cases, super-resolution methods, especially deep learning (DL)-based ones, can provide higher-resolution images from lower-resolution inputs. Although DL-based super-resolution is widely used across remote sensing applications, few studies have focused on its impact on building footprint extraction, so we present an exploration of this topic. Specifically, we first super-resolve the Massachusetts Building Dataset using bicubic interpolation, a pre-trained Super-Resolution CNN (SRCNN), a pre-trained Residual Channel Attention Network (RCAN), and a pre-trained Residual Feature Aggregation Network (RFANet). Then, using the dataset at its original resolution, as well as the four super-resolved versions of the dataset, we employ the High-Resolution Network (HRNet) v2 to extract building footprints. Our experiments show that super-resolving either the training or the test dataset using the latest high-performance DL-based super-resolution method can improve the accuracy of building footprint extraction. Although SRCNN-based building footprint extraction gives the highest Overall Accuracy, Intersection over Union, and F1 score, we suggest using the latest super-resolution method to process images before building footprint extraction, due to the fixed scale ratio of the pre-trained SRCNN and its low speed of convergence in training.
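This abstract compares extraction results via Overall Accuracy, Intersection over Union, and F1 score; all three reduce to confusion-matrix counts over the predicted and reference masks. A generic sketch for binary building masks (not the paper's evaluation code):

```python
import numpy as np

def binary_seg_metrics(pred: np.ndarray, truth: np.ndarray):
    """Overall Accuracy, IoU, and F1 for binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # building predicted and present
    fp = np.logical_and(pred, ~truth).sum()    # building predicted, absent
    fn = np.logical_and(~pred, truth).sum()    # building missed
    tn = np.logical_and(~pred, ~truth).sum()   # background correctly rejected
    oa = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return oa, iou, f1
```

Note that IoU penalizes both false positives and false negatives in a single ratio, which is why it is often preferred over Overall Accuracy for sparse building masks.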
6

Nurunnabi, A., F. N. Teferle, D. F. Laefer, F. Remondino, I. R. Karas, and J. Li. "kCV-B: BOOTSTRAP WITH CROSS-VALIDATION FOR DEEP LEARNING MODEL DEVELOPMENT, ASSESSMENT AND SELECTION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W3-2022 (December 2, 2022): 111–18. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w3-2022-111-2022.

Full text
Abstract:
Abstract. This study investigates the inability of two popular data-splitting techniques, train/test split and k-fold cross-validation, which are used to create training and validation data sets, to achieve sufficient generality for supervised deep learning (DL) methods. This failure is mainly caused by their limited ability to create new data. The bootstrap, in contrast, is a computer-based statistical resampling method that has been used efficiently for estimating the distribution of a sample estimator and for assessing a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages in terms of data-generation strategy and to achieve better generalization of a DL model. The paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inference on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL-based classification algorithm, PointNet, on aerial laser scanning point cloud data.
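The coupling described in this abstract (k-fold cross-validation for the train/validation partition, with bootstrap resampling of each fold's training part) can be sketched as follows. This is a simplified reading of the idea, not the authors' kCV-B algorithm itself:

```python
import numpy as np

def kcv_bootstrap_splits(n_samples: int, k: int = 5, n_boot: int = 10, seed: int = 0):
    """Yield (train_idx, val_idx) pairs: for each of the k CV folds,
    draw n_boot bootstrap resamples (with replacement) of the fold's training part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)                 # disjoint validation folds
    for i in range(k):
        val_idx = folds[i]
        train_pool = np.concatenate([folds[j] for j in range(k) if j != i])
        for _ in range(n_boot):
            # Bootstrap: resample the training pool with replacement,
            # creating a "new" training set for each replicate.
            train_idx = rng.choice(train_pool, size=train_pool.size, replace=True)
            yield train_idx, val_idx
```

Each validation fold stays fixed across its replicates, so performance metrics (e.g., mean square error) can be aggregated per fold to estimate their sampling distribution.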
7

Nurunnabi, A., F. N. Teferle, J. Li, R. C. Lindenbergh, and A. Hunegnaw. "AN EFFICIENT DEEP LEARNING APPROACH FOR GROUND POINT FILTERING IN AERIAL LASER SCANNING POINT CLOUDS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2021 (June 28, 2021): 31–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2021-31-2021.

Full text
Abstract:
Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades, and recently Deep Learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local-feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations imposed by point clouds' irregular data structure and varying data density, which are the biggest challenges for using convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
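As a point of contrast with the DL classifier summarized in this abstract, ground filtering is often approximated with simple local height features. The grid-minimum heuristic below is a classic non-learning baseline sketch (the cell size and tolerance are hypothetical parameters, and this is not the paper's algorithm):

```python
import numpy as np

def grid_min_ground_filter(points: np.ndarray, cell: float = 1.0, tol: float = 0.3) -> np.ndarray:
    """Label each point of an (N, 3) array as ground (True) when its z
    is within `tol` of the minimum z of its (cell x cell) planimetric grid cell."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)  # 2D cell index per point
    cell_min: dict[tuple, float] = {}
    for k, z in zip(map(tuple, keys), points[:, 2]):        # minimum elevation per cell
        if k not in cell_min or z < cell_min[k]:
            cell_min[k] = z
    mins = np.array([cell_min[tuple(k)] for k in keys])
    return points[:, 2] - mins <= tol                       # low points are labelled ground
```

Heuristics like this struggle exactly where the abstract says the DL method succeeds: steep slopes and height discontinuities, where a single per-cell minimum is a poor ground reference.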
8

Rehman, Amir, Muhammad Azhar Iqbal, Huanlai Xing, and Irfan Ahmed. "COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review". Applied Sciences 11, no. 8 (April 10, 2021): 3414. http://dx.doi.org/10.3390/app11083414.

Full text
Abstract:
COVID-19 has infected 223 countries and caused 2.8 million deaths worldwide (at the time of writing this article), and the death rate is increasing continuously. Early diagnosis of COVID patients is a critical challenge for medical practitioners, governments, organizations, and countries seeking to overcome the rapid spread of the deadly virus in any geographical area. In this situation, previous epidemic experience with Machine Learning (ML) and Deep Learning (DL) techniques has encouraged researchers to play a significant role in detecting COVID-19, and the growing scope of ML/DL methodologies in the medical domain likewise supports their role in COVID-19 detection. This systematic review presents the ML and DL techniques used in this era to predict, diagnose, classify, and detect the coronavirus. The data were retrieved from three prevalent full-text archives, i.e., Science Direct, Web of Science, and PubMed, using a search-code strategy on 16 March 2021. Using professional assessment, among 961 articles retrieved by the initial query, only 40 articles focusing on ML/DL-based COVID-19 detection schemes were selected. Findings are presented as a country-wise distribution of publications, article frequency, data collection methods, analyzed datasets, sample sizes, and applied ML/DL techniques. Precisely, this study reveals that the accuracy of ML/DL techniques for detecting COVID-19 lay between 80% and 100%. An RT-PCR-based model with a Support Vector Machine (SVM) exhibited the lowest accuracy (80%), whereas an X-ray-based model achieved the highest accuracy (99.7%) using a deep convolutional neural network. However, current studies have shown that an anal swab test is highly accurate in detecting the virus. Moreover, this review addresses the limitations of COVID-19 detection, along with a detailed discussion of the prevailing challenges and future research directions, which eventually highlights outstanding issues.
9

Caglar, Ozgur, Erdem Karadeniz, Irem Ates, Sevilay Ozmen, and Mehmet Dumlu Aydin. "Vagosympathetic imbalance induced thyroiditis following subarachnoid hemorrhage: a preliminary study". Journal of Research in Clinical Medicine 8, no. 1 (May 6, 2020): 17. http://dx.doi.org/10.34172/jrcm.2020.017.

Full text
Abstract:
Introduction: This preliminary study evaluates the possible responsibility of ischemia-induced vagosympathetic imbalances following subarachnoid hemorrhage (SAH) for the onset of autoimmune thyroiditis. Methods: Twenty-two rabbits were chosen from our former experimental animals, five of which were picked from healthy rabbits as a control group (nG-I=5). A sham group (nG-II=5) and animals with thyroid pathologies (nG-III=12) were also included after a one-month-long experimental SAH follow-up. Thyroid hormone levels were measured weekly, and the animals were decapitated. Thyroid glands, superior cervical ganglia, and intracranial parts of vagal nerve sections obtained from our tissue archives were reexamined with routine/immunohistochemical methods. Thyroid hormone levels, hormone-filled total follicle volumes (TFVs) per cubic millimeter, degenerated neuron density (DND) of vagal nuclei, and neuron density of superior cervical ganglia were measured and statistically compared. Results: The mean neuron density of both superior cervical ganglia was estimated as 8230±983/mm3 in study-group animals with severe thyroiditis, 7496±787/mm3 in the sham group, and 6416±510/mm3 in animals with normal thyroid glands. In the control group (group I), T3 was 107±11 μg/dL, T4 was 1.43±0.32 μg/dL, and TSH was <0.5, while mean TFV was 43%/mm3 and DND of vagal nuclei was 3±1/mm3. In the sham group (group II), T3 was 96±11 μg/dL, T4 was 1.21±0.9 μg/dL, and TSH was >0.5, while TFV was 38%/mm3 and DND of vagal nuclei was 13±4. In the study group, T3 was 54±8 μg/dL, T4 was 1.07±0.3 μg/dL, and TSH was >0.5, while TFV was 27%/mm3 and DND of vagal nuclei was 42±9/mm3. Conclusion: Sympathovagal imbalance characterized by relative sympathetic hyperactivity based on vagal insufficiency should be considered as a new causative agent for hypothyroidism.
10

Andrade, R. B., G. A. O. P. Costa, G. L. A. Mota, M. X. Ortega, R. Q. Feitosa, P. J. Soto, and C. Heipke. "EVALUATION OF SEMANTIC SEGMENTATION METHODS FOR DEFORESTATION DETECTION IN THE AMAZON". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 22, 2020): 1497–505. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1497-2020.

Full text
Abstract:
Abstract. Deforestation is a wide-reaching problem, responsible for serious environmental issues, such as biodiversity loss and global climate change. Containing approximately ten percent of all biomass on the planet and home to one tenth of the known species, the Amazon biome has faced important deforestation pressure in the last decades. Devising efficient deforestation detection methods is, therefore, key to combat illegal deforestation and to aid in the conception of public policies directed to promote sustainable development in the Amazon. In this work, we implement and evaluate a deforestation detection approach which is based on a Fully Convolutional, Deep Learning (DL) model: the DeepLabv3+. We compare the results obtained with the devised approach to those obtained with previously proposed DL-based methods (Early Fusion and Siamese Convolutional Network) using Landsat OLI-8 images acquired at different dates, covering a region of the Amazon forest. In order to evaluate the sensitivity of the methods to the amount of training data, we also evaluate them using varying training sample set sizes. The results show that all tested variants of the proposed method significantly outperform the other DL-based methods in terms of overall accuracy and F1-score. The gains in performance were even more substantial when limited amounts of samples were used in training the evaluated methods.

Theses on the topic "DL. Archives"

1

Cubells, Puertes María José. « La classificació de la documentació parlamentària de Les Corts Valencianes : la funció de control al Consell ». Thesis, Universitat de València, 2008. http://eprints.rclis.org/14243/1/1_tesi.pdf.

Full text
Abstract:
The aim of this thesis is to develop a framework for the classification of documents, as a basic tool for the records management system of the Valencian Parliament, with a view to facilitating administrative management, decision making, and the retrieval of information. The methodology comprises two stages: a review of the history of the institution, its policies, procedures, and functions, and, secondly, an updated bibliographical review of the doctrine on parliamentary control of government and of archival theories of document classification. The result is a functional table for document classification which is homogeneous across the entire institution and allows adequate control, protection, and records management, the standardization and simplification of procedures, and the optimal exploitation of the Chamber's information. In conclusion, the system designed has been shown to be valid for the institution, achieving the expected goals in affording access to all users, including managers, politicians, and researchers.
2

Freitas, Cristiana. "A autenticidade dos objectos digitais". Thesis, 2011. http://eprints.rclis.org/16222/3/autenticidade_digitais.pdf.

Full text
Abstract:
In the current context of the Information Society, and taking into account the central role of information as a dynamic motor of organizations, one of the biggest challenges facing information professionals, especially those who work in archives, is the creation, maintenance, and long-term preservation of authentic electronic records. Thus, the analysis of the dynamic, organic, and functional context of organizations, together with content analysis (a fundamental methodological operation in the construction of scientific knowledge about information), must be based on the principles, concepts, and methods of Archival Science, a discipline within the scientific area of Information Science, and of Contemporary Diplomatics. This provides the organization with an understanding of the administrative, legal, social, and functional contexts in which it operates, makes it possible to identify the main factors influencing the need to create and maintain records, and determines the requirements that ensure the authenticity, trustworthiness, integrity, significance, and usability of the information produced, as well as the basis for its long-term preservation. To this end, it becomes necessary to emphasize the definition of authenticity in order to reach a consensual definition of an authentic digital object and to identify its essential elements. Considering that archives have the mission of ensuring the preservation of, and providing access to, electronic records of continuing value in their context of creation over time, the early intervention of archival professionals in the design and implementation of archival systems is crucial to ensure that all electronic records with secondary value produced by a system are preserved as authentic, reliable, understandable, and usable.
Thus, in the light of a new post-custodial, dynamic, informational, and scientific paradigm, information professionals today have a new role as information managers, acting in any organic context that produces an informational flow.
3

Steffenhagen, Björn. "Erstellung eines Web 2.0-Konzeptes in Archiven am Beispiel des Historischen Archivs des Ostdeutschen Sparkassenverbandes". Thesis, 2015. http://eprints.rclis.org/39789/1/Bachelorarbeit_Steffenhagen.pdf.

Full text
Abstract:
The use of social media has become an integral part of archival work. Many archives already have a Facebook profile or maintain their own blog. However, only a few exploit the full potential of Web 2.0, and a concept for the strategic use of social media applications has so far been a desideratum in archival science. This thesis develops such a Web 2.0 concept. Its aim is to answer the question of how archives can determine which Web 2.0 applications are best suited to their work and how these can be implemented effectively, illustrated using the example of the historical archive of the Ostdeutscher Sparkassenverband. At the beginning, the term Web 2.0 is clarified and the most important applications are presented. This is followed by a survey of the current state of research on social media applications in archives and the presentation of the practical example. The assessment of the archive's current social media presence is followed by the definition of objectives. For this purpose, the specific strengths and weaknesses of this business archive are highlighted with the help of various business management instruments, and the environmental factors and stakeholders the archive has to consider are identified. The STEP, SWOT, and stakeholder analyses form the strategic basis for the discussion of the existing weblog, the introduction of a Facebook profile, the design of presences on the services SlideShare and Open Gallery, and the establishment of a wiki. The practice-oriented part of the work explains the introduction and operation of the selected Web 2.0 applications. It also shows which marketing strategies are best suited to particular platforms and which personnel resources should be planned for them. The final part of the concept is the elaboration of success measurement: quantitative key figures are collected, and the objectives are then reviewed. The conclusion of this elaboration is that archives should not use social media indiscriminately; rather, they should analyze their own situation in order to develop individual strategies for which applications are suitable for addressing specific users.
4

Hipfinger, Andrea. "Katalogisierung unter den Gesichtspunkten der BenutzerInnenfreundlichkeit und Kompatibilität: am Beispiel des Österreichischen Literaturarchivs der Österreichischen Nationalbibliothek und der Handschriftensammlung der Wiener Stadt- und Landesbibliothek im Vergleich mit einem internationalen Beispiel (Manuscript Collections der British Library)". Thesis, 2005. http://eprints.rclis.org/6479/1/hipfinger_diplomarbeit.pdf.

Full text
Abstract:
The study aims to clarify whether the cataloguing methods used by the Handschriftensammlung of the Wiener Stadt- und Landesbibliothek and the Österreichisches Literaturarchiv of the Österreichische Nationalbibliothek fulfil the needs of users, and whether the information provided by the databases is sufficient. Last but not least, the question of using standards is very important, because without them a library cannot join cooperative projects such as MALVINE or LEAF. The research took a combined approach that included both quantitative and qualitative methods: interviews were conducted to find out what users want, and the information provided by the databases was analysed. The results of the interviews show that users are mostly satisfied with the information, although some aspects could certainly be improved. The conclusion was drawn that there are a number of benefits for libraries that take part in such cooperations. Recommendations for further action have been made in the form of a catalogue of measures describing how the databases should be changed. Furthermore, the fact that users consider cooperations important should receive attention.
5

Martínez-Carmona, María. "Experto Universitario en Información y Documentación, Archivos y Protección de Datos". Thesis, 2018. http://eprints.rclis.org/39286/1/Martinez_Carmona_Maria_TFG_Final.pdf.

Full text
Abstract:
This research analyses key aspects of academic management and the current regulations needed for the development and implementation of a special certificate entitled University Expert in Information and Documentation, Archives and Data Protection at the University of Las Palmas de Gran Canaria (ULPGC). We study a number of threats and opportunities that justify the creation of this certificate. First, it is not included in the official studies of the ULPGC. Secondly, it would meet the need for new professionals that the labour market demands in these three specialties. Thirdly, it updates the training of university graduates to face technological change and the evolution of the socio-economic environment. Next, we develop a syllabus consisting of five common subjects in the first year and six subjects for each specialty in the second year, ending with an internship and a final degree dissertation. In conclusion, it is a special certificate taught online, which includes three formative specializations: Information and Documentation; Archives and Electronic Administration; and Data Protection. In addition, we specify the competences and skills that a person who completes this academic programme must fulfil. Finally, the procedure and economic management of this proposal are examined. Tutor: Joan Isidre Badell Guijarro. Responsible professors: Alexandre López-Borrull, Núria Ferran-Ferrer, and Josep Vives-Gràcia.
6

Adamo, Sara. "Il Fondo Benussi conservato presso la Biblioteca Centrale dell'Università degli Studi di Milano Bicocca: inventariazione ed implementazione di una digital library". Thesis, 2005. http://eprints.rclis.org/6017/1/tesi_Sara.pdf.

Full text
7

Brechmacher, Janna. "Informationseinrichtungen im Bereich Film. Ein Überblick". Thesis, 2002. http://eprints.rclis.org/8427/1/DA_brechmacher.pdf.

Full text
Abstract:
This thesis gives an overview of German film libraries, film archives, and other libraries with film-related holdings. Questionnaires were sent out to different institutions that deal with film in a broader sense. The methodological approach of this study, as well as the resulting overview of film information stock in Germany, is documented in detail. A number of non-German film institutions were also surveyed and described in order to compare the handling of information stock in Germany and abroad. The presentation of these institutions is followed by an attempt to systematize the different types of institutions and their various focal points.
8

Rubio Villaró, Cristian. "Archivos literarios en Barcelona y su área metropolitana: Guía de fondos personales de escritores en centros públicos". Thesis, 2013. http://eprints.rclis.org/22517/1/%28ARCHIVOS%20LITERARIOS%20EN%20BARCELONA%20Y%20SU%20%C3%81REA%20METROPOLITANA%29.pdf.

Full text
Abstract:
The first part of the work, which is divided into 10 sections, covers the following:
- defining writers' personal archives
- summarizing their history
- carrying out a brief conceptual analysis of the terms "literary archives" and "writers' personal archives"
- explaining the appreciation in value of these personal archives over the years
- describing the prevailing legal framework
- presenting the leading classification tables
- discussing the debates surrounding the final destination of personal archives (archives or libraries)
- defending the usefulness of a guide of this kind as an aid to researchers
- detailing the methodology applied to produce this work
The second part is devoted strictly to the guide itself and to the files produced from the information gathered.
9

Alonso, Lucia, Luis Noble, and Ignacio Saraiva. "Discusiones epistemológicas en torno a la cientificidad de la Archivología". Thesis, 2015. http://eprints.rclis.org/32379/1/Discusiones%20epistmol%C3%B3gicas%20en%20torno%20a%20la%20cientificidad%20de%20la%20Archivolog%C3%ADa_%20Alonso_Noble_Saraiva.pdf.

Full text
Abstract:
This research is based on the premise that the strategies advanced in defence of the scientific status of archival science are poorly constructed. The discipline will not attain the status of a science through strategies of establishing paradigms, an object, a method, a purpose, or a terminological consensus within the field; its scientificity must instead be grounded in the expansion of its theoretical and methodological elements. In its aspiration to clarify its scientific claims, the archival discipline is immersed in an ongoing epistemological debate, within which attempts have been made to apply the classical models of 20th-century philosophy of science, such as those of the Vienna Circle, Feyerabend, Lakatos, Popper, Kuhn, etc. Among these studies, the efforts made by various authors to consolidate the discipline as a science on the basis of Thomas S. Kuhn's model stand out; these positions, developed within the field, are discussed in the present investigation. The discussion thus centres on the question of whether establishing paradigms, an object, a method, a purpose, and a terminological consensus for the archival discipline is essential to clarify its scientific claims. To this end, the work carries out a brief historical review of the evolution of the field and a review of the literature on the different existing positions within the discipline regarding the grounds of its scientific status, identifying three main lines of argument: the identification of the discipline's paradigms; the establishment of its method, object, and purpose; and the terminology within the discipline. From the issues raised above, it follows that the majority positions within the archival discipline subscribe to the idea that it is imperative to establish some of the three aforementioned lines of argument in order to consolidate it as a science.
The epistemological analysis of this work therefore draws on some of the arguments provided by Thomas S. Kuhn in order to identify and elucidate the problems that arise within the archival discipline when it advances its claims to scientificity. Finally, it is considered that the establishment of paradigms, an object, a method, and a purpose, as well as a terminological consensus within the discipline, is not necessary to qualify it as scientific. At the same time, it is considered mistaken to resort to philosophical models to clarify the scientific status of the discipline, since the sciences that have consolidated themselves have not done so through these strategies, but through the establishment of a successful tradition of problem solving.
10

Mohapatra, Niranjan, and Projes Roy. « Digital Archival Initiative of Self Financed Institutions in Greater Noida : A Study ». Thesis, 2015. http://eprints.rclis.org/43671/1/Digital%20Archival%20Initiative%20of%20Self-Finance%20Institutions%20in%20Greater%20Noida.pdf.

Full text
Abstract:
The invention of new technologies has fulfilled many aspirations of human life. With the advent of Information Technology (IT), the situation began to change: printed information started to be digitized and made available through computer devices and networks. The pace of globalization and the growth of new technologies, such as the internet, have been changing teaching and learning methods in schools, colleges, universities, and research institutes. Digitization is a labour-intensive process by which physical or manual records are converted into digital form, and data-digitization services offer a very good opportunity for India owing to its relatively lower costs and available technical skills. In the Greater Noida institutional area, only 20% of libraries use digital library software; the remaining 80% use only library automation software. Of those using digital library software, 60% have a digital library under development, while 40% have an operational one. Their digital archives were initiated by library staff with the support of IT staff. A few libraries relied on resource-sharing cooperation, and none engaged an outside expert for the initiative. Almost 85% of users have already benefited from the digital resources. The study found that digital resources are more useful than printed resources at the different stages of research, and that most users want their libraries to be fully digitized.

Book chapters on the topic « DL. Archives »

1

McCarthy, Cavan. « Digital Library Structure and Software ». In Software Applications, 1742–49. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-060-8.ch102.

Full text
Abstract:
Digital libraries (DL) can be characterized as the “high end” of the Internet, digital systems which offer significant quantities of organized, selected materials of the type traditionally found in libraries, such as books, journal articles, photographs and similar documents (Schwartz, 2000). They normally offer quality resources based on the collections of well-known institutions, such as major libraries, archives, historical and cultural associations (Love & Feather, 1998). The field of digital libraries is now firmly established as an area of study, with textbooks (Arms, 2000; Chowdhury & Chowdhury, 2003; Lesk, 1997); electronic journals from the US (D-Lib Magazine: http://www.dlib.org/) and the UK (Ariadne: http://www.ariadne.ac.uk/); even encyclopedia articles (McCarthy, 2004).
2

McCarthy, Cavan. « Digital Library Structure and Software ». In Encyclopedia of Developing Regional Communities with Information and Communication Technology, 193–98. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-575-7.ch034.

Full text
Abstract:
Digital libraries (DL) can be characterized as the “high end” of the Internet, digital systems which offer significant quantities of organized, selected materials of the type traditionally found in libraries, such as books, journal articles, photographs and similar documents (Schwartz, 2000). They normally offer quality resources based on the collections of well-known institutions, such as major libraries, archives, historical and cultural associations (Love & Feather, 1998). The field of digital libraries is now firmly established as an area of study, with textbooks (Arms, 2000; Chowdhury & Chowdhury, 2003; Lesk, 1997); electronic journals from the US (D-Lib Magazine: http://www.dlib.org/) and the UK (Ariadne: http://www.ariadne.ac.uk/); even encyclopedia articles (McCarthy, 2004).
3

Sukheja, Deepak, T. Sunil Kumar, B. V. Kiranmayee, Malaya Nayak and Durgesh Mishra. « Prediction of Skin lesions (Melanoma) using Convolutional Neural Networks ». In Emerging Computational Approaches in Telehealth and Telemedicine : A Look at The Post-COVID-19 Landscape, 43–69. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/9789815079272122010005.

Full text
Abstract:
Nowadays, computational technology is given great importance in the health care system, and this chapter considers skin cancer (melanoma). Skin lesions caused by exposure to UV rays are difficult for doctors to diagnose in the initial stages because of the low contrast of the affected portion of the body. Early-detection campaigns are expected to diminish the incidence of new melanoma cases by reducing the population's exposure to sunlight. While such campaigns have typically been aimed at the general public, regardless of individual risk of disease, most specialists recommend that melanoma surveillance be restricted to patients at high risk. The challenge for specialists is how to characterize a patient's actual melanoma risk, since none of the guidelines in use across communities offers a validated algorithm by which that risk can be assessed. The main objective of this chapter is to describe the use of a deep learning (DL) approach to predict melanoma at an early stage. The implemented approach uses a novel hair-removal algorithm for preprocessing; the k-means clustering technique and a CNN architecture are then used to differentiate between normal and abnormal skin lesions. The approach is tested on the ISIC (International Skin Imaging Collaboration) archive, which contains different images of melanoma and non-melanoma lesions.
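The segmentation step that the abstract describes (k-means clustering to separate the lesion from the surrounding skin before a CNN classifies the region) can be sketched in a few lines. This is only an illustration of the general technique, not the authors' pipeline: `kmeans_segment` is a hypothetical helper operating on flattened pixel intensities, and the chapter's hair-removal preprocessing is not reproduced.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20, seed=0):
    """Cluster pixel values into k groups with plain k-means.

    For a well-contrasted lesion image, k=2 roughly separates dark lesion
    pixels from lighter skin pixels.
    """
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    # initialise centers from k distinct pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every pixel to its nearest cluster center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

In a full pipeline, the binary label map would be reshaped back to image dimensions and the lesion region cropped out as CNN input.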

Conference papers on the topic « DL. Archives »

1

Mocko, Gregory M., David W. Rosen and Farrokh Mistree. « Decision Retrieval and Storage Enabled Through Description Logic ». In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35644.

Full text
Abstract:
The problem addressed in the paper is how to represent the knowledge associated with design decision models to enable storage, retrieval, and reuse. The paper concerns the representations and reasoning mechanisms needed to construct decision models of relevance to engineered product development. Specifically, AL[E][N] description logic is proposed as a formalism for modeling engineering knowledge and for enabling retrieval and reuse of archived models. Classification hierarchies are constructed using subsumption in DL. Retrieval of archived models is supported using subsumption and query concepts. In our methodology, design decision models are constructed using the base vocabulary and reuse is supported through reasoning and retrieval capabilities. Application of the knowledge representation for the design of a cantilever beam is demonstrated.
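The retrieval mechanism the abstract outlines (archived decision models are returned when a query concept subsumes them) can be illustrated with a deliberately simplified model. Here a concept is just a set of atomic requirements and subsumption is set inclusion; the paper's AL[E][N] logic additionally handles role and number restrictions, which this sketch omits, and the archive entries below are hypothetical.

```python
def subsumes(general, specific):
    """A concept subsumes another when every requirement of the more
    general concept is also carried by the more specific one; set
    inclusion stands in here for full description-logic subsumption."""
    return set(general) <= set(specific)

def retrieve(archive, query):
    """Return the archived decision models whose descriptions fall
    under the query concept, mirroring subsumption-based retrieval."""
    return [name for name, concept in archive.items()
            if subsumes(query, concept)]

# Hypothetical archive of decision-model concept descriptions.
archive = {
    "cantilever_beam": {"decision", "structural", "beam"},
    "gear_pair":       {"decision", "mechanical", "gear"},
}
```

A query for all structural decision models, `retrieve(archive, {"decision", "structural"})`, returns only the beam model, while the more general query `{"decision"}` returns both, which is the classification-hierarchy behaviour the paper builds with subsumption.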