Academic literature on the topic "Automatic data structuring"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Automatic data structuring".

Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automatic data structuring"

1

Yermukhanbetova, Sharbanu, and Gulnara Bektemyssova. "AUTOMATIC MERGING AND STRUCTURING OF DATA FROM DIFFERENT CATALOGS". JP Journal of Heat and Mass Transfer, Special (June 4, 2020): 7–11. http://dx.doi.org/10.17654/hmsi120007.

2

Xiong, Wei, Chung-Mong Lee, and Rui-Hua Ma. "Automatic video data structuring through shot partitioning and key-frame computing". Machine Vision and Applications 10, no. 2 (June 1, 1997): 51–65. http://dx.doi.org/10.1007/s001380050059.

3

Pryhodinuk, V. V., Yu A. Tymchenko, M. V. Nadutenko, and A. Yu Gordieiev. "Automated data processing for evaluation the hydrophysical state of the Black Sea water areas". Oceanographic Journal (Problems, methods and facilities for researches of the World Ocean), no. 2(13) (April 22, 2020): 114–29. http://dx.doi.org/10.37629/2709-3972.2(13).2020.114-129.

Abstract
The article explores the issues of collecting, structuring, and displaying oceanographic information from spatially distributed sources. The aim of the work was to develop services for an intelligent information system (IIS) designed to assess the hydrophysical state of the Black Sea waters by creating a library of ontological descriptions of the processing and display of information in the IIS software environment. The article describes approaches to creating an automated data processing system for assessing the hydrophysical state of the Black Sea using the method of recursive reduction. Information about the main functions of the IIS for displaying structured data on the hydrophysical situation is presented. To solve this problem, a set of cognitive services built on cognitive IT platforms was applied for the first time to support the automatic and automated collection of oceanographic data, their structuring, and their presentation to the user in an interactive form. The results of the work can be used in the development of an analytical system for automating scientific and applied problems associated with the use of operational oceanographic data.
4

Yu, Haiyang, Shuai Yang, Zhihai Wu, and Xiaolei Ma. "Vehicle trajectory reconstruction from automatic license plate reader data". International Journal of Distributed Sensor Networks 14, no. 2 (February 2018): 1550147718755637. http://dx.doi.org/10.1177/1550147718755637.

Abstract
Using perception data to mine vehicle travel information has been a popular area of study. To learn the vehicle travel characteristics in the city of Ruian, we developed a methodology for structuring travelers' complete trip information: a travel-time threshold recognizes single trips in automatic license plate reader data, and a trajectory reconstruction model combines the technique for order preference by similarity to an ideal solution (TOPSIS) with depth-first search to handle vehicles' incomplete records. To increase the practicability of the model, we introduced two speed indicators derived from actual data and verified the model's reliability through experiments. Our results show that the method is affected by the number of missing records. The model and results of this work will allow us to further study vehicles' commuting characteristics and explore hot trajectories.
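As a rough illustration of the trip-recognition step described in this abstract, the sketch below splits license plate reads into trips using a fixed travel-time threshold. The record format and the 30-minute value are assumptions made for illustration, not details taken from the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical record format: (plate, reader_id, timestamp).
# The 30-minute threshold is an illustrative assumption, not the paper's value.
GAP = timedelta(minutes=30)

def split_trips(records):
    """Group plate reads per vehicle, then cut a new trip wherever the
    time gap between consecutive reads exceeds the threshold."""
    by_plate = defaultdict(list)
    for plate, reader, ts in records:
        by_plate[plate].append((ts, reader))
    trips = defaultdict(list)
    for plate, reads in by_plate.items():
        reads.sort()
        trip = [reads[0]]
        for prev, cur in zip(reads, reads[1:]):
            if cur[0] - prev[0] > GAP:
                trips[plate].append(trip)
                trip = []
            trip.append(cur)
        trips[plate].append(trip)
    return trips

reads = [("A123", "R1", datetime(2018, 2, 1, 8, 0)),
         ("A123", "R2", datetime(2018, 2, 1, 8, 10)),
         ("A123", "R3", datetime(2018, 2, 1, 17, 30))]
print(split_trips(reads))  # two trips: morning and evening
```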
5

Dovgal, Sofiia, Egor Mukhaev, Marat Sabitov, and Lyubov' Adamcevich. "Development of a web service for processing data from electronic images of urban plans of land plots". Construction and Architecture 11, no. 1 (March 24, 2023): 17. http://dx.doi.org/10.29039/2308-0191-2022-11-1-17-17.

Abstract
The article describes the content of urban planning plans for land plots (UPPLP), their purpose, and the relevance of developing a service for automatic recognition of data from an electronic image of the document. Existing services for automatic document processing are analyzed, and a technical solution developed by the authors is presented in the form of a web service for parsing and structuring electronic images of UPPLP. The structure and operation of the web service are described, along with the data conversion algorithm implemented in the solution.
6

Kopyrin, Andrey Sergeevich, and Irina Leonidovna Makarova. "Algorithm for preprocessing and unification of time series based on machine learning for data structuring". Программные системы и вычислительные методы, no. 3 (March 2020): 40–50. http://dx.doi.org/10.7256/2454-0714.2020.3.33958.

Abstract
The subject of the research is the process of collecting and preparing data from heterogeneous sources. Economic information is heterogeneous and semi-structured or unstructured in nature. Due to the heterogeneity of the primary documents, as well as the human factor, the initial statistical data may contain a large amount of noise, as well as records whose automatic processing may be very difficult. This makes preprocessing dynamic input data an important precondition for discovering meaningful patterns and domain knowledge, and it makes the research topic relevant. Data preprocessing is a series of unique tasks that have led to the emergence of various algorithms and heuristic methods for solving tasks such as merging, cleanup, and identification of variables. In this work, a preprocessing algorithm is formulated that brings together into a single database, and structures, information on time series from different sources. The key modification of the preprocessing method proposed by the authors is a technology for automated data integration. The proposed technology combines the construction of fuzzy time series with machine lexical comparison on a thesaurus network, together with a universal database built using the MIVAR concept. The preprocessing algorithm forms a single data model with the ability to transform the periodicity and semantics of the data set and to integrate data from various sources into a single information bank.
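As a loose illustration of the periodicity unification the abstract describes, the sketch below resamples two series of different frequencies to a common monthly grid and merges them into one table. The column names, frequencies, and use of pandas are assumptions for illustration; the authors' algorithm additionally involves fuzzy time series and lexical matching.

```python
import pandas as pd

# A minimal sketch, assuming two sources with different periodicities;
# the monthly target frequency is an illustrative choice.
daily = pd.Series(range(90),
                  index=pd.date_range("2020-01-01", periods=90, freq="D"),
                  name="source_a")
weekly = pd.Series(range(13),
                   index=pd.date_range("2020-01-01", periods=13, freq="W"),
                   name="source_b")

# Transform both series to a common monthly periodicity, then merge them
# into a single structured table, as a stand-in for the "information bank".
unified = pd.concat([daily.resample("MS").mean(),
                     weekly.resample("MS").mean()], axis=1)
print(unified)
```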
7

Willot, L., D. Vodislav, L. De Luca, and V. Gouet-Brunet. "AUTOMATIC STRUCTURING OF PHOTOGRAPHIC COLLECTIONS FOR SPATIO-TEMPORAL MONITORING OF RESTORATION SITES: PROBLEM STATEMENT AND CHALLENGES". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 521–28. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-521-2022.

Abstract
Over the last decade, a large number of digital documentation projects have demonstrated the potential of image-based modelling of heritage objects in the context of documentation, conservation, and restoration. The inclusion of these emerging methods in the daily monitoring of the activities of a heritage restoration site (a context in which hundreds of photographs per day can be acquired by multiple actors, in accordance with several observation and analysis needs) raises new questions at the intersection of big data management, analysis, semantic enrichment, and, more generally, automatic structuring of this data. In this article we propose a data model developed around these questions and identify the main challenges in structuring massive collections of photographs, through a review of the available literature on similarity metrics used to organise pictures based on their content or metadata. This work is realized in the context of the restoration site of the Notre-Dame de Paris cathedral, which serves as the main case study.
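A minimal sketch of the underlying idea of organizing pictures by content similarity, assuming some descriptor per image; random vectors stand in for real descriptors here, and the threshold and graph construction are illustrative rather than the paper's model.

```python
import numpy as np

# Random unit vectors act as stand-ins for content descriptors
# (CNN features, local descriptors, metadata embeddings, ...).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(6, 128))          # 6 photos, 128-d descriptors
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

similarity = embeddings @ embeddings.T          # cosine similarity matrix
threshold = 0.1                                  # illustrative cut-off
edges = [(i, j) for i in range(len(similarity))
         for j in range(i + 1, len(similarity))
         if similarity[i, j] > threshold]
print(edges)  # visual-similarity links that would structure the collection
```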
8

Galauskis, Maris, and Arturs Ardavs. "The Process of Data Validation and Formatting for an Event-Based Vision Dataset in Agricultural Environments". Applied Computer Systems 26, no. 2 (December 1, 2021): 173–77. http://dx.doi.org/10.2478/acss-2021-0021.

Abstract
In this paper, we describe our team's data processing practice for an event-based camera dataset. In addition to the event-based camera data, the Agri-EBV dataset contains data from LIDAR, RGB and depth cameras, and temperature, moisture, and atmospheric pressure sensors. We describe data transfer from the platform, automatic and manual validation of data quality, conversion to multiple formats, and structuring of the final data. Accurate time-offset estimation between sensors is achieved using IMU data generated by purposeful movements of the sensor platform. We also outline the partitioning of the data and the time alignment calculation during post-processing.
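The time-offset estimation mentioned above can be illustrated with a standard cross-correlation of two motion signals. The synthetic signals, sample rate, and use of NumPy below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# A minimal sketch, assuming both sensors observed the same purposeful
# motion, sampled at a common rate; the signals here are synthetic.
fs = 100.0                              # sample rate in Hz (assumption)
t = np.arange(0, 5, 1 / fs)
motion = np.sin(2 * np.pi * 1.3 * t) * np.exp(-t)
imu = motion
camera = np.roll(motion, 37)            # second sensor trails by 37 samples

# Cross-correlate the zero-mean signals; the lag of the peak gives the offset.
corr = np.correlate(imu - imu.mean(), camera - camera.mean(), mode="full")
lag = corr.argmax() - (len(camera) - 1)
print(f"estimated offset: {lag / fs:.2f} s")    # ~ -0.37 s (camera trails IMU)
```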
9

Kang, Tian, Shaodian Zhang, Youlan Tang, Gregory W. Hruby, Alexander Rusanov, Noémie Elhadad, and Chunhua Weng. "EliIE: An open-source information extraction system for clinical trial eligibility criteria". Journal of the American Medical Informatics Association 24, no. 6 (April 1, 2017): 1062–71. http://dx.doi.org/10.1093/jamia/ocx019.

Abstract
Objective: To develop an open-source information extraction system called Eligibility Criteria Information Extraction (EliIE) for parsing and formalizing free-text clinical research eligibility criteria (EC) following the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) version 5.0. Materials and Methods: EliIE parses EC in 4 steps: (1) clinical entity and attribute recognition, (2) negation detection, (3) relation extraction, and (4) concept normalization and output structuring. Informaticians and domain experts were recruited to design an annotation guideline and generate a training corpus of annotated EC for 230 Alzheimer's clinical trials, which were represented as queries against the OMOP CDM and included 8008 entities, 3550 attributes, and 3529 relations. A sequence labeling-based method was developed for automatic entity and attribute recognition. Negation detection was supported by NegEx and a set of predefined rules. Relation extraction was achieved by a support vector machine classifier. We further performed terminology-based concept normalization and output structuring. Results: In task-specific evaluations, the best F1 score for entity recognition was 0.79, and for relation extraction 0.89. The accuracy of negation detection was 0.94. The overall accuracy for query formalization was 0.71 in an end-to-end evaluation. Conclusions: This study presents EliIE, an OMOP CDM-based information extraction system for automatic structuring and formalization of free-text EC. According to our evaluation, the machine learning-based EliIE outperforms existing systems and shows promise for further improvement.
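As a toy illustration of step (3), relation extraction with a support vector machine, the sketch below classifies the relation suggested by the words between two recognized entities. The features and labels are invented for illustration; EliIE's actual feature set and corpus are far richer.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Tiny invented training set: the words between two entities and the
# relation they suggest. Real systems use many more lexical and
# syntactic features.
between_words = ["diagnosed with", "history of", "greater than", "less than"]
relations = ["has_condition", "has_condition", "has_value", "has_value"]

clf = make_pipeline(CountVectorizer(), SVC(kernel="linear"))
clf.fit(between_words, relations)
print(clf.predict(["family history of"]))   # expected: ['has_condition']
```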
10

Koshman, Varvara, Anastasia Funkner, and Sergey Kovalchuk. "An Unsupervised Approach to Structuring and Analyzing Repetitive Semantic Structures in Free Text of Electronic Medical Records". Journal of Personalized Medicine 12, no. 1 (January 1, 2022): 25. http://dx.doi.org/10.3390/jpm12010025.

Abstract
Electronic medical records (EMRs) include much valuable data about patients, which is, however, unstructured. There is a lack of labeled medical text data in Russian, as well as of tools for automatic annotation, so it is hardly feasible today for researchers to use the text data of EMRs to train machine learning models in the biomedical domain. We present an unsupervised approach to medical data annotation. Syntactic trees are produced from the initial sentences using morphological and syntactical analyses. In the retrieved trees, similar subtrees are grouped using Node2Vec and Word2Vec and labeled using domain vocabularies and Wikidata categories. The use of Wikidata categories increased the fraction of labeled sentences 5.5 times compared with labeling based on domain vocabularies only. We show on a validation dataset that the proposed labeling method generates meaningful labels correctly for 92.7% of groups. Annotation with domain vocabularies and Wikidata categories covered more than 82% of the sentences of the corpus; extended with timestamp and event labels, coverage reached 97% of sentences. The method can be used to label EMRs in Russian automatically, and the proposed methodology can be applied to other languages that lack resources for automatic labeling and domain vocabularies.
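A minimal sketch of the grouping idea, assuming subtrees have already been flattened into token sequences; the toy corpus, vector sizes, and plain KMeans are illustrative stand-ins for the paper's Node2Vec/Word2Vec pipeline.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy subtrees, already flattened into token sequences (an assumption;
# the paper derives them from morphological and syntactical analyses).
subtrees = [["pain", "in", "chest"], ["pain", "in", "back"],
            ["prescribed", "aspirin"], ["prescribed", "ibuprofen"]]

model = Word2Vec(sentences=subtrees, vector_size=16, window=2,
                 min_count=1, seed=1)
# Represent each subtree by the mean of its token vectors.
vecs = np.array([np.mean([model.wv[w] for w in st], axis=0)
                 for st in subtrees])
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
print(groups)  # similar subtrees should fall into the same group for labeling
```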

Theses on the topic "Automatic data structuring"

1

Blettery, Emile. "Structuring heritage iconographic collections : from automatic interlinking to semi-automatic visual validation". Electronic Thesis or Diss., Université Gustave Eiffel, 2024. http://www.theses.fr/2024UEFL2001.

Abstract
This thesis explores automatic and semi-automatic structuring approaches for collections of iconographic heritage content. Exploiting such content could prove beneficial for numerous applications, from virtual tourism to increased access for both researchers and the general public: structuring the collections would increase their accessibility and their use. However, the inherent "in silo" organization of those collections, each with its unique organization system, hinders automatic structuring approaches and all subsequent applications. The computer vision community has proposed numerous automatic methods for indexing (and structuring) image collections at large scale. Since they exploit the visual aspect of the contents, they are not affected by the differences in the metadata structures that mainly organize heritage collections, and thus appear as a potential solution to the problem of linking unique data structures together. However, those methods are trained on large, recent datasets that do not reflect the visual diversity of iconographic heritage content. This thesis aims at evaluating and exploiting those automatic methods for structuring iconographic heritage content. To this end, it proposes three distinct contributions with the common goal of ensuring a certain level of interpretability for the methods that are evaluated and proposed. This interpretability is necessary to justify their efficiency on such complex data and to understand how to adapt them to new and different content. The first contribution is an evaluation of existing state-of-the-art automatic content-based image retrieval (CBIR) approaches when faced with the different types of data composing iconographic heritage. This evaluation focuses first on the image descriptors paramount for the retrieval step and second on re-ranking methods that re-order similar images after a first retrieval step based on another criterion. The most relevant approaches can then be selected for further use, while the non-relevant ones provide insights for the second contribution. The second contribution consists of three novel re-ranking methods exploiting more or less global spatial information to re-evaluate the relevance of visual similarity links created by the CBIR step. The first uses the first retrieved images to create an approximate 3D reconstruction of the scene, in which retrieved images are positioned to evaluate their coherence within the scene. The second simplifies the first while extending the classical geometric verification setting by performing geometric query expansion, that is, aggregating 2D geometric information from retrieved images to encode the scene's geometry more broadly without the costly step of 3D scene creation. The third exploits more global location information, at dataset level, to estimate the coherence of the visual similarity between images with regard to their spatial proximity. The third and final contribution is a framework for semi-automatic visual validation and manual correction of a collection's structuring. This framework exploits, on one side, the most suited automatic approaches evaluated or proposed earlier and, on the other side, a graph-based visualization platform. Several visual clues focus the expert's manual intervention on impactful areas.
We show that this guided semi-automatic approach has merits in terms of performance, as it resolves mistakes in the structuring that automatic methods cannot; these corrections are then diffused widely throughout the structure, improving it even more globally. We hope our work will provide first insights on automatically structuring heritage iconographic content with content-based approaches and encourage further research on guided semi-automatic structuring of image collections.
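The dataset-level re-ranking idea from the second contribution can be sketched as follows: retrieve candidates by visual similarity, then re-score them by mixing that similarity with spatial proximity to the query image. The synthetic descriptors, locations, and weighting below are assumptions for illustration, not the thesis's method.

```python
import numpy as np

# Synthetic stand-ins: each image gets a descriptor and a rough location.
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(50, 64))
descriptors /= np.linalg.norm(descriptors, axis=1, keepdims=True)
locations = rng.uniform(0, 100, size=(50, 2))   # e.g. positions on the site

def retrieve_and_rerank(query_idx, k=5, alpha=0.7):
    """Retrieve top-k by visual similarity, then re-score each candidate
    by mixing visual similarity with spatial proximity to the query."""
    visual = descriptors @ descriptors[query_idx]
    top = np.argsort(-visual)[1:k + 1]          # skip the query itself
    dist = np.linalg.norm(locations[top] - locations[query_idx], axis=1)
    proximity = 1.0 / (1.0 + dist)
    score = alpha * visual[top] + (1 - alpha) * proximity
    return top[np.argsort(-score)]

print(retrieve_and_rerank(0))
```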
2

Cheon, Saehoon. "Experimental Frame Structuring For Automated Model Construction: Application to Simulated Weather Generation". Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195473.

Abstract
The source system is the real or virtual environment that we are interested in modeling. It is viewed as a source of observable data in the form of time-indexed trajectories of variables. The data gathered from observing or experimenting with a system is called the system behavior database. The time-indexed trajectories of variables provide an important clue for composing the DEVS (discrete event specification) model. Once an event set is derived from the time-indexed trajectories of variables, the DEVS model formalism can be extracted from it. The process must be not simple model generation but meaningful model structuring in response to a request. The source data and the query designed with the SES are converted to XML metadata by an XML conversion process. The SES serves as a compact representation for organizing all possible hierarchical compositions of a system, so it plays an important role in designing the structural representation of the query and of the source data to be saved. As a real-data application, model structuring with the US Climate Normals is introduced. Moreover, complex systems can be developed at different levels of resolution. When the huge US Climate Normals source data is used for the DEVS model, model complexity is unavoidable. This issue is dealt with by creating an equivalent lumped model based on the concept of morphism. Two methods to define the resolution level are discussed: fixed and dynamic definition. Aggregation is also discussed as one approach to model abstraction. Finally, the dissertation introduces the process of integrating the DEVSML (DEVS Modeling Language) engine with the DEVS model creation engine for the Web Service Oriented Architecture.
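A minimal sketch of deriving an event set from a time-indexed trajectory, the step the abstract builds on: an event is recorded whenever the observed variable changes value. The record format and port name are assumptions for illustration.

```python
# A trajectory is assumed to be a list of (time, value) samples of one
# observed variable; the "port" name is an invented placeholder.
def derive_events(trajectory):
    events = []
    previous = None
    for time, value in trajectory:
        if value != previous:
            events.append({"time": time, "port": "temp", "value": value})
            previous = value
    return events

trajectory = [(0, "cold"), (1, "cold"), (2, "warm"), (3, "warm"), (4, "hot")]
print(derive_events(trajectory))
# -> events at t=0, t=2, and t=4: raw material for a DEVS specification
```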
3

Hiot, Nicolas. "Construction automatique de bases de données pour le domaine médical : Intégration de texte et maintien de la cohérence". Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1026.

Abstract
The automatic construction of databases in the medical field represents a major challenge for guaranteeing efficient information management and facilitating decision-making. This research project focuses on the use of graph databases, an approach that offers a dynamic representation and efficient querying of data and its topology. Our project explores the convergence between databases and automatic language processing, with two central objectives. On the one hand, our focus is on maintaining consistency within graph databases during updates, particularly with incomplete data and specific business rules. Maintaining consistency during updates ensures a uniform level of data quality for all users and facilitates analysis. In a world of constant change, we give priority to updates, which may involve modifying the instance to accommodate new information. But how can we effectively manage these successive updates within a graph database management system? On the other hand, we focus on the integration of information extracted from text documents, a major source of data in the medical field. In particular, we look at clinical cases and pharmacovigilance, a crucial area for identifying the risks and adverse effects associated with the use of drugs. How can we detect information in texts? How can this unstructured data be efficiently integrated into a graph database? How can it be structured automatically? And finally, what is a valid structure in this context? We are particularly interested in encouraging reproducible research by adopting a transparent and documented approach that enables independent verification and validation of our results.
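A minimal sketch of consistency-preserving updates on a toy in-memory graph, with an invented business rule; a real graph DBMS and the thesis's actual rules would differ.

```python
# Invented rule for illustration: every PRESCRIBED edge must point at an
# existing node of type "Drug". The update commits only if the candidate
# state satisfies the rule.
class Graph:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def update(self, new_nodes, new_edges):
        """Apply an update only if the resulting state stays consistent."""
        candidate_nodes = {**self.nodes, **new_nodes}
        candidate_edges = self.edges + new_edges
        for src, label, dst in candidate_edges:
            if label == "PRESCRIBED" and candidate_nodes.get(dst) != "Drug":
                raise ValueError(f"inconsistent update: {dst} is not a drug")
        self.nodes, self.edges = candidate_nodes, candidate_edges

g = Graph()
g.update({"p1": "Patient", "d1": "Drug"}, [("p1", "PRESCRIBED", "d1")])  # ok
try:
    g.update({}, [("p1", "PRESCRIBED", "x9")])   # rejected: unknown target
except ValueError as e:
    print(e)
```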

Book chapters on the topic "Automatic data structuring"

1

Mierswa, Ingo, Katharina Morik, and Michael Wurst. "Handling Local Patterns in Collaborative Structuring". In Successes and New Directions in Data Mining, 167–86. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-645-7.ch008.

Abstract
Media collections on the internet have become a commercial success, and the structuring of large media collections has thus become an issue. Personal media collections are locally structured in very different ways by different users: the level of detail, the chosen categories, and their extensions can differ completely from user to user. Can machine learning help with structuring personal collections as well? Since users do not want their hand-made structures overwritten, one could deny the benefit of automatic structuring. We argue that what seems to exclude machine learning actually poses a new learning task. We propose a notation that allows us to describe machine learning tasks in a uniform manner. Keeping the demands of structuring private collections in mind, we define the new learning task of localized alternative cluster ensembles. An algorithm solving the new task is presented together with its application to distributed media management.
2

Koshman, Varvara, Anastasia Funkner, and Sergey Kovalchuk. "An Unsupervised Approach to Structuring and Analyzing Repetitive Semantic Structures in Free Text of Electronic Medical Records". In pHealth 2021. IOS Press, 2021. http://dx.doi.org/10.3233/shti210579.

Abstract
Electronic Medical Records (EMR) contain a lot of valuable data about patients, which is, however, unstructured. There is a lack of labeled medical text data in Russian, and there are no tools for automatic annotation. We present an unsupervised approach to medical data annotation. Morphological and syntactical analyses of the initial sentences produce syntactic trees, from which similar subtrees are then grouped by Word2Vec and labeled using dictionaries and Wikidata categories. This method can be used to label EMRs in Russian automatically, and the proposed methodology can be applied to other languages that lack resources for automatic labeling and domain vocabularies.
3

Blanchard, Emmanuel G., Riichiro Mizoguchi, and Susanne P. Lajoie. "Structuring the Cultural Domain with an Upper Ontology of Culture". In Handbook of Research on Culturally-Aware Information Technology, 179–212. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61520-883-8.ch009.

Abstract
The study of cultural similarities and differences is an important research topic for many disciplines, such as psychology, sociology, anthropology, archaeology, museology, communication, management, and business. This presents many potential opportunities for Information Technology specialists to develop culturally-aware technology, but it also raises the risk of inconsistent approaches to the cultural domain. In this chapter, the authors present the fundamental concepts of the Upper Ontology of Culture (UOC), a formal conceptualization of the cultural domain they developed by identifying the common backbone of culture-related disciplines and activities. As a neutral, theory-driven, and interdisciplinary conceptualization, the UOC shall provide guidelines for the development of culturally-aware applications, for the consistent computerization of cultural data and their interoperability, and for the development of culture-driven automatic reasoning processes.
4

"Chapter 1. Graphematical analysis". En LINGUISTIC ANALYZER: AUTOMATIC TRANSFORMATION OF NATURAL LANGUAGE TEXTS INTO INFORMATION DATA STRUCTURE, 16–26. St. Petersburg State University, 2019. http://dx.doi.org/10.21638/11701/9785288059278.02.

Abstract
Graphematical analysis marks the first stage of text processing. Prior to it, however, basic text structuring takes place, resulting in the identification of paragraphs and their types, e.g. title, subtitle, author name(s), chapter and section titles, footnotes, endnotes, figures, appendices, epigraphs, etc. After that, graphematical analysis proper begins. Its aim is to decompose the flow of letter and non-letter graphemes into character strings such as individual words, abbreviations, numbers, and hybrid strings (e.g. mathematical formulae). The procedure is an iterative process of unit assembly, from individual characters to what are called atoms, then to tokens (roughly equivalent to word occurrences), sentence parts, and finally a whole sentence. At every stage, each unit is assigned its type. Assembly relies on rules based solely on a thorough structural analysis of context. No formal models or statistical methods are applied, this being a central principle of the linguistic analyzer, inherent in all its algorithms. At this stage, complications arise primarily from the ambiguity of punctuation marks; they are discussed at length throughout the chapter.
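A simplified sketch of this kind of typed decomposition; the grapheme classes below are a small, invented subset, and the chapter's rule-based contextual assembly is approximated here with regular expressions for brevity.

```python
import re

# Invented, simplified grapheme classes; order matters so that
# abbreviations win over plain words followed by punctuation.
PATTERNS = [
    ("abbreviation", r"(?:[A-Za-z]\.){2,}"),    # e.g. "e.g.", "U.S."
    ("number",       r"\d+(?:[.,]\d+)?"),
    ("word",         r"[A-Za-z]+(?:-[A-Za-z]+)*"),
    ("punctuation",  r"""[.,;:!?()"']"""),
]
TOKEN_RE = re.compile("|".join(f"(?P<{n}>{p})" for n, p in PATTERNS))

def tokenize(text):
    """Yield (type, string) pairs, assembling characters into typed units."""
    return [(m.lastgroup, m.group()) for m in TOKEN_RE.finditer(text)]

print(tokenize("The U.S. report, 3.5 pages long, is well-known."))
```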
5

Madhura K, Dr. "APPLICATION OF AI AND BLOCKCHAIN TECHNOLOGY IN DCMS FOR THE AUTOMATIC DOCUMENT CLASSIFICATION AND IMPROVE THE SECURITY". In Futuristic Trends in Computing Technologies and Data Sciences Volume 3 Book 3, 63–82. Iterative International Publisher, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bfct3p3ch5.

Abstract
Every day, around 2.3 quintillion bytes of information are carried back and forth across the internet. Accelerated technical advancement, increased user interaction, and highly efficient application development are just a few of the advantages that may be realized from this growth. The methodology described here uses artificial intelligence to organize unstructured yet sensitive data. This prevents unauthorized third parties from accessing the information and ensures that data flows correctly so that applications can continue to function effectively. To accomplish this, access control technology, encryption, and blockchain technology are used to protect sensitive data. An automatic classifier that assists in separating confidential from non-confidential data has been implemented, since preventing unauthorized access to sensitive documents requires a solution that does not involve human interaction. The technique handles all parts of data processing, such as recognizing the type of data, assessing whether it is critical, and determining the next encryption step to safeguard it. The technology can be applied in a wide range of sectors, including the protection of sensitive data in military and medical research and of confidential data in enterprises. Analysis is used to extract content from photos, and a trained classifier, built with machine learning, then determines whether or not the input data is confidential. Documents placed in the institution's principal storage area are first encrypted with the RSA algorithm before being stored. The data itself is kept on the IPFS cloud to avoid holding a large amount of data on the blockchain, which would incur a large cost, because every storage operation on the blockchain requires a cryptocurrency transaction; Ethereum is used to transact the data, with references stored in smart contracts. Finally, when blockchain is integrated with the other components, a high-end access control system can be built in which authenticated users are traced at every step and held accountable for their actions. This automated end-to-end system for structuring, processing, securing, and storing data, built on internal access control and tracking, gives the organization a secure ecosystem with authorized users and administrators.
6

Hai-Jew, Shalin. "Structuring and Facilitating Online Learning through Learning/Course Management Systems". In Data Mining, 1358–75. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch070.

Abstract
Online learning, whether human-facilitated or automated, hybrid/blended, asynchronous, synchronous, or mixed, often relies on learning/course management systems (L/CMSes). These systems have evolved over the past decade and a half of popular use to integrate powerful tools, third-party software, Web 2.0 functionalities (blogs, wikis, virtual worlds, and tag clouds), and a growing set of capabilities (eportfolios, data management, back-end data mining, information assurance, and other elements). This chapter highlights learning/course management systems, their functionalities and structures (including some integrated technologies), their applied uses in adult e-learning, and extra-curricular applications. A concluding section explores future L/CMSes based on current trends.
7

Brilhante, Virginia, and Dave Robertson. "Metadata-Supported Automated Ecological Modelling". In Environmental Information Systems in Industry and Public Administration, 313–32. IGI Global, 2001. http://dx.doi.org/10.4018/978-1-930708-02-0.ch021.

Abstract
Ecological models should be rooted in data derived from observation, allowing methodical model construction and clear accounts of model results with respect to the data. Unfortunately, many models are retrospectively fitted to data because in practice it is difficult to bridge the gap between concrete data and abstract models. Our research is on automated methods to support bridging this gap. The approach proposed consists of raising the data level of abstraction via an ecological metadata ontology and from that, through logic-based knowledge representation and inference, to automatically generate prototypical partial models to be further improved by the modeler. In this chapter we aim to: 1) give an overview of current automated modelling approaches applied to ecology, and relate them to our metadata-based approach under investigation; and 2) explain and demonstrate how it is realized using logic-based formalisms. We give the overview of current automated modelling approaches in the section “Ecological Modeling and Automation: Current Approaches,” focusing on compositional modelling and model induction. The contrast between these and our approach, where we adopt metadata descriptions through an ontology and logic-based modelling, is discussed in the section “Our Automated Ecological Modelling Avenue.” The next section, “Towards a System for Metadata–Supported Automated Modeling,” makes ideas more concrete, starting with further details on the Ecolingua ontology, followed by examples of automated model structuring and parameter estimation. In the concluding section, “A Look Ahead and Conclusion,” we comment briefly on the ontologies trend and on the outlook of our research.
8

Mazein, Ilya, Tom Gebhardt, Felix Zinkewitz, Lea Michaelis, Sarah Braun, Dagmar Waltemath, Ron Henkel, and Judith A. H. Wodke. "MeDaX: A Knowledge Graph on FHIR". In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240423.

Abstract
In Germany, the standard format for the exchange of clinical care data for research is HL7 FHIR. Graph databases (GDBs), well suited to integrating complex and heterogeneous data from diverse sources, are currently gaining traction in the medical field. They provide a versatile framework for data analysis, which is generally challenging on raw FHIR-formatted data. To generate a knowledge graph (KG) for clinical research data, we tested different extract-transform-load (ETL) approaches for converting FHIR into graph format. We designed a generalised ETL process and implemented a prototypic pipeline for automated KG creation and ontological structuring. The MeDaX-KG prototype is built from synthetic patient data and currently serves internal testing purposes. The presented approach is easy to customise and to expand to other data types and formats.
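A toy sketch of the extract-transform-load idea: flattening one FHIR Patient resource into nodes and edges. The minimal resource and the output tuples are illustrative; real FHIR bundles and the MeDaX pipeline are far richer.

```python
# Minimal hand-written FHIR Patient resource (illustrative only).
patient = {
    "resourceType": "Patient",
    "id": "pat-1",
    "gender": "female",
    "generalPractitioner": [{"reference": "Practitioner/doc-7"}],
}

def fhir_to_graph(resource):
    """Flatten a resource into (node, properties) and (src, label, dst)."""
    node_id = f'{resource["resourceType"]}/{resource["id"]}'
    nodes = [(node_id, {"gender": resource.get("gender")})]
    edges = [(node_id, "generalPractitioner", gp["reference"])
             for gp in resource.get("generalPractitioner", [])]
    return nodes, edges

nodes, edges = fhir_to_graph(patient)
print(nodes)   # [('Patient/pat-1', {'gender': 'female'})]
print(edges)   # [('Patient/pat-1', 'generalPractitioner', 'Practitioner/doc-7')]
```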
9

Nehrey, Maryna, and Taras Hnot. "Data Science Tools Application for Business Processes Modelling in Aviation". In Advances in Computer and Electrical Engineering, 176–90. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7588-7.ch006.

Abstract
Successful business involves making decisions under uncertainty using a lot of information. Modern modeling approaches based on data science algorithms are a necessity for the effective management of business processes in aviation. Data science involves principles, processes, and techniques for understanding business processes through the analysis of data. The main goal of this chapter is to improve decision-making using data science algorithms. A set of frequently used algorithms is described in the chapter: linear and logistic regression models, decision trees as a classical example of supervised learning, and k-means and hierarchical clustering as unsupervised learning. The application of data science algorithms provides an opportunity for deep analysis and understanding of business processes in aviation, supports the structuring of problems, and provides systematization of business processes. Business process modeling based on data science algorithms enables us to substantiate solutions and even automate business decision-making.
10

Nehrey, Maryna, and Taras Hnot. "Data Science Tools Application for Business Processes Modelling in Aviation". In Research Anthology on Reliability and Safety in Aviation Systems, Spacecraft, and Air Transport, 617–31. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5357-2.ch024.

Abstract
Successful business involves making decisions under uncertainty using a lot of information. Modern modeling approaches based on data science algorithms are a necessity for the effective management of business processes in aviation. Data science involves principles, processes, and techniques for understanding business processes through the analysis of data. The main goal of this chapter is to improve decision-making using data science algorithms. A set of frequently used algorithms is described in the chapter: linear and logistic regression models, decision trees as a classical example of supervised learning, and k-means and hierarchical clustering as unsupervised learning. The application of data science algorithms provides an opportunity for deep analysis and understanding of business processes in aviation, supports the structuring of problems, and provides systematization of business processes. Business process modeling based on data science algorithms enables us to substantiate solutions and even automate business decision-making.

Conference papers on the topic "Automatic data structuring"

1

Pathak, Shreyasi, Jorit van Rossen, Onno Vijlbrief, Jeroen Geerdink, Christin Seifert, and Maurice van Keulen. "Automatic Structuring of Breast Cancer Radiology Reports for Quality Assurance". In 2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018. http://dx.doi.org/10.1109/icdmw.2018.00111.

2

Jasem, P., S. Dolinska, J. Paralic, and M. Dudas. "Automatic Data Mining and Structuring for Research on Birth Defects". In 2008 6th International Symposium on Applied Machine Intelligence and Informatics (SAMI '08). IEEE, 2008. http://dx.doi.org/10.1109/sami.2008.4469151.

3

Sailer, Anca, Xing Wei, and Ruchi Mahindru. "Enhanced Maintenance Services with Automatic Structuring of IT Problem Ticket Data". In 2008 IEEE International Conference on Services Computing (SCC). IEEE, 2008. http://dx.doi.org/10.1109/scc.2008.70.

4

Wei, Xing, Anca Sailer, Ruchi Mahindru, and Gautam Kar. "Automatic Structuring of IT Problem Ticket Data for Enhanced Problem Resolution". In 2007 10th IFIP/IEEE International Symposium on Integrated Network Management. IEEE, 2007. http://dx.doi.org/10.1109/inm.2007.374727.

5

Kano Glückstad, Fumiko. "Application of an Automatic Data Alignment & Structuring System for Intercultural Consumer Segmentation Analysis". In 7th International Conference on Knowledge Engineering and Ontology Development. SCITEPRESS - Science and Technology Publications, 2015. http://dx.doi.org/10.5220/0005605602510256.

6

Zhang, Yanhao, Fanyi Wang, Weixuan Sun, Jingwen Su, Peng Liu, Yaqian Li, Xinjie Feng, and Zhengxia Zou. "Matting Moments: A Unified Data-Driven Matting Engine for Mobile AIGC in Photo Gallery". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/845.

Abstract
Image matting is a fundamental technique in visual understanding and has become one of the most significant capabilities on mobile phones. Despite advances in mobile storage and computing power, achieving diverse mobile Artificial Intelligence Generated Content (AIGC) applications remains a great challenge. To address this issue, we present an innovative demonstration of an automatic system called "Matting Moments" that enables automatic image editing based on matting models in different scenarios. Coupled with accurate and refined matting of subjects, our system provides visual element editing abilities and backend services for distribution and recommendation that respond to emotional expressions. Our system comprises three components: 1) photo content structuring, 2) a data-driven matting engine, and 3) AIGC functions for generation, which together achieve diverse automatic photo beautification in the gallery. This system offers a unified framework that guides consumers toward intelligent recommendations with beautifully generated contents, helping them enjoy the moments and memories of their present life.
7

Mendoza, Isela, Fernando Silva Filho, Gustavo Medeiros, Aline Paes, and Vânia O. Neves. "Comparative Analysis of Large Language Model Tools for Automated Test Data Generation from BDD". In Simpósio Brasileiro de Engenharia de Software, 280–90. Sociedade Brasileira de Computação, 2024. http://dx.doi.org/10.5753/sbes.2024.3423.

Abstract
Automating processes reduces human workload, particularly in software testing, where automation enhances quality and efficiency. Behavior-driven development (BDD) focuses on software behavior to define and validate required functionalities, using tools to translate functional requirements into automated tests. However, creating BDD scenarios and the associated test data inputs is time-consuming and heavily reliant on a good input data set. Large Language Models (LLMs) such as Microsoft's Copilot, OpenAI's ChatGPT-3.5 and ChatGPT-4, and Google's Gemini offer potential solutions by automating test data generation. This study evaluates these LLMs' ability to understand BDD scenarios and generate corresponding test data across five scenarios ranked by complexity. It assesses the LLMs' learning, assertiveness, response structuring, and the quality, representativeness, and coverage of the generated test data. The results indicate that ChatGPT-4 and Gemini stand out as the tools that best met our expectations, showing promise for advancing the automation of test data generation from BDD scenarios.
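A sketch of how a BDD scenario might be turned into a test-data prompt for an LLM. The scenario, prompt wording, and the call_llm helper are all hypothetical stand-ins; plug in whichever client (Copilot, ChatGPT, Gemini) is actually used.

```python
# Hypothetical Gherkin scenario and prompt template, for illustration only.
SCENARIO = """\
Scenario: Successful login
  Given a registered user
  When the user logs in with valid credentials
  Then the dashboard is shown
"""

def build_prompt(scenario, n=5):
    """Wrap a BDD scenario in a test-data generation request."""
    return (f"Read the following BDD scenario and generate {n} rows of "
            f"test data as CSV with a header, covering valid and invalid "
            f"cases:\n\n{scenario}")

def call_llm(prompt):   # hypothetical helper; wire up a real client here
    raise NotImplementedError

print(build_prompt(SCENARIO))
```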
8

Kacerik, Martin, and Jiri Bittner. "On Importance of Scene Structure for Hardware-Accelerated Ray Tracing". In WSCG 2023 – 31st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. University of West Bohemia, Czech Republic, 2023. http://dx.doi.org/10.24132/csrn.3301.60.

Abstract
Ray tracing is typically accelerated by organizing the scene geometry into an acceleration data structure. Hardware-accelerated ray tracing, available through modern graphics APIs, exposes an interface to the acceleration structure (AS) builder, which constructs the structure from the input scene geometry. However, this process is opaque, with limited knowledge of and control over the internal algorithm. Additional control is available through the layout of the AS builder input data: the scene geometry structured in a user-defined way. In this work, we evaluate the impact of different scene structurings on the runtime performance of ray-triangle intersections in the context of hardware-accelerated ray tracing. We discuss the possible causes of significantly different outcomes (up to 1.4 times) for the same scene and identify potential for reducing the cost through automatic optimization of the input structure.
9

Pakhaev, Khusein. "Methods And Technologies Of Automation And Data Structuring In Agriculture". In International Conference "Modern trends in governance and sustainable development of socio-economic systems: from regional development to global economic growth", 627–34. European Publisher, 2024. http://dx.doi.org/10.15405/epms.2024.09.70.

10

Homlong, Eirik G., Rahul P. Kumar, Ole Jakob Elle, and Ola Wiig. "Automated structuring of gait data for analysis purposes - A deep learning pilot example". In 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2023. http://dx.doi.org/10.1109/embc40787.2023.10340938.
