
Journal articles on the topic "Automatic data structuring"


Below are the 50 best journal articles for research on the topic "Automatic data structuring".


1

Yermukhanbetova, Sharbanu, and Gulnara Bektemyssova. "AUTOMATIC MERGING AND STRUCTURING OF DATA FROM DIFFERENT CATALOGS". JP Journal of Heat and Mass Transfer, Special (June 4, 2020): 7–11. http://dx.doi.org/10.17654/hmsi120007.

2

Xiong, Wei, Chung-Mong Lee, and Rui-Hua Ma. "Automatic video data structuring through shot partitioning and key-frame computing". Machine Vision and Applications 10, no. 2 (June 1, 1997): 51–65. http://dx.doi.org/10.1007/s001380050059.

3

Pryhodinuk, V. V., Yu A. Tymchenko, M. V. Nadutenko, and A. Yu Gordieiev. "Automated data processing for evaluation the hydrophysical state of the Black Sea water areas". Oceanographic Journal (Problems, methods and facilities for researches of the World Ocean), no. 2(13) (April 22, 2020): 114–29. http://dx.doi.org/10.37629/2709-3972.2(13).2020.114-129.

Abstract:
The article explores the issues of collecting, structuring and displaying oceanographic information from spatially distributed sources. The aim of the work was to develop services for an intelligent information system (IIS) designed to assess the hydrophysical state of the Black Sea waters by creating a library of ontological descriptions of the processing and displaying information in the IIS software environment. Article describes approaches to the creation of an automated data processing system for the assessment of the hydrophysical state of the Black Sea using the method of recursive reduction. The information about the main functions of IIS for displaying structured data for illuminating the hydrophysical situation is presented. To solve such a problem, a set of cognitive services built on the basis of cognitive IT platforms to ensure the processes of automatic and automated collection of oceanographic data, their structuring and presentation to the user in an interactive form was applied for the first time. The results of the work can be used during the development of an analytical system for the automation of scientific and applied problems associated with the use of operational oceanographic data.
4

Yu, Haiyang, Shuai Yang, Zhihai Wu, and Xiaolei Ma. "Vehicle trajectory reconstruction from automatic license plate reader data". International Journal of Distributed Sensor Networks 14, no. 2 (February 2018): 155014771875563. http://dx.doi.org/10.1177/1550147718755637.

Abstract:
Using perception data to excavate vehicle travel information has been a popular area of study. In order to learn the vehicle travel characteristics in the city of Ruian, we developed a common methodology for structuring travelers’ complete information using the travel time threshold to recognize a single trip based on the automatic license plate reader data and built a trajectory reconstruction model integrated into the technique for order preference by similarity to an ideal solution and depth-first search to manage the vehicles’ incomplete records phenomenon. In order to increase the practicability of the model, we introduced two speed indicators associated with actual data and verified the model’s reliability through experiments. Our results show that the method would be affected by the number of missing records. The model and results of this work will allow us to further study vehicles’ commuting characteristics and explore hot trajectories.
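The TOPSIS step mentioned above can be illustrated with a minimal NumPy sketch; the decision matrix, the two speed-like criteria and the weights below are hypothetical placeholders rather than the paper's data, and all criteria are treated as benefit criteria.

```python
import numpy as np

def topsis_rank(matrix, weights):
    """Score alternatives by closeness to the ideal solution (benefit criteria only)."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector-normalise each criterion
    weighted = norm * weights
    ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
    d_pos = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))       # distance to ideal
    d_neg = np.sqrt(((weighted - anti_ideal) ** 2).sum(axis=1))  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                               # closeness: higher is better

# Hypothetical candidate path segments described by two speed indicators.
candidates = np.array([[45.0, 0.8],
                       [60.0, 0.5],
                       [52.0, 0.7]])
scores = topsis_rank(candidates, weights=np.array([0.6, 0.4]))
print(scores.argsort()[::-1])  # candidate indices ranked from best to worst
```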
5

Dovgal, Sofiia, Egor Mukhaev, Marat Sabitov, and Lyubov' Adamcevich. "Development of a web service for processing data from electronic images of urban plans of land plots". Construction and Architecture 11, no. 1 (March 24, 2023): 17. http://dx.doi.org/10.29039/2308-0191-2022-11-1-17-17.

Abstract:
The article gives an idea of the content of urban planning plans for land plots (UPPLP), their purpose, as well as the relevance of developing a service for automatic recognition of data from an electronic image of a document. The existing services for automatic processing of documents are analyzed, and a technical solution developed by the authors is presented in the form of a web service for parsing and structuring electronic images of UPPLP. The description of the structure and operation of the web service, as well as the data conversion algorithm implemented in the solution is given.
6

Kopyrin, Andrey Sergeevich, and Irina Leonidovna Makarova. "Algorithm for preprocessing and unification of time series based on machine learning for data structuring". Программные системы и вычислительные методы, no. 3 (March 2020): 40–50. http://dx.doi.org/10.7256/2454-0714.2020.3.33958.

Abstract:
The subject of the research is the process of collecting and preliminary preparation of data from heterogeneous sources. Economic information is heterogeneous and semi-structured or unstructured in nature. Due to the heterogeneity of the primary documents, as well as the human factor, the initial statistical data may contain a large amount of noise, as well as records, the automatic processing of which may be very difficult. This makes preprocessing dynamic input data an important precondition for discovering meaningful patterns and domain knowledge, and makes the research topic relevant. Data preprocessing is a series of unique tasks that have led to the emergence of various algorithms and heuristic methods for solving preprocessing tasks such as merging and cleanup and the identification of variables. In this work, a preprocessing algorithm is formulated that allows information on time series from different sources to be brought together and structured in a single database. The key modification of the preprocessing method proposed by the authors is the technology of automated data integration. The technology proposed by the authors involves the combined use of methods for constructing a fuzzy time series and machine lexical comparison on the thesaurus network, as well as the use of a universal database built using the MIVAR concept. The preprocessing algorithm forms a single data model with the ability to transform the periodicity and semantics of the data set and integrate data that can come from various sources into a single information bank.
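As an illustration of the kind of unification described above, the sketch below aligns two hypothetical indicators with different periodicities and differently spelled names; difflib stands in for the thesaurus-based lexical comparison, and the series values are made up.

```python
import difflib
import pandas as pd

# Hypothetical monthly and quarterly indicators from two different sources.
monthly = pd.Series([1.0, 1.2, 1.1, 1.3],
                    index=pd.date_range("2020-01-01", periods=4, freq="MS"),
                    name="retail_turnover")
quarterly = pd.Series([3.4, 3.6],
                      index=pd.to_datetime(["2020-01-01", "2020-04-01"]),
                      name="turnover_retail")

# Crude lexical matching of indicator names (a stand-in for the thesaurus step).
print(difflib.get_close_matches(monthly.name, [quarterly.name], n=1, cutoff=0.4))

# Unify periodicity: bring the quarterly series to monthly frequency, then merge.
quarterly_monthly = quarterly.resample("MS").ffill()
combined = pd.concat([monthly, quarterly_monthly], axis=1)
print(combined)
```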
7

Willot, L., D. Vodislav, L. De Luca, and V. Gouet-Brunet. "AUTOMATIC STRUCTURING OF PHOTOGRAPHIC COLLECTIONS FOR SPATIO-TEMPORAL MONITORING OF RESTORATION SITES: PROBLEM STATEMENT AND CHALLENGES". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 521–28. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-521-2022.

Abstract:
Abstract. Over the last decade, a large number of digital documentation projects have demonstrated the potential of image-based modelling of heritage objects in the context of documentation, conservation, and restoration. The inclusion of these emerging methods in the daily monitoring of the activities of a heritage restoration site (context in which hundreds of photographs per day can be acquired by multiple actors, in accordance with several observation and analysis needs) raises new questions at the intersection of big data management, analysis, semantic enrichment, and more generally automatic structuring of this data. In this article we propose a data model developed around these questions and identify the main challenges to overcome the problem of structuring massive collections of photographs through a review of the available literature on similarity metrics used to organise the pictures based on their content or metadata. This work is realized in the context of the restoration site of the Notre-Dame de Paris cathedral that will be used as the main case study.
8

Galauskis, Maris, and Arturs Ardavs. "The Process of Data Validation and Formatting for an Event-Based Vision Dataset in Agricultural Environments". Applied Computer Systems 26, no. 2 (December 1, 2021): 173–77. http://dx.doi.org/10.2478/acss-2021-0021.

Abstract:
Abstract In this paper, we describe our team’s data processing practice for an event-based camera dataset. In addition to the event-based camera data, the Agri-EBV dataset contains data from LIDAR, RGB, depth cameras, temperature, moisture, and atmospheric pressure sensors. We describe data transfer from a platform, automatic and manual validation of data quality, conversions to multiple formats, and structuring of the final data. Accurate time offset estimation between sensors achieved in the dataset uses IMU data generated by purposeful movements of the sensor platform. Therefore, we also outline partitioning of the data and time alignment calculation during post-processing.
9

Kang, Tian, Shaodian Zhang, Youlan Tang, Gregory W. Hruby, Alexander Rusanov, Noémie Elhadad, and Chunhua Weng. "EliIE: An open-source information extraction system for clinical trial eligibility criteria". Journal of the American Medical Informatics Association 24, no. 6 (April 1, 2017): 1062–71. http://dx.doi.org/10.1093/jamia/ocx019.

Abstract:
Abstract Objective To develop an open-source information extraction system called Eligibility Criteria Information Extraction (EliIE) for parsing and formalizing free-text clinical research eligibility criteria (EC) following Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) version 5.0. Materials and Methods EliIE parses EC in 4 steps: (1) clinical entity and attribute recognition, (2) negation detection, (3) relation extraction, and (4) concept normalization and output structuring. Informaticians and domain experts were recruited to design an annotation guideline and generate a training corpus of annotated EC for 230 Alzheimer’s clinical trials, which were represented as queries against the OMOP CDM and included 8008 entities, 3550 attributes, and 3529 relations. A sequence labeling–based method was developed for automatic entity and attribute recognition. Negation detection was supported by NegEx and a set of predefined rules. Relation extraction was achieved by a support vector machine classifier. We further performed terminology-based concept normalization and output structuring. Results In task-specific evaluations, the best F1 score for entity recognition was 0.79, and for relation extraction was 0.89. The accuracy of negation detection was 0.94. The overall accuracy for query formalization was 0.71 in an end-to-end evaluation. Conclusions This study presents EliIE, an OMOP CDM–based information extraction system for automatic structuring and formalization of free-text EC. According to our evaluation, machine learning-based EliIE outperforms existing systems and shows promise to improve.
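A minimal scikit-learn sketch of the SVM-based relation extraction step named above (step 3); the toy criterion snippets and relation labels are hypothetical and are not drawn from the EliIE corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical entity-pair contexts from eligibility criteria, with relation labels.
contexts = [
    "age greater than 50 years",
    "hemoglobin less than 10 g/dl",
    "diagnosis of alzheimer disease",
    "history of stroke",
]
relations = ["has_value", "has_value", "has_diagnosis", "has_diagnosis"]

# TF-IDF features over unigrams and bigrams feeding a linear SVM classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(contexts, relations)
print(clf.predict(["creatinine greater than 2 mg/dl"]))
```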
10

Koshman, Varvara, Anastasia Funkner, and Sergey Kovalchuk. "An Unsupervised Approach to Structuring and Analyzing Repetitive Semantic Structures in Free Text of Electronic Medical Records". Journal of Personalized Medicine 12, no. 1 (January 1, 2022): 25. http://dx.doi.org/10.3390/jpm12010025.

Abstract:
Electronic medical records (EMRs) contain much valuable data about patients, which is, however, unstructured. Therefore, there is a lack of both labeled medical text data in Russian and tools for automatic annotation. As a result, today, it is hardly feasible for researchers to utilize text data of EMRs in training machine learning models in the biomedical domain. We present an unsupervised approach to medical data annotation. Syntactic trees are produced from initial sentences using morphological and syntactical analyses. In retrieved trees, similar subtrees are grouped using Node2Vec and Word2Vec and labeled using domain vocabularies and Wikidata categories. The usage of Wikidata categories increased the fraction of labeled sentences 5.5 times compared to labeling with domain vocabularies only. We show on a validation dataset that the proposed labeling method generates meaningful labels correctly for 92.7% of groups. Annotation with domain vocabularies and Wikidata categories covered more than 82% of the sentences in the corpus; when extended with timestamp and event labels, coverage reached 97% of sentences. The obtained method can be used to label EMRs in Russian automatically. Additionally, the proposed methodology can be applied to other languages that lack resources for automatic labeling and domain vocabularies.
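A minimal sketch of grouping similar text fragments with Word2Vec embeddings and k-means, loosely in the spirit of the subtree grouping above; the toy English sentences are hypothetical stand-ins for EMR subtree texts (the original works on Russian data and also uses Node2Vec).

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical tokenised fragments extracted from clinical notes.
sentences = [
    ["patient", "complains", "of", "chest", "pain"],
    ["patient", "reports", "chest", "pain"],
    ["prescribed", "aspirin", "daily"],
    ["aspirin", "prescribed", "once", "daily"],
]
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, seed=1, workers=1)

# Represent each fragment by the mean of its word vectors, then cluster.
vectors = np.array([model.wv[s].mean(axis=0) for s in sentences])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # fragments describing the same event should share a cluster label
```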
11

Mentari, Mustika, Yan Watequlis Syaifudin, Nobuo Funabiki, Nadia Layra Aziza, and Tita Wijayanti. "Canny and Morphological Approaches to Calculating Area and Perimeter of Two-Dimensional Geometry". Jurnal Jartel Jurnal Jaringan Telekomunikasi 12, no. 4 (December 30, 2022): 287–96. http://dx.doi.org/10.33795/jartel.v12i4.574.

Abstract:
Calculating area and perimeter in real-world conditions has its challenges. The actual conditions include applications in the medical field to measure the presence of tumors or the condition of human organs and applications in geography to measure specific areas on a map; applications in architecture often calculate the area and perimeter of buildings, interior design, exterior design, and other uses. Technology can make it easier with automatic calculations. Mathematical methods and computer vision techniques are required to create automated systems. The Canny method is usually used, which is good enough for detecting edges but not sufficient for measuring irregular geometric shapes. This paper aims to calculate the area and perimeter of a geometric shape using the Canny method and geometry. Data samples in various forms are used in this study. Calculating area and perimeter using the Canny method involves obtaining the length (X,Y) of the RGB image converted to HSV. Edge detection values are used to calculate the area and perimeter of objects. The morphological method uses binary image input as input data. Then proceed to the convolution process with structuring and calculating the area and circumference of objects. Based on the research results, calculating the area and circumference of objects is more effective using morphological methods. However, the level of accuracy is affected by the selection of structuring elements (strels) which must be optimal and global.
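The two measurement routes described above can be sketched with OpenCV as follows; the file name is a placeholder and the thresholds and structuring-element size are illustrative choices, not the paper's settings.

```python
import cv2

image = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)  # placeholder input image

# Route 1: Canny edges -> contours -> area and perimeter of the largest object.
edges = cv2.Canny(image, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
print("area:", cv2.contourArea(largest), "perimeter:", cv2.arcLength(largest, True))

# Route 2: binarise, close gaps with an elliptical structuring element (strel), measure again.
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
strel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, strel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
print("area:", cv2.contourArea(largest), "perimeter:", cv2.arcLength(largest, True))
```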
12

Lee, Yong-Ki, Jineon Kim, Chae-Soon Choi, and Jae-Joon Song. "Semi-automatic calculation of joint trace length from digital images based on deep learning and data structuring techniques". International Journal of Rock Mechanics and Mining Sciences 149 (January 2022): 104981. http://dx.doi.org/10.1016/j.ijrmms.2021.104981.

13

Plisenko, Olga. "Algorithms and methodology for inflection line identification within information and mathematical model of relief". InterCarto. InterGIS 28, no. 1 (2022): 683–95. http://dx.doi.org/10.35595/2414-9179-2022-1-28-683-695.

Abstract:
The relevance of this research topic lies in the fact that a universal technology for automated recognition and identification of structural relief elements is currently not available. This task is one of the main analytical directions of geomorphological mapping, and its solution will reduce the time for its development, unify the results, and expand the field of application of the homomorphic and genetically homogeneous elementary surface model in interdisciplinary research. To solve this problem, an information and mathematical relief model is developed, the purpose of which is to present surface relief in the form of a consistent set of all structural elements, simulate the obtained surface in 3D space, provide a complete automated cycle of highlighting and classifying structural relief elements, and present various algorithms for its analysis. The described work stage includes the development of original algorithms and methods for automatic identification of slope inflection lines as part of an information and mathematical model. Slope inflection lines are structuring in material-energy flow redistribution between and within genetically homogeneous surfaces. Automation of the selection of inflection lines is the penultimate stage of constructing the target terrain models. In the study, we discuss the main stages of the automated technology supplying the initial data for the developed algorithms, give an overview of the existing methods and software products used to determine the slope inflection lines, describe the mathematical and algorithmic techniques used in the developed algorithms, and discuss the peculiarities of using these techniques in relation to the developed general technology. The result of the work is an original automatic methodology for determining slope inflection lines, which allows us to proceed to automatic identification and classification of elementary surfaces.
14

Benício, Diego Henrique Pegado, João Carlos Xavier Junior, Kairon Ramon Sabino de Paiva, and Juliana Dantas de Araújo Santos Camargo. "Applying Text Mining and Natural Language Processing to Electronic Medical Records for extracting and transforming texts into structured data". Research, Society and Development 11, no. 6 (April 30, 2022): e37711629184. http://dx.doi.org/10.33448/rsd-v11i6.29184.

Abstract:
The recording of patients' data in electronic patient records (EPRs) by healthcare providers is usually performed in free text fields, allowing different ways of describing that type of information (e.g., abbreviation, terminology, etc.). In scenarios like that, retrieving data from such a source (text) by using SQL (Structured Query Language) queries becomes unfeasible. Based on this fact, we present in this paper a tool for extracting comprehensible and standardized patients' data from unstructured data which applies Text Mining and Natural Language Processing techniques. Our main goal is to carry out an automatic process of extracting, cleaning and structuring data obtained from EPRs belonging to pregnant patients from the Januario Cicco maternity hospital located in Natal - Brazil. 3,000 EPRs written in Portuguese from 2016 to 2020 were used in our comparison analysis between data manually retrieved by health professionals (e.g., doctors and nurses) and data retrieved by our tool. Moreover, we applied the Kruskal-Wallis statistical test in order to statistically evaluate the obtained results between the manual and automatic processes. Finally, the statistical results have shown that there was no statistical difference between the retrieval processes. In this sense, the final results were considerably promising.
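The Kruskal-Wallis comparison mentioned above can be reproduced in outline with SciPy; the counts of fields recovered per record below are illustrative numbers, not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical counts of fields correctly recovered per record by each process.
manual = [12, 14, 13, 15, 14, 13]
automatic = [13, 14, 13, 14, 15, 13]

statistic, p_value = kruskal(manual, automatic)
print(f"H = {statistic:.3f}, p = {p_value:.3f}")
# A large p-value is consistent with finding no significant difference
# between the manual and automatic retrieval processes.
```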
15

Sidi, Fatimah, Iskandar Ishak, and Marzanah A. Jabar. "MalayIK: An Ontological Approach to Knowledge Transformation in Malay Unstructured Documents". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 1. http://dx.doi.org/10.11591/ijece.v8i1.pp1-10.

Abstract:
An enormous number of unstructured documents written in the Malay language is available on the web and intranets. However, unstructured documents cannot be queried in simple ways, hence the knowledge contained in such documents can neither be used by automatic systems nor be understood easily and clearly by humans. This paper proposes a new approach to transform extracted knowledge in Malay unstructured documents using an ontology by identifying, organizing, and structuring the documents into an interrogative structured form. A Malay knowledge base, the MalayIK corpus, is developed and used to test the MalayIK-Ontology against Ontos, an existing data extraction engine. The experimental results from MalayIK-Ontology have shown a significant improvement of knowledge extraction over the Ontos implementation. This shows that a clear knowledge organization and structuring concept is able to increase understanding, which leads to a potential increase in the sharing and reuse of concepts among the community.
16

Doria, E., and F. Picchio. "TECHNIQUES FOR MOSAICS DOCUMENTATION THROUGH PHOTOGRAMMETRY DATA ACQUISITION. THE BYZANTINE MOSAICS OF THE NATIVITY CHURCH". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 965–72. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-965-2020.

Abstract:
Abstract. This paper describes a sequence of actions developed to guarantee a reliable and suitable dataset for the creation of detailed ortho images of Nativity Church mosaics. During acquisition campaigns, different photogrammetric techniques were tested, and different survey instruments were compared to improve the quality of the data obtained. The different outputs allow the adjustment of the instrument parameters and the acquisition methods, to structure a methodological process aimed at obtaining an accurate level of detail to describe the individual mosaic tile. From the realization of reliable photomosaics, an automatic vectorization system has been developed. This process, aimed at digitizing the tiles of the Church walls and the pavement mosaics, responds to a documentation and management purpose and to an objective of structuring a data acquisition method and post-production that can be replicated on other mosaic contexts.
17

Vadurin, Kyrylo, Andrii Perekrest, Volodymyr Bakharev, Andriy Deriyenko, Artem Ivashchenko, and Sergii Shkarupa. "AN INFORMATION SYSTEM FOR COLLECTING AND STORING AIR QUALITY DATA FROM MUNICIPAL LEVEL VAISALA STATIONS". Інфокомунікаційні та комп’ютерні технології, no. 2(6) (2023): 38–49. http://dx.doi.org/10.36994/2788-5518-2023-02-06-04.

Abstract:
Today, the problem of air pollution is a pressing one, especially in Ukraine. As a result of the war and the Kakhovka hydroelectric power station disaster, as well as the relocation of certain industrial enterprises and the displacement of the population, there have been significant changes in the state of the atmospheric air in municipalities compared to the state before these changes, which was recorded by environmental monitoring stations and could be predicted for a long time. In Kremenchuk, air quality studies are carried out by a municipal enterprise of the city council. The company's automatic equipment includes several Vaisala stations that support data output via API to a subscription-based service. The basic service can store data from the stations, build graphs and output indicators in CSV format, but cannot be upgraded to generate reports according to internal document management standards and does not support the generation of reports in accordance with CMU Resolution No. 827. Therefore, the purpose of this study is to develop and implement an information system for the automatic collection, accumulation and structuring of data from Vaisala stations and automated generation of reports on the state of the atmospheric air, which will reduce the time required for the operator to prepare and provide comprehensive information on possible exceedances of harmful impurities in the air to regulatory authorities based on internal document management standards. First, the study analysed similar web resources that provide data on the state of the air and are publicly available. Then we developed and implemented an information system based on Ubuntu OS, Nginx server, MySQL database management system, PHP language with PHPWord and Laravel frameworks, Vue.js and Bootstrap JavaScript libraries, and vaisala_api, SPA, and AJAX network technologies. The server has been modified to process a large amount of data from Vaisala stations, for which separate tables have been created to upload raw data and a table to convert minute readings into 20-minute, daily, monthly and annual readings.
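A minimal pandas sketch of the aggregation step described above, converting hypothetical minute-level readings into 20-minute and daily means; the column name and values are placeholders, not Vaisala output.

```python
import numpy as np
import pandas as pd

# Three days of hypothetical minute-level readings from one station.
index = pd.date_range("2023-06-01", periods=3 * 24 * 60, freq="min")
readings = pd.DataFrame(
    {"pm25": np.random.default_rng(0).uniform(5, 40, len(index))}, index=index
)

agg_20min = readings.resample("20min").mean()  # 20-minute averages
agg_daily = readings.resample("D").mean()      # daily averages
print(agg_20min.head(3))
print(agg_daily)
```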
18

Salmam, Fatima Zahra, Mohamed Fakir, and Rahhal Errattahi. "Prediction in OLAP Data Cubes". Journal of Information & Knowledge Management 15, no. 02 (May 20, 2016): 1650022. http://dx.doi.org/10.1142/s0219649216500222.

Abstract:
Online analytical processing (OLAP) provides tools to explore data cubes in order to extract interesting information; it refers to techniques used to query, visualise and synthesise multidimensional data. Nevertheless, OLAP is limited to visualising, structuring and manually exploring the data cubes. On the other hand, data mining offers algorithms for automatic knowledge extraction, such as classification, explanation and prediction algorithms. However, OLAP is not capable of explaining and predicting events from existing data; therefore, it is possible to make online analysis more efficient by coupling data mining and OLAP so that the user is assisted in this new task of knowledge extraction. In this paper, we build on previous work in this area and suggest extending the abilities of OLAP to prediction, enhancing OLAP techniques by introducing a predictive model based on data mining algorithms. The model is calculated on the aggregated data, and prediction is done on detailed missing data. Our approach is based on regression trees and neural networks; it consists of predicting facts with missing measure values in the data cubes. The user has at their disposal a new platform called PredCube, which offers the possibility to query, visualise and synthesise the multidimensional data, and also to predict missing values in the data cube using three data mining methods and to evaluate the quality of the prediction by comparing the average error and the execution time given by each one.
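A minimal sketch of predicting a missing fact measure with a regression tree, in the spirit of the approach above; the tiny integer-encoded cube (store, product, month) and its sales values are hypothetical.

```python
from sklearn.tree import DecisionTreeRegressor

# Known facts: (store, product, month) -> sales; one cell is missing.
X_known = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1], [0, 0, 2], [1, 0, 2]]
y_known = [100.0, 80.0, 120.0, 90.0, 110.0, 125.0]

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_known, y_known)
print(model.predict([[0, 1, 2]]))  # estimate for the missing cell (store 0, product 1, month 2)
```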
19

Nunes, Carolina, Jasper Anckaert, Fanny De Vloed, Jolien De Wyn, Kaat Durinck, Jo Vandesompele, Frank Speleman, and Vanessa Vermeirssen. "HTSplotter: An end-to-end data processing, analysis and visualisation tool for chemical and genetic in vitro perturbation screening". PLOS ONE 19, no. 1 (January 5, 2024): e0296322. http://dx.doi.org/10.1371/journal.pone.0296322.

Abstract:
In biomedical research, high-throughput screening is often applied as it comes with automatization, higher efficiency, and more and faster results. High-throughput screening experiments encompass drug, drug combination, genetic perturbagen or a combination of genetic and chemical perturbagen screens. These experiments are conducted in real-time assays over time or in an endpoint assay. The data analysis consists of data cleaning and structuring, as well as further data processing and visualisation, which, due to the amount of data, can easily become laborious, time-consuming and error-prone. Therefore, several tools have been developed to aid researchers in this process, but these typically focus on specific experimental set-ups and are unable to process data of several time points and genetic-chemical perturbagen screens. To meet these needs, we developed HTSplotter, a web tool and Python module that performs automatic data analysis and visualization of either endpoint or real-time assays from different high-throughput screening experiments: drug, drug combination, genetic perturbagen and genetic-chemical perturbagen screens. HTSplotter implements an algorithm based on conditional statements to identify experiment types and controls. After appropriate data normalization, including growth rate normalization, HTSplotter executes downstream analyses such as dose-response relationship and drug synergism assessment by the Bliss independence (BI), Zero Interaction Potency (ZIP) and Highest Single Agent (HSA) methods. All results are exported as a text file and plots are saved in a PDF file. The main advantage of HTSplotter over other available tools is the automatic analysis of genetic-chemical perturbagen screens and real-time assays where growth rate and perturbagen effect results are plotted over time. In conclusion, HTSplotter allows for the automatic end-to-end data processing, analysis and visualisation of various high-throughput in vitro cell culture screens, offering major improvements in terms of versatility, efficiency and time over existing tools.
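The Bliss independence (BI) criterion named above reduces to a one-line formula: with fractional single-agent effects e_a and e_b, the expected combination effect is e_a + e_b - e_a*e_b, and the observed excess over it is read as synergy. A minimal sketch with illustrative numbers:

```python
def bliss_excess(e_a: float, e_b: float, e_observed: float) -> float:
    """Observed effect minus the Bliss-independence expectation (>0 suggests synergy)."""
    expected = e_a + e_b - e_a * e_b
    return e_observed - expected

print(bliss_excess(e_a=0.30, e_b=0.40, e_observed=0.70))  # 0.70 - 0.58 = 0.12
```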
20

Ploux, Sabine. "Modélisation et Traitement Informatique de la Synonymie". Lingvisticæ Investigationes. International Journal of Linguistics and Language Resources 21, no. 1 (January 1, 1997): 1–27. http://dx.doi.org/10.1075/li.21.1.02plo.

Abstract:
This paper deals with automatic structuring of semantic values in a dictionary of synonyms. The data we used were first extracted from seven published French dictionaries of synonyms and then merged to obtain the files we worked on. We explain here why a discrete mathematical representation of the synonymic relation is not sufficient to produce a semantic structure (one that represents the different meanings of a term but also their overlapping). Then we propose a continuous representation (using data analysis) that enables the machine to produce for each term its semantic values. The system also labels these values with prototypic synonyms and detects synonyms that share different semantic "axes" with the headword. It should be noted that these semantic spaces are obtained automatically for each headword from a homogeneous list of synonyms.
21

Barabanschikov, V. A., and A. V. Zhegallo. "Dynamics of Key Facial Points as an Indicator of the Credibility of Reported Information". Experimental Psychology (Russia) 14, no. 2 (2021): 101–12. http://dx.doi.org/10.17759/exppsy.2021140207.

Abstract:
This research describes a method for studying the authenticity/unauthenticity of the information reported by people in video images. It is based on automatic tracking of the coordinates of key points of a speaker’s face using OpenFace software. When processing the data, the multiple linear regression procedure is used. It was found that the dynamics of neighboring key points in the obtained models has a multidirectional character, indicating the presence of a superposition of several dynamic structures corresponding to the characteristic complex changes in the face position and facial expressions of the sitter. Their isolation is realized by means of principal component analysis. It is shown that the first 11 principal components describe 99.7% of the variability of the initial data. The correlation analysis between the number of credibility/confidence statements on the set of time intervals and the principal component loadings allows us to differentiate the dynamic structures of the face connected with the assessments of credibility of the reported information. Automated analysis of face dynamics optimizes the process of collecting empirical data on the sitter’s appearance and their semantic structuring, as well as expands the range of predictors of the assessments of the truthfulness of the messages received.
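A minimal sketch of the principal-component step described above; the frame-by-frame (x, y) key-point coordinates below are random placeholders rather than OpenFace output, and the 11-component choice simply mirrors the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 2 * 68))  # 200 frames, 68 key points with (x, y) each

pca = PCA(n_components=11)
scores = pca.fit_transform(frames)          # per-frame component loadings
print(pca.explained_variance_ratio_.sum())  # variance captured by the 11 components
```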
22

Acampa, Giovanna, and Alessio Pino. "A Simplified Facility Management Tool for Condition Assessment through Economic Evaluation and Data Centralization: Branch to Core". Sustainability 15, no. 8 (April 10, 2023): 6418. http://dx.doi.org/10.3390/su15086418.

Abstract:
The field of facility management, especially concerning condition assessment, is affected by two main issues: one is the incompleteness and heterogeneity of information transfer between the involved subjects; the other is the frequent lack of specific advanced skills needed for technically complex tools. The immediate consequences of this process inefficiency fall on economic and environmental aspects: the unavailability or incorrect structuring of data related to building conditions does not allow for making optimal choices concerning interventions on components. This paper attempts to provide a solution in this framework by presenting a methodology for simplified condition assessment, in which the evaluation of decay parameters draws from economic evaluation techniques, and which optimizes data collection, systematization, and elaboration, also integrating it with a mobile app for automatic data upload and centralization. The research underlying its development draws from decay evaluation criteria and national standards for the analysis and breakdown of buildings. The methodology was tested on a case study of the Cloister of Santa Croce in Florence, which also served as the client of the tool. The proposed methodology stands as an easily implementable integration to condition assessment for maintenance planning and building inspection activities.
23

Cascella, Marco, Daniela Schiavo, Arturo Cuomo, Alessandro Ottaiano, Francesco Perri, Renato Patrone, Sara Migliarelli, Elena Giovanna Bignami, Alessandro Vittori, and Francesco Cutugno. "Artificial Intelligence for Automatic Pain Assessment: Research Methods and Perspectives". Pain Research and Management 2023 (June 28, 2023): 1–13. http://dx.doi.org/10.1155/2023/6018736.

Abstract:
Although proper pain evaluation is mandatory for establishing the appropriate therapy, self-reported pain level assessment has several limitations. Data-driven artificial intelligence (AI) methods can be employed for research on automatic pain assessment (APA). The goal is the development of objective, standardized, and generalizable instruments useful for pain assessment in different clinical contexts. The purpose of this article is to discuss the state of the art of research and perspectives on APA applications in both research and clinical scenarios. Principles of AI functioning will be addressed. For narrative purposes, AI-based methods are grouped into behavioral-based approaches and neurophysiology-based pain detection methods. Since pain is generally accompanied by spontaneous facial behaviors, several approaches for APA are based on image classification and feature extraction. Language features through natural language strategies, body postures, and respiratory-derived elements are other investigated behavioral-based approaches. Neurophysiology-based pain detection is obtained through electroencephalography, electromyography, electrodermal activity, and other biosignals. Recent approaches involve multimode strategies by combining behaviors with neurophysiological findings. Concerning methods, early studies were conducted by machine learning algorithms such as support vector machine, decision tree, and random forest classifiers. More recently, artificial neural networks such as convolutional and recurrent neural network algorithms are implemented, even in combination. Collaboration programs involving clinicians and computer scientists must be aimed at structuring and processing robust datasets that can be used in various settings, from acute to different chronic pain conditions. Finally, it is crucial to apply the concepts of explainability and ethics when examining AI applications for pain research and management.
24

La Russa, F. M., E. Grilli, F. Remondino, C. Santagati, and M. Intelisano. "ADVANCED 3D PARAMETRIC HISTORIC CITY BLOCK MODELING COMBINING 3D SURVEYING, AI AND VPL". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-2-2023 (June 24, 2023): 903–10. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-2-2023-903-2023.

Abstract:
Abstract. The presented research aims to define a parametric modelling methodology that allows, in short time and at a sustainable cost, the digital acquisition, modelling and semantic structuring of urban city blocks to facilitate 3D city modelling applied to historic centres. The methodology is based on field surveying and derives 3D data for the realisation of a parametric City Information Model (CIM). This is pursued through the adoption of parametric modelling as main method combined with AI procedures like supervised machine learning. In particular, the Visual Programming Language (VPL) Grasshopper is adopted as main working environment. The methodology proposed, called Scan-to-CIM, is developed to automate the cognitive operations of interpretation and input of surveying data performed in the field in order to create LoD4 city block models in a semi-automatic way. The proposed Scan-to-CIM methodology is applied to a city block located in the historic centre of Catania, Italy.
25

Adanza Dopazo, Daniel, Lamine Mahdjoubi, and Bill Gething. "A Method to Enable Automatic Extraction of Cost and Quantity Data from Hierarchical Construction Information Documents to Enable Rapid Digital Comparison and Analysis". Buildings 13, no. 9 (September 8, 2023): 2286. http://dx.doi.org/10.3390/buildings13092286.

Abstract:
Context: Despite the effort put into developing standards for structuring construction costs and the strong interest in the field, most construction companies still perform the process of data gathering and processing manually. This provokes inconsistencies, different criteria when classifying, misclassifications, and the process becomes very time-consuming, particularly in large projects. Additionally, the lack of standardization makes cost estimation and comparison tasks very difficult. Objective: The aim of this work was to create a method to extract and organize construction cost and quantity data into a consistent format and structure to enable rapid and reliable digital comparison of the content. Methods: The approach consisted of a two-step method: firstly, the system implemented data mining to review the input document and determine how it was structured based on the position, format, sequence, and content of descriptive and quantitative data. Secondly, the extracted data were processed and classified with a combination of data science and experts’ knowledge to fit a common format. Results: A large variety of information coming from real historical projects was successfully extracted and processed into a common format with 97.5% accuracy using a subset of 5770 assets located on 18 different files, building a solid base for analysis and comparison. Conclusions: A robust and accurate method was developed for extracting hierarchical project cost data to a common machine-readable format to enable rapid and reliable comparison and benchmarking.
26

Ruiz-Diaz, C. M., J. A. Gómez-Camperos, and M. M. Hernández-Cely. "Flow pattern identification of liquid-liquid (oil and water) in vertical pipelines using machine learning techniques". Journal of Physics: Conference Series 2163, no. 1 (January 1, 2022): 012001. http://dx.doi.org/10.1088/1742-6596/2163/1/012001.

Abstract:
Abstract Given the importance of process control in the petrochemical industry, there is a need to determine the behavior of the fluids inside the pipes. In this work a methodology is developed for the identification of flow patterns in vertical pipes with diameters between 0.01 m and 0.10 m, from the implementation of artificial intelligence techniques, for a liquid combination of two phases composed of oil with viscosity in the range of 792 Kg/m3 to 1823 Kg/m3 and water at room temperature. The predictive models generated in the structuring of the methodology were trained with 70% of data based on viscosity parameters, pipe diameter, volume fraction and surface velocities of the working fluids stored in a database. The remaining information, equivalent to 30% of the total, was used to develop the automatic model validation. The flow patterns identified by the intelligent system for oil and water flow, without considering the predominant substance, are churning, dispersed, very fine dispersion, transition flow, intermittent, and annular
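A minimal scikit-learn sketch of the 70/30 training scheme described above; the feature matrix (standing in for viscosity, pipe diameter, volume fraction and superficial velocities) and the flow-pattern labels are random placeholders, and the classifier choice is illustrative rather than the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))        # placeholder feature rows
y = rng.integers(0, 3, size=300)      # placeholder flow-pattern labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```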
27

Li, Sixuan, Liang Zhang, Shiwei Liu, Richard Hubbard, and Hui Li. "Surveillance of Noncommunicable Disease Epidemic Through the Integrated Noncommunicable Disease Collaborative Management System: Feasibility Pilot Study Conducted in the City of Ningbo, China". Journal of Medical Internet Research 22, no. 7 (July 23, 2020): e17340. http://dx.doi.org/10.2196/17340.

Abstract:
Background Noncommunicable diseases (NCDs) have become the main public health concern worldwide. With rapid economic development and changes in lifestyles, the burden of NCDs in China is increasing dramatically every year. Monitoring is a critical measure for NCDs control and prevention. However, because of the lack of regional representativeness, unsatisfactory data quality, and inefficient data sharing and utilization, the existing surveillance systems and surveys in China cannot track the status and transition of NCDs epidemic. Objective To efficaciously track NCDs epidemic in China, this pilot program conducted in Ningbo city by the Chinese Center for Disease Control and Prevention (CDC) aimed to develop an innovative model for NCDs surveillance and management: the integrated noncommunicable disease collaborative management system (NCDCMS). Methods This Ningbo model was designed and developed through a 3-level (county/district, municipal, and provincial levels) direct reporting system based on the regional health information platform. The uniform data standards and interface specifications were established to connect different platforms and conduct data exchanges. The performance of the system was evaluated based on the 9 attributes of surveillance system evaluation framework recommended by the US CDC. Results NCDCMS allows automatic NCDs data exchanging and sharing via a 3-level public health data exchange platform in China. It currently covers 201 medical institutions throughout the city. Compared with previous systems, automatic popping up of the report card, automatic patient information extraction, and real-time data exchange process have highly improved the simplicity and timeliness of the system. The data quality meets the requirements to monitor the incidence trend of NCDs accurately, and the comprehensive data types obtained from the database (ie, directly from the 3-level platform on the data warehouse) also provide a useful information to conduct scientific studies. So far, 98.1% (201/205) of medical institutions across Ningbo having been involved in data exchanges with the model. Evaluations of the system performance showed that NCDCMS has high levels of simplicity, data quality, acceptability, representativeness, and timeliness. Conclusions NCDCMS completely reshaped the process of NCD surveillance reporting and had unique advantages, which include reducing the work burden of different stakeholders by data sharing and exchange, eliminating unnecessary redundancies, reducing the amount of underreporting, and structuring population-based cohorts. The Ningbo model will be gradually promoted elsewhere following this success of the pilot project, and is expected to be a milestone in NCDs surveillance, control, and prevention in China.
28

Shamaeva, Ekaterina F., and Anna K. Perevozchikova. "Analytical situational center of the region in the context of energy-ecological, socio-economic and infrastructural indicators". Geoinformatika, no. 3 (September 29, 2023): 81–92. http://dx.doi.org/10.47148/1609-364x-2023-3-81-92.

Abstract:
Today, geo-information systems play a special role in regional and industry management, which serve as an effective tool for modeling regional systems. This work presents the results of the development of a situation center, the task of which is to store and update data on the state of regional systems. The situation center serves as a convenient tool for structuring and presenting data. The information base for the formation of an array of initial data is the Federal State Statistics Service, from where statistical data on multi-level regional systems were uploaded in the context of energy-environmental, socio-economic and infrastructure indicators. The results of the study make it possible to present the analytical situational center as a system for managing a database of initial and analytical data on energy-environmental, socio-economic and infrastructure indicators, including a built-in computational and analytical module. The developed tool helps to reduce the time to search for the necessary indicator, builds graphs and maps, calculates the pace of a particular indicator. In addition, the tool can be constantly improved by adding new groups of indicators, infographics and automatic calculations. The study findings represent a contribution to understanding the practice of using spatial analysis tools. Work is performed within GUU grant (research № 1002-23).
29

Moutinho, Thomas J., Benjamin C. Neubert, Matthew L. Jenior, and Jason A. Papin. "Quantifying cumulative phenotypic and genomic evidence for procedural generation of metabolic network reconstructions". PLOS Computational Biology 18, no. 2 (February 7, 2022): e1009341. http://dx.doi.org/10.1371/journal.pcbi.1009341.

Abstract:
Genome-scale metabolic network reconstructions (GENREs) are valuable tools for understanding microbial metabolism. The process of automatically generating GENREs includes identifying metabolic reactions supported by sufficient genomic evidence to generate a draft metabolic network. The draft GENRE is then gapfilled with additional reactions in order to recapitulate specific growth phenotypes as indicated with associated experimental data. Previous methods have implemented absolute mapping thresholds for the reactions automatically included in draft GENREs; however, there is growing evidence that integrating annotation evidence in a continuous form can improve model accuracy. There is a need for flexibility in the structure of GENREs to better account for uncertainty in biological data, unknown regulatory mechanisms, and context-specificity associated with data inputs. To address this issue, we present a novel method that provides a framework for quantifying combined genomic, biochemical, and phenotypic evidence for each biochemical reaction during automated GENRE construction. Our method, Constraint-based Analysis Yielding reaction Usage across metabolic Networks (CANYUNs), generates accurate GENREs with a quantitative metric for the cumulative evidence for each reaction included in the network. The structuring of CANYUNs allows for the simultaneous integration of three data inputs while maintaining all supporting evidence for biochemical reactions that may be active in an organism. CANYUNs is designed to maximize the utility of experimental and annotation datasets and to ultimately assist in the curation of the reference datasets used for the automatic construction of metabolic networks. We validated CANYUNs by generating an E. coli K-12 model and compared it to the manually curated reconstruction iML1515. Finally, we demonstrated the use of CANYUNs to build a model by generating an E. coli Nissle CANYUNs model using novel phenotypic data that we collected. This method may address key challenges for the procedural construction of metabolic networks by leveraging uncertainty and redundancy in biological data.
30

Rogushina, J. V. "Fuzzy data in semantic Wiki-resources: models, sources and processing methods". PROBLEMS IN PROGRAMMING, no. 2 (June 2023): 67–83. http://dx.doi.org/10.15407/pp2023.02.067.

Abstract:
We analyze the main types of dirty data processed by intelligent information systems, criteria of data classification, and means of detecting non-classical properties of data. The results of this analysis are represented by an ontological model that contains a taxonomy of classical and non-classical data and knowledge-oriented methods of their transformation. Special attention is paid to semantically incorrect data that corresponds to vague knowledge. This ontological model is intended to provide more effective methods for transforming raw data into smart data suitable for automatic analysis, knowledge acquisition and reuse in other information systems. The ontological approach provides integration of the proposed model with other external ontologies that formalize characteristics of various methods and software tools that can be used for data analysis (data mining, inductive inference, semantic queries, and instrumental tools for testing various aspects of ontology quality, etc.). The work uses the experience of developing the knowledge base of the portal version of the Great Ukrainian Encyclopedia e-VUE. This information resource is based on semantic Wiki technology; it has a large volume and a complex structure and contains a large number of heterogeneous information objects. Wiki resources are interesting from the point of view of collaborative processing of the fuzzy data that describe heterogeneous information objects and knowledge structures. Because the creation of this information resource involves a large number of specialists from various scientific fields, who have different areas of expertise and qualifications in the use of knowledge-oriented technologies, there are many differences in the understanding of the rules for presenting and structuring data, and therefore a significant part of the Encyclopedia content needs additional verification of its correctness. Therefore, we need formalized and scalable solutions for detecting and processing various types of inconsistency, incompleteness and semantic incorrectness of data. The proposed approach can be useful for the creation of other large-scale resources based on both semantic Wiki technology and other technological platforms for collaborative processing of distributed data and knowledge.
31

Rysová, Magdaléna. "Studying text coherence in Czech – a corpus-based analysis". Topics in Linguistics 18, no. 2 (December 20, 2017): 36–47. http://dx.doi.org/10.1515/topling-2017-0009.

Abstract:
Abstract The paper deals with the field of Czech corpus linguistics and represents one of various current studies analysing text coherence through language interactions. It presents a corpus-based analysis of grammatical coreference and sentence information structure (in terms of contextual boundness) in Czech. It focuses on examining the interaction of these two language phenomena and observes where they meet to participate in text structuring. Specifically, the paper analyses contextually bound and non-bound sentence items and examines whether (and how often) they are involved in relations of grammatical coreference in Czech newspaper articles. The analysis is carried out on the language data of the Prague Dependency Treebank (PDT) containing 3,165 Czech texts. The results of the analysis are helpful in automatic text annotation - the paper presents how (or to what extent) the annotation of grammatical coreference may be used in automatic (pre-)annotation of sentence information structure in Czech. It demonstrates how accurately we may (automatically) assume the value of contextual boundness for the antecedent and anaphor (as the two participants of a grammatical coreference relation). The results of the paper demonstrate that the anaphor of grammatical coreference is automatically predictable - it is a non-contrastive contextually bound sentence item in 99.18% of cases. On the other hand, the value of contextual boundness of the antecedent is not so easy to estimate (according to the PDT, the antecedent is contextually non-bound in 37% of cases, non-contrastive contextually bound in 50% and contrastive contextually bound in 13% of cases).
32

Czyzewski, Andrzej. "Optimizing medical personnel speech recognition models using speech synthesis and reinforcement learning". Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A202–A203. http://dx.doi.org/10.1121/10.0023271.

Abstract:
Text-to-Speech synthesis (TTS) can be used to generate training data for building Automatic Speech Recognition (ASR) models. Access to medical speech data is limited because it is sensitive data that is difficult to obtain for privacy reasons. Speech can be synthesized by mimicking different accents, dialects, and speaking styles in a medical language. Reinforcement Learning (RL), in the context of ASR, can be used to optimize a model. A model can be trained to minimize errors in speech-to-text transcription, especially for technical medical terminology. In this case, the “reward” to the RL model can be negatively proportional to the number of transcription errors. The paper presents a method and experimental study from which it is concluded that the combination of TTS and RL can enable the creation of a speech recognition model suited to the specific needs of medical personnel, helping to expand the training data and optimize the model to minimize transcription errors. The learning process used reward functions based on Mean Opinion Score (MOS), a subjective metric for assessing speech quality, and Word Error Rate (WER), which evaluates the quality of speech-to-text transcription. [The Polish National Center for Research and Development (NCBR) supported the project: “ADMEDVOICE - Adaptive intelligent speech processing system of medical personnel with the structuring of test results and support of therapeutic process,” no. INFOSTRATEG4/0003/2022.]
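The Word Error Rate (WER) used above as part of the reward is the word-level edit distance between reference and hypothesis divided by the reference length; a minimal sketch with illustrative transcripts:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("patient denies chest pain", "patient denies chess pain"))  # 0.25
```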
33

Sbrollini, Agnese, Chiara Leoni, Micaela Morettini, Massimo W. Rivolta, Cees A. Swenne, Luca Mainardi, Laura Burattini, and Roberto Sassi. "Identification of Electrocardiographic Patterns Related to Mortality with COVID-19". Applied Sciences 14, no. 2 (January 18, 2024): 817. http://dx.doi.org/10.3390/app14020817.

Abstract:
COVID-19 is an infectious disease that has greatly affected worldwide healthcare systems, due to the high number of cases and deaths. As COVID-19 patients may develop cardiac comorbidities that can be potentially fatal, electrocardiographic monitoring can be crucial. This work aims to identify electrocardiographic and vectorcardiographic patterns that may be related to mortality in COVID-19, with the application of the Advanced Repeated Structuring and Learning Procedure (AdvRS&LP). The procedure was applied to data from the “automatic computation of cardiovascular arrhythmic risk from electrocardiographic data of COVID-19 patients” (COVIDSQUARED) project to obtain neural networks (NNs) that, through 254 electrocardiographic and vectorcardiographic features, could discriminate between COVID-19 survivors and deaths. The NNs were validated by a five-fold cross-validation procedure and assessed in terms of the area under the curve (AUC) of the receiver operating characteristic. The features’ contribution to the classification was evaluated through the Local-Interpretable Model-Agnostic Explanations (LIME) algorithm. The obtained NNs properly discriminated between COVID-19 survivors and deaths (AUC = 84.31 ± 2.58% on hold-out testing datasets); the classification was mainly affected by the electrocardiographic-interval-related features, thus suggesting that changes in the duration of cardiac electrical activity might be related to mortality in COVID-19 cases.
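A minimal sketch of the evaluation protocol described above: five-fold cross-validation scored by the area under the ROC curve. The 254-column feature matrix is a random placeholder rather than electrocardiographic data, and the classifier is a stand-in, not the paper's neural networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 254))     # placeholder feature matrix (254 features, as above)
y = rng.integers(0, 2, size=200)    # placeholder survivor/death labels

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc")
print(auc.mean(), auc.std())
```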
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Rervyatskyy, I. Yu. "A relational database with pharmaceutical information: problems of creation and primary filling for provision of a qualitative statistical processing". Farmatsevtychnyi zhurnal, n.º 4 (10 de setembro de 2019): 23–31. http://dx.doi.org/10.32352/0367-3057.4.19.03.

Texto completo da fonte
Resumo:
Pharmaceutical information is usually not disclosed in a way that allows further automated processing of the data by independent experts; accordingly, the chosen methods of presentation aim to optimize visual perception by users. The aim of the work was to analyze the availability, in internet sources, of information in a format suitable for automatic filling of a relational database. The subjects of the study were: ATC; ICD-10 (International Classification of Diseases); the content of EF 9.8; and the classifications of dosage forms by the Ministry of Health of Ukraine, the FDA (Food and Drug Administration, USA), the EMA (European Medicines Agency) and EphMRA (European Pharmaceutical Market Research Association). The methods used were a review of internet information with the corresponding structure and processing of the information found using computer code. Approbation was carried out on the basis of the digital online system «Likypedia» (http://likypedia.zzz.com.ua; http://facebook.com/likypedia). The basic goals for the quality of the results of statistical processing of pharmaceutical information are formulated. To achieve them, a list of information required for the initial loading of the relational database is defined, and the sources of this information are presented. The multicomponent record of pharmaceutical information is described. The list of dosage forms was formed on the basis of the author's practical experience, trends in how manufacturers label medicine packaging, electronic databases of pharmaceutical wholesalers, and the titles of articles given in the EF 9.8 edition. Information from different sources about pharmaceutical dosage forms was analyzed and divided into five information blocks: the dosage form made by the manufacturer; its characteristics; the dosage form to be prepared from the one produced by the manufacturer; its characteristics; and the way of using the dosage form. A way of structuring the information record of the drug dose in the relational database, in several variants, has been developed and presented, which makes it possible to carry out automated calculations and to optimize the selection and sorting of data.
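The multicomponent dose record discussed above can be illustrated with a hypothetical relational layout in which the numeric value, the unit and the substance are kept in separate columns, so that automated calculation, selection and sorting remain possible; the table and column names below are invented for illustration and do not reproduce the Likypedia schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE substance (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE dosage_form (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL              -- e.g. 'tablet', 'oral solution'
);
CREATE TABLE dose (
    id             INTEGER PRIMARY KEY,
    substance_id   INTEGER NOT NULL REFERENCES substance(id),
    dosage_form_id INTEGER NOT NULL REFERENCES dosage_form(id),
    value          REAL NOT NULL,   -- numeric part kept separate ...
    unit           TEXT NOT NULL    -- ... from the unit, to allow calculations
);
""")
conn.execute("INSERT INTO substance VALUES (1, 'ibuprofen')")
conn.execute("INSERT INTO dosage_form VALUES (1, 'tablet')")
conn.execute("INSERT INTO dose VALUES (1, 1, 1, 200.0, 'mg'), (2, 1, 1, 400.0, 'mg')")

# Automated processing becomes straightforward once value and unit are separate:
for row in conn.execute("""
        SELECT s.name, d.unit, SUM(d.value)
        FROM dose d JOIN substance s ON s.id = d.substance_id
        GROUP BY s.name, d.unit"""):
    print(row)      # ('ibuprofen', 'mg', 600.0)
```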
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Konovalov, V. A. "MODEL OF ILLEGAL ECONOMIC ACTIVITY IN ANTI-MONEY LAUNDERING PROCEEDS FROM CRIME". Vestnik komp'iuternykh i informatsionnykh tekhnologii, n.º 228 (junho de 2023): 41–53. http://dx.doi.org/10.14489/vkit.2023.06.pp.041-053.

Texto completo da fonte
Resumo:
In the interests of developing the theoretical provisions of the methodology for classifying the typologies of the risks of laundering proceeds from crime and financing of terrorism in big data from a variety of data sources of organizational systems, the signs of illegal economic activity are studied. A reference model of illegal economic activity of organizational systems is theoretically substantiated and synthesized. This model considers four stages of possible illegal economic activity of organizational systems, designated “institutions, organizations, structuring and actions”. The structure and composition of the elements of these stages have been analyzed. For the establishment stage, the data sources necessary for counteraction were analyzed and the signs of illegal economic activity characteristic of this stage were identified. For the organization stage, it has been established that an additional data source containing data on the competence of the head of the organizational system must be used. It was also found that this stage is the most secretive, so the management activity of the systems in their telecommunications environment must be analyzed further. The alphabet of signs of money laundering risks, presented in a linguistic, categorical form, is considered. Events in fragments of interactions of organizational systems are synthesized and scientifically substantiated. The categorical alphabet containing letters similar to the letters of the alphabet is considered and formalized. Models of individual typologies that form a generalized mathematical model of the typology of the risk of laundering proceeds from crime and the financing of terrorism have been scientifically substantiated and synthesized. Among these models, two have been identified that provide automatic classification of typologies of individual risks and selection of words in the Markov alphabet A±2 denoting objects. It is concluded that the categorical alphabets provide a classification of typologies of individual risks of laundering proceeds from crime and financing of terrorism in the big data of organizational systems in an automatic mode. The classification of typologies is possible at all stages of illegal activity; for this, it is necessary to use several sources of big data.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Appazov, Eduard, Dmytro Krugliy, Serhii Zinchenko e Pavlo Nosov. "Choice of the Fractal Method For Visualization of Input Data While Designing Support Systems for Decision-Making by Navigator". Science and Innovation 17, n.º 5 (12 de outubro de 2021): 63–72. http://dx.doi.org/10.15407/scine17.05.063.

Texto completo da fonte
Resumo:
Introduction. The constant increase in the amount and intensity of traffic requires organization and precise management. Problem Statement. In present-day conditions, with the number of vessels engaged on internal and external routes growing, the vessel driver/navigator alone is not physically able to assess the navigation situation and make the right decision on how to operate the vessel. Developing and implementing algorithms that help address the issue of navigation safety is therefore an important task, especially for the management of groups of vessels. The main approach that allows generalizing information flows to ensure continuous and safe navigation is the formation of a structured system for processing and evaluating input factors and the related output parameters. This enables control of the vessel's ergatic system, given a significant number of factors. Purpose. The purpose of this research is to create new approaches to controlling the vessel ergatic system for making optimal and timely decisions. Materials and Methods. Fractal methods for representation of the primary information and applied computer programs for mathematical simulation have been used. Results. The proposed model of information processing as part of the vessel ergatic system is designed to comprehensively ensure the safety of vessels, while providing control and optimization of both operational and organizational parameters and diagnostic functions, with the ability to predict and prevent failures of the vessel engineering system. Conclusions. The applicability of general algorithms for processing information and structuring it according to degree of impact has been shown. The application of these approaches solves the problem of overloading the navigator with excessive navigational information and reduces decision-making time. The developed algorithm allows creating an automatic control system for groups of vessels in real conditions of a difficult navigation environment.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

KHEMAKHEM, AIDA, BILEL GARGOURI, ABDELMAJID BEN HAMADOU e GIL FRANCOPOULO. "ISO standard modeling of a large Arabic dictionary". Natural Language Engineering 22, n.º 6 (7 de setembro de 2015): 849–79. http://dx.doi.org/10.1017/s1351324915000224.

Texto completo da fonte
Resumo:
Abstract. In this paper, we address the problem of large-coverage dictionaries of the Arabic language usable both for direct human reading and for automatic Natural Language Processing. For these purposes, we propose a normalized and implemented modeling, based on the Lexical Markup Framework (LMF, ISO 24613) and the Data Category Registry (DCR, ISO 12620), which allows stable and well-defined interoperability of lexical resources through a unification of the linguistic concepts. Starting from the features of the Arabic language, and because a large range of details and refinements need to be described specifically for Arabic, we follow a fine structuring strategy. Besides its richness in morphological, syntactic and semantic knowledge, our model includes all the Arabic morphological patterns to generate the inflected forms from a given lemma and highlights the syntactic–semantic relations. In addition, an appropriate codification has been designed for managing all types of relationships among lexical entries and their related knowledge. According to this model, a dictionary named El Madar has been built and is now publicly available online. The data are managed by a user-friendly Web-based lexicographical workstation. This work has not been done in isolation, but is the result of a collaborative effort by an international team, mainly within the ISO network, over a period of eight years.
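A drastically simplified illustration of an LMF-style lexical entry (lemma, one inflected word form, one sense) can be serialized with Python's standard library as below; the feat/att/val convention follows the commonly used LMF serialization, but the element content, the Arabic example values and the flat structure are illustrative assumptions and do not reproduce the El Madar model.

```python
import xml.etree.ElementTree as ET

def feat(parent, att, val):
    """Attach an LMF-style feature (att/val pair) to a node."""
    ET.SubElement(parent, "feat", att=att, val=val)

entry = ET.Element("LexicalEntry")
feat(entry, "partOfSpeech", "noun")

lemma = ET.SubElement(entry, "Lemma")
feat(lemma, "writtenForm", "kitāb")            # illustrative lemma ('book')

word_form = ET.SubElement(entry, "WordForm")   # one inflected form
feat(word_form, "writtenForm", "kutub")
feat(word_form, "grammaticalNumber", "plural")

sense = ET.SubElement(entry, "Sense")
feat(sense, "definition", "book")

print(ET.tostring(entry, encoding="unicode"))
```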
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Irfan, Irfan, Sofendi Sofendi e Machdalena Vianty. "Technological knowledge application on academic writing English education study program students". English Review: Journal of English Education 9, n.º 1 (15 de dezembro de 2020): 157–66. http://dx.doi.org/10.25134/erjee.v9i1.3788.

Texto completo da fonte
Resumo:
Technological knowledge plays a role in academic writing, such as assisting in finding suitable references, checking plagiarism, and publishing the article. However, technological knowledge does not always provide benefits in academic writing. Technological knowledge may affect writers’ mentality, encouraging them to take shortcuts in finishing and checking their writing. The objectives of this study were: (1) to find out the technological knowledge level of English education study program students, (2) to find out how English education study program students applied their technological knowledge in academic writing, and (3) to find out the problems English education study program students encountered in applying their technological knowledge in academic writing. The study’s participants were 13 students from class B 2016 Palembang of the English Education Undergraduate Program, along with the most recent lecturer who taught them writing. This research used a descriptive qualitative design. The data were collected by questionnaire, observation, interview, and document gathering. Percentage calculation, transcribing, and triangulation were used to analyze the data. The findings showed that (1) the technological knowledge level of the participants is level two, Technical Maxim; (2) the participants applied technological knowledge in academic writing, particularly in finding references and structuring ideas; and (3) the participants have several problems in applying technological knowledge in academic writing, such as citing references correctly, avoiding the tendency to copy and paste, structural errors due to using automatic correction, and paper formatting.
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Hansen, Matthias, André Pomp, Kemal Erki e Tobias Meisen. "Data-Driven Recognition and Extraction of PDF Document Elements". Technologies 7, n.º 3 (11 de setembro de 2019): 65. http://dx.doi.org/10.3390/technologies7030065.

Texto completo da fonte
Resumo:
In the age of digitalization, the collection and analysis of large amounts of data is becoming increasingly important for enterprises to improve their businesses and processes, such as the introduction of new services or the realization of resource-efficient production. Enterprises concentrate strongly on the integration, analysis and processing of their data. Unfortunately, the majority of data analysis focuses on structured and semi-structured data, although unstructured data such as text documents or images account for the largest share of all available enterprise data. One reason for this is that most of this data is not machine-readable and requires dedicated analysis methods, such as natural language processing for analyzing textual documents or object recognition for recognizing objects in images. Especially in the latter case, the analysis methods depend strongly on the application. However, there are also data formats, such as PDF documents, which are not machine-readable and consist of many different document elements such as tables, figures or text sections. Although the analysis of PDF documents is a major challenge, they are used in all enterprises and contain various information that may contribute to analysis use cases. In order to enable their efficient retrievability and analysis, it is necessary to identify the different types of document elements so that we are able to process them with tailor-made approaches. In this paper, we propose a system that forms the basis for structuring unstructured PDF documents, so that the identified document elements can subsequently be retrieved and analyzed with tailor-made approaches. Due to the high diversity of possible document elements and analysis methods, this paper focuses on the automatic identification and extraction of data visualizations, algorithms, other diagram-like objects and tables from a mixed document body. For that, we present two different approaches. The first approach uses methods from the area of deep learning and rule-based image processing, whereas the second approach is purely based on deep learning. To train our neural networks, we manually annotated a large corpus of PDF documents with our own annotation tool; both the corpus and the tool are being published together with this paper. The results of our extraction pipeline show that we are able to automatically extract graphical items with a precision of 0.73 and a recall of 0.8. For tables, we reach a precision of 0.78 and a recall of 0.94.
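The precision and recall figures quoted above can be computed, in the simplest setting, by matching predicted element bounding boxes against manually annotated ones with an intersection-over-union threshold; the sketch below assumes boxes given as (x1, y1, x2, y2) and a 0.5 threshold, neither of which is taken from the paper.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(predicted, annotated, threshold=0.5):
    """Greedy one-to-one matching of predicted against annotated boxes."""
    unmatched, tp = list(annotated), 0
    for p in predicted:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= threshold:
            tp += 1
            unmatched.remove(best)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    return precision, recall

# Toy example: two annotated figures, one detected correctly plus one spurious box.
print(precision_recall([(0, 0, 100, 80), (300, 300, 320, 310)],
                       [(2, 3, 98, 79), (150, 0, 250, 60)]))   # (0.5, 0.5)
```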
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Gebauer, Jana, Florian Gruber, Wilhelm Holfeld, Wulf Grählert e Andrés Fabián Lasagni. "Prediction of the Quality of Thermally Sprayed Copper Coatings on Laser-Structured CFRP Surfaces Using Hyperspectral Imaging". Photonics 9, n.º 7 (21 de junho de 2022): 439. http://dx.doi.org/10.3390/photonics9070439.

Texto completo da fonte
Resumo:
With the progressive replacement of metallic parts by high-performance fiber-reinforced plastic (FRP) components, typical properties of metals are required to be placed on the material’s surface. A metallic coating applied to the FRP surface by thermal spraying, for instance, can fulfill these requirements, including electrical conductivity. In this work, laser pre-treatments are utilized for increasing the bond strength of metallic coatings. However, due to the high-precision material removal using pulsed laser radiation, the production-related heterogeneous fiber distribution in FRP leads to variations in the structuring result and consequently to different qualities of the subsequent coating. In this study, hyperspectral imaging (HSI) technologies in conjunction with deep learning were applied to carbon fiber-reinforced plastics (CFRP) structured by nanosecond pulsed laser. HSI-based prediction models could be developed, which allow for reliable prediction, with an accuracy of around 80%, of which laser-treated areas will successfully be coated and which will not. By using this objective and automatic evaluation, it is possible to avoid large amounts of rejects before further processing the parts and also to optimize the adhesion of coatings. Spatially resolved data enables local reworking during the laser process, making it feasible for the manufacturing process to achieve zero waste.
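The prediction step described above can be sketched generically as a per-pixel classification of a hyperspectral cube; the cube dimensions, the labels and the random-forest classifier below are placeholders chosen for a self-contained example and do not reproduce the deep-learning models used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic hyperspectral cube: height x width x spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 120))
labels = rng.integers(0, 2, size=(64, 64))     # 1 = area expected to coat well

X = cube.reshape(-1, cube.shape[-1])           # one spectrum per pixel
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prediction_map = clf.predict(X).reshape(labels.shape)   # spatially resolved output

print("pixel-wise agreement:", (prediction_map == labels).mean())
```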
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Chatterjee, Ayan, Andreas Prinz, Martin Gerdes e Santiago Martinez. "An Automatic Ontology-Based Approach to Support Logical Representation of Observable and Measurable Data for Healthy Lifestyle Management: Proof-of-Concept Study". Journal of Medical Internet Research 23, n.º 4 (9 de abril de 2021): e24656. http://dx.doi.org/10.2196/24656.

Texto completo da fonte
Resumo:
Background. Lifestyle diseases, because of adverse health behavior, are the foremost cause of death worldwide. An eCoach system may encourage individuals to lead a healthy lifestyle with early health risk prediction, personalized recommendation generation, and goal evaluation. Such an eCoach system needs to collect and transform distributed heterogeneous health and wellness data into meaningful information to train an artificially intelligent health risk prediction model. However, it may produce a data compatibility dilemma. Our proposed eHealth ontology can increase interoperability between different heterogeneous networks, provide situation awareness, help in data integration, and discover inferred knowledge. This “proof-of-concept” study will help sensor, questionnaire, and interview data to be more organized for health risk prediction and personalized recommendation generation targeting obesity as a study case. Objective. The aim of this study is to develop an OWL-based ontology (UiA eHealth Ontology/UiAeHo) model to annotate personal, physiological, behavioral, and contextual data from heterogeneous sources (sensor, questionnaire, and interview), followed by structuring and standardizing of diverse descriptions to generate meaningful, practical, personalized, and contextual lifestyle recommendations based on the defined rules. Methods. We have developed a simulator to collect dummy personal, physiological, behavioral, and contextual data related to artificial participants involved in health monitoring. We have integrated the concepts of “Semantic Sensor Network Ontology” and “Systematized Nomenclature of Medicine—Clinical Terms” to develop our proposed eHealth ontology. The ontology has been created using Protégé (version 5.x). We have used the Java-based “Jena Framework” (version 3.16) for building a semantic web application that includes resource description framework (RDF) application programming interface (API), OWL API, native tuple store (tuple database), and the SPARQL (Simple Protocol and RDF Query Language) query engine. The logical and structural consistency of the proposed ontology has been evaluated with the “HermiT 1.4.3.x” ontology reasoner available in Protégé 5.x. Results. The proposed ontology has been implemented for the study case “obesity.” However, it can be extended further to other lifestyle diseases. “UiA eHealth Ontology” has been constructed using logical axioms, declaration axioms, classes, object properties, and data properties. The ontology can be visualized with “Owl Viz,” and the formal representation has been used to infer a participant’s health status using the “HermiT” reasoner. We have also developed a module for ontology verification that behaves like a rule-based decision support system to predict the probability for health risk, based on the evaluation of the results obtained from SPARQL queries. Furthermore, we discussed the potential lifestyle recommendation generation plan against adverse behavioral risks. Conclusions. This study has led to the creation of a meaningful, context-specific ontology to model massive, unintuitive, raw, unstructured observations for health and wellness data (eg, sensors, interviews, questionnaires) and to annotate them with semantic metadata to create a compact, intelligible abstraction for health risk predictions for individualized recommendation generation.
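The rule-based verification step described in the abstract can be illustrated with a small SPARQL query over an RDF graph; the sketch below uses the Python rdflib package rather than the Java-based Jena framework mentioned by the authors, and the namespace, property names and BMI-based rule are invented for illustration, not part of UiAeHo.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ehealth#")    # illustrative namespace, not UiAeHo
g = Graph()
g.bind("ex", EX)

# A dummy participant with weight/height observations, as a simulator might produce.
g.add((EX.participant1, RDF.type, EX.Participant))
g.add((EX.participant1, EX.hasWeightKg, Literal(98.0)))
g.add((EX.participant1, EX.hasHeightM, Literal(1.72)))

# Rule-like SPARQL query: flag participants whose body mass index exceeds 30.
query = """
PREFIX ex: <http://example.org/ehealth#>
SELECT ?p ?bmi WHERE {
  ?p a ex:Participant ;
     ex:hasWeightKg ?w ;
     ex:hasHeightM  ?h .
  BIND (?w / (?h * ?h) AS ?bmi)
  FILTER (?bmi > 30)
}
"""
for row in g.query(query):
    print(f"{row.p} flagged, BMI = {float(row.bmi):.1f}")
```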
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Ouazzani, Sohaib, Arnaud Lemmers, Federico Martinez, Raphael Kindt, Olivier Le Moine, Myriam Delhaye, Marianna Arvanitakis, Pieter Demetter, Jacques Devière e Pierre Eisendrath. "Implementation of colonoscopy quality monitoring in a Belgian university hospital with integrated computer-based extraction of adenoma detection rate". Endoscopy International Open 09, n.º 02 (fevereiro de 2021): E197—E202. http://dx.doi.org/10.1055/a-1326-1179.

Texto completo da fonte
Resumo:
Abstract. Background and study aims. Quality in colonoscopy has been promoted in the last decade with the definition of different quality indicators (QI) as benchmarks. Currently, automated monitoring systems are lacking, especially for merging pathologic and endoscopic data, which limits the implementation of quality monitoring in daily practice. We describe an adapted endoscopy reporting system that allows continuous QI recording, with automatic inclusion of pathological data. Material and methods. We locally adapted a reporting system for colonoscopy by adding and structuring selected key QI in a dedicated tab. Endoscopic data from the reporting system and pathological results were extracted and merged in a separate database. During the initial period of use, performing physicians were encouraged to complete the dedicated tab on a voluntary basis. In a second stage, completion of the tab was made mandatory. The completeness of QI recording was evaluated across both periods. Performance measures for all endoscopists were compared to global results for the department and published targets. Results. During the second semester of 2017, a total of 1827 colonoscopies were performed with a QI tab completed in 100 % of cases. Among key QI, the cecal intubation rate was 93.8 %, the rate of colonoscopies with adequate preparation was 90.7 %, and the adenoma detection rate was 29.8 % considering all colonoscopies, irrespective of indication; 28.8 % considering screening procedures; and 36.6 % in colonoscopies performed in people older than age 50 years. Conclusion. This study shows that quality monitoring for colonoscopy can be easily implemented with limited human resources by adapting a reporting system and linking it to a pathology database.
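The merge step underlying the automatic adenoma detection rate extraction can be pictured as a join of endoscopy reports with pathology results on a procedure identifier; the column names and the pandas-based join below are assumptions made for illustration, not a description of the hospital's actual reporting system.

```python
import pandas as pd

endoscopy = pd.DataFrame({
    "procedure_id": [1, 2, 3, 4],
    "cecal_intubation": [True, True, False, True],
    "indication": ["screening", "screening", "symptoms", "screening"],
})
pathology = pd.DataFrame({
    "procedure_id": [1, 2, 4],          # procedure 3 had no specimen sent to pathology
    "adenoma_found": [True, False, True],
})

merged = endoscopy.merge(pathology, on="procedure_id", how="left")
merged["adenoma_found"] = merged["adenoma_found"].fillna(False).astype(bool)

adr_all = merged["adenoma_found"].mean()              # ADR over all colonoscopies
adr_screen = merged.loc[merged["indication"] == "screening", "adenoma_found"].mean()
cir = merged["cecal_intubation"].mean()               # cecal intubation rate

print(f"ADR {adr_all:.1%}, screening ADR {adr_screen:.1%}, CIR {cir:.1%}")
```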
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Pham, D. T., S. S. Dimov e R. M. Setchi. "Concurrent Engineering: a tool for collaborative working". Human Systems Management 18, n.º 3-4 (29 de dezembro de 1999): 213–24. http://dx.doi.org/10.3233/hsm-1999-183-406.

Texto completo da fonte
Resumo:
Global competition, customer-driven product customisation, accelerated product obsolescence and continued demands for cost savings are forcing companies to look for new ways of working. Technology advances alone are no longer sufficient to deliver the required improvements to compete and survive in this new environment. Companies need to revise their traditional technologies in a way that allows previously serial engineering tasks to be done concurrently and creates the needed pre-requisites for collaborative working. Concurrent Engineering can be regarded as a form of process re-engineering and as the main enabling technology ensuring efficient operation of distributed enterprises. Concurrency in performing different design and manufacturing activities presents an opportunity to compress the overall product development time whilst opening up opportunities to be creative by providing more time for design iterations. This paper describes three different applications of Concurrent Engineering methodology that facilitate collaborative working and sharing and re-use of distributed engineering data. These are: – an approach for structuring manufacturing information and maximising the information-carrying capacity of 3D CAD models; – a system for analysing 3D assembly models and extracting assembly related data required for automatic generation of assembly strategies; – an approach for developing product support systems. All applications have been developed within the framework of EC-funded projects, in particular: Brite-Euram project CT92–0158 “Advanced Manufacturing Information System for the Designer (AMANIS)”, INCO-Copernicus project CP94–0510 “Advanced Robot Assembly (ROBAS)”, INCO-Copernicus project CP96–0231 “Intelligent Product Manuals (ProManual)” and ERDF (Industrial South Wales) technology demonstration project “Intelligent Product Manuals for SMEs”.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Lasorella, M., e E. Cantatore. "3D MODELS CITYGML-BASED COMBINED WITH TECHNICAL DECISION SUPPORT SYSTEM FOR THE SETTING UP OF DIGITAL CONSERVATION PLANS OF HISTORIC DISTRICTS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-2-2023 (24 de junho de 2023): 911–18. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-2-2023-911-2023.

Texto completo da fonte
Resumo:
Abstract. The setting up of recovery plans for historic districts requires a multi-level and multi-thematic process for their analysis and diagnosis to determine classes of priorities and interventions for buildings at the district scale of relevance. Traditional tools and protocols have already highlighted operative complexity and expensive activities, affecting the organicity and effectiveness of data interpretation. On the other hand, recent scientific and practical activities based on the use of parametric Digital Models and Informative Systems have highlighted their advantages in standardizing complex issues and knowledge. Recent work by the authors has defined the structured organization of technical knowledge for the creation of a digital recovery plan using Informative Parametric Models, based on descriptors, and primary and secondary factors. These aim at converting properties and information into qualitative and quantitative data, and then structuring dependencies on descriptors and primary factors, according to thematic taxonomies, existing ontologies for the geometric and semantic representation of urban and architectural entities, thematic standards and regulations, and established approaches for the recovery of cultural and landscape heritage. Thus, the present work shows a workflow for the semi-automatic setting up of intervention classes for architectures in historic districts in Italy. It is structured on CityGML-based models, coherently implemented with a Technical Decision-Support System (T-DSS). Specifically, the T-DSS is determined considering the relations among thematic standards and regulations: UNI 11182, UNI/CEN TS 17385:2019, and the Italian Consolidated Law on Building. The workflow is finally tested in the historic district of Ascoli Satriano, in the Apulia Region (South of Italy).
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Movia, A., A. Beinat e T. Sandri. "LAND USE CLASSIFICATION FROM VHR AERIAL IMAGES USING INVARIANT COLOUR COMPONENTS AND TEXTURE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (21 de junho de 2016): 311–17. http://dx.doi.org/10.5194/isprs-archives-xli-b7-311-2016.

Texto completo da fonte
Resumo:
Very high resolution (VHR) aerial images can provide detailed analysis of landscape and environment; nowadays, thanks to the rapidly growing airborne data acquisition technology, an increasing number of high resolution datasets are freely available. In a VHR image the essential information is contained in the red-green-blue colour components (RGB) and in the texture; therefore a preliminary step in image analysis concerns classification, in order to detect pixels having similar characteristics and to group them into distinct classes. Common land use classification approaches use colour at a first stage, followed by texture analysis, particularly for the evaluation of landscape patterns. Unfortunately, RGB-based classifications are significantly influenced by image settings, such as contrast, saturation, and brightness, and by the presence of shadows in the scene. The classification methods analysed in this work aim to mitigate these effects. The procedures developed considered the use of invariant colour components, image resampling, and the evaluation of an RGB texture parameter for various increasing sizes of a structuring element. To identify the most efficient solution, the classification vectors obtained were then processed by a K-means unsupervised classifier using different metrics, and the results were compared with respect to corresponding user-supervised classifications. The experiments performed and discussed in the paper allow us to evaluate the effective contribution of texture information, and to compare the most suitable vector components and metrics for automatic classification of very high resolution RGB aerial images.
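A compact sketch of the classification stage discussed above builds per-pixel feature vectors from the colour components plus a simple texture measure computed over an increasing structuring-element size, and groups them with K-means; the local-standard-deviation texture measure and the window sizes below are placeholders, not the paper's actual descriptors or metrics.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def local_std(channel, size):
    """Standard deviation in a size x size window (a crude texture measure)."""
    mean = uniform_filter(channel, size)
    mean_sq = uniform_filter(channel ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))

rgb = np.random.default_rng(0).random((128, 128, 3))    # stand-in for a VHR tile

features = [rgb[..., c] for c in range(3)]              # colour components
for size in (3, 7, 15):                                 # growing structuring element
    features.append(local_std(rgb.mean(axis=-1), size))

X = np.stack(features, axis=-1).reshape(-1, len(features))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
label_map = labels.reshape(rgb.shape[:2])               # unsupervised land-use map
print(np.bincount(labels))
```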
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Movia, A., A. Beinat e T. Sandri. "LAND USE CLASSIFICATION FROM VHR AERIAL IMAGES USING INVARIANT COLOUR COMPONENTS AND TEXTURE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (21 de junho de 2016): 311–17. http://dx.doi.org/10.5194/isprsarchives-xli-b7-311-2016.

Texto completo da fonte
Resumo:
Very high resolution (VHR) aerial images can provide detailed analysis of landscape and environment; nowadays, thanks to the rapidly growing airborne data acquisition technology, an increasing number of high resolution datasets are freely available. In a VHR image the essential information is contained in the red-green-blue colour components (RGB) and in the texture; therefore a preliminary step in image analysis concerns classification, in order to detect pixels having similar characteristics and to group them into distinct classes. Common land use classification approaches use colour at a first stage, followed by texture analysis, particularly for the evaluation of landscape patterns. Unfortunately, RGB-based classifications are significantly influenced by image settings, such as contrast, saturation, and brightness, and by the presence of shadows in the scene. The classification methods analysed in this work aim to mitigate these effects. The procedures developed considered the use of invariant colour components, image resampling, and the evaluation of an RGB texture parameter for various increasing sizes of a structuring element. To identify the most efficient solution, the classification vectors obtained were then processed by a K-means unsupervised classifier using different metrics, and the results were compared with respect to corresponding user-supervised classifications. The experiments performed and discussed in the paper allow us to evaluate the effective contribution of texture information, and to compare the most suitable vector components and metrics for automatic classification of very high resolution RGB aerial images.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Mohorianu, Irina, e Vincent Moulton. "Revealing Biological Information Using Data Structuring and Automated Learning". Recent Patents on DNA & Gene Sequences 4, n.º 3 (1 de novembro de 2010): 181–91. http://dx.doi.org/10.2174/187221510794751668.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Jimeno Yepes, Antonio, e Karin Verspoor. "Mutation extraction tools can be combined for robust recognition of genetic variants in the literature". F1000Research 3 (10 de junho de 2014): 18. http://dx.doi.org/10.12688/f1000research.3-18.v2.

Texto completo da fonte
Resumo:
As the cost of genomic sequencing continues to fall, the amount of data being collected and studied for the purpose of understanding the genetic basis of disease is increasing dramatically. Much of the source information relevant to such efforts is available only from unstructured sources such as the scientific literature, and significant resources are expended in manually curating and structuring the information in the literature. As such, there have been a number of systems developed to target automatic extraction of mutations and other genetic variation from the literature using text mining tools. We have performed a broad survey of the existing publicly available tools for extraction of genetic variants from the scientific literature. We consider not just one tool but a number of different tools, individually and in combination, and apply the tools in two scenarios. First, they are compared in an intrinsic evaluation context, where the tools are tested for their ability to identify specific mentions of genetic variants in a corpus of manually annotated papers, the Variome corpus. Second, they are compared in an extrinsic evaluation context based on our previous study of text mining support for curation of the COSMIC and InSiGHT databases. Our results demonstrate that no single tool covers the full range of genetic variants mentioned in the literature. Rather, several tools have complementary coverage and can be used together effectively. In the intrinsic evaluation on the Variome corpus, the combined performance is above 0.95 in F-measure, while in the extrinsic evaluation the combined recall performance is above 0.71 for COSMIC and above 0.62 for InSiGHT, a substantial improvement over the performance of any individual tool. Based on the analysis of these results, we suggest several directions for the improvement of text mining tools for genetic variant extraction from the literature.
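The combination strategy evaluated in the paper can be illustrated in its simplest form by pooling the variant mentions produced by several tools as a set union and scoring them against a gold standard with precision, recall and F-measure; the tool outputs below are invented, not data from the Variome corpus or the COSMIC/InSiGHT studies.

```python
def scores(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

tool_a = {"p.V600E", "c.35G>A"}            # invented outputs of two extraction tools
tool_b = {"c.35G>A", "p.G12D"}
gold = {"p.V600E", "c.35G>A", "p.G12D"}    # invented gold-standard annotations

combined = tool_a | tool_b                 # union of tools with complementary coverage
for name, pred in [("tool A", tool_a), ("tool B", tool_b), ("combined", combined)]:
    p, r, f = scores(pred, gold)
    print(f"{name}: P={p:.2f} R={r:.2f} F={f:.2f}")
```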
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Orazbayeva, F., А. Н. Сарбасова e M. Shegebayev. "THE USE OF MEDIA TEXTS IN THE BUSINESS KAZAKH LANGUAGE". Tiltanym, n.º 2 (30 de junho de 2023): 190–99. http://dx.doi.org/10.55491/2411-6076-2023-2-190-199.

Texto completo da fonte
Resumo:
The rationale of this study proceeds from the need to teach business language through the example of analyzing a business media text from the point of view of pragmatic-professional and linguistic orientation, given the high level of student interest in improving business communication skills. The aim of this research is to study the scope and resources of business media texts in terms of vocabulary replenishment and the formation of text analytics skills, including the author's narrative strategies through explicit and implicit meanings, hypertext links, and tone of expression. The methodological framework of this work is based on the theoretical understanding of interdisciplinary research in the field of media linguistics, the comparative study of business and economic language, the structural and linguistic analysis of business media texts, and the analysis of topics, micro-topics, and key structural elements. This article proposes a methodology for analyzing texts from the business field in order to form the pragma-communicative competence of students in Kazakhstani universities. It also suggests strategies for effective text analytics by structuring the stages and considering how the teacher implements each stage in professional practice; it focuses on explicit and implicit components, narratological features, the study of terminological characteristics, and the lexical base needed for professional activities and further work in the field of business communication. The materials used in this work can be applied to form new templates for media text analytics, study business language, develop technologies based on machine learning, compile terminographic sources and data corpora for teaching business language to students, introduce automatic mechanisms for text analytics into practice, form the stages of media text analysis, and develop new promising methods for linguistic learning.
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Adolphy, Sebastian, Hendrik Grosser, Lucas Kirsch e Rainer Stark. "Method for Automated Structuring of Product Data and its Applications". Procedia CIRP 38 (2015): 153–58. http://dx.doi.org/10.1016/j.procir.2015.07.063.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.