Academic literature on the topic 'Automated Data Capture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automated Data Capture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automated Data Capture"

1

Priestnall, G., R. E. Marston, and D. G. Elliman. "Arrowhead recognition during automated data capture." Pattern Recognition Letters 17, no. 3 (March 1996): 277–86. http://dx.doi.org/10.1016/0167-8655(95)00117-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Myler, H. R., and A. J. Gonzalez. "Automated design data capture using relaxation techniques." ACM SIGART Bulletin, no. 108 (April 1989): 169–70. http://dx.doi.org/10.1145/63266.63301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Asuncion, Hazeline U. "Automated data provenance capture in spreadsheets, with case studies." Future Generation Computer Systems 29, no. 8 (October 2013): 2169–81. http://dx.doi.org/10.1016/j.future.2013.04.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Damova, Mariana. "Linked Open Data Prototype of the Historical Archive of the European Commission." Archiving Conference 2020, no. 1 (April 7, 2020): 92–97. http://dx.doi.org/10.2352/issn.2168-3204.2020.1.0.92.

Full text
Abstract:
The European Cultural Heritage Strategy for the 21st century has led to an increased demand for fast, efficient and faithful 3D digitization technologies for cultural heritage artefacts. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used and automated today, 3D digitization often still requires significant manual intervention, time and money. To overcome this, the authors have developed CultLab3D, the world's first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process, thus allowing objects to be captured and archived on a large scale and producing highly accurate, photo-realistic representations.
APA, Harvard, Vancouver, ISO, and other styles
5

Camargo, Jonathan, Aditya Ramanathan, Noel Csomay-Shanklin, and Aaron Young. "Automated gap-filling for marker-based biomechanical motion capture data." Computer Methods in Biomechanics and Biomedical Engineering 23, no. 15 (July 11, 2020): 1180–89. http://dx.doi.org/10.1080/10255842.2020.1789971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Crichton, D. J., L. Cinquini, H. Kincaid, A. Mahabal, A. Altinok, K. Anton, M. Colbert, et al. "From space to biomedicine: Enabling biomarker data science in the cloud." Cancer Biomarkers 33, no. 4 (April 18, 2022): 479–88. http://dx.doi.org/10.3233/cbm-210350.

Full text
Abstract:
NASA’s Jet Propulsion Laboratory (JPL) is advancing research capabilities for data science with two of the National Cancer Institute’s major research programs, the Early Detection Research Network (EDRN) and the Molecular and Cellular Characterization of Screen-Detected Lesions (MCL), by enabling data-driven discovery for cancer biomarker research. The research team pioneered a national data science ecosystem for cancer biomarker research to capture, process, manage, share, and analyze data across multiple research centers. By collaborating on software and data-driven methods developed for space and earth science research, the biomarker research community is heavily leveraging similar capabilities to support the data and computational demands to analyze research data. This includes linking diverse data from clinical phenotypes to imaging to genomics. The data science infrastructure captures and links data from over 1600 annotations of cancer biomarkers to terabytes of analysis results on the cloud in a biomarker data commons known as “LabCAS”. As the data increases in size, it is critical that automated approaches be developed to “plug” laboratories and instruments into a data science infrastructure to systematically capture and analyze data directly. This includes the application of artificial intelligence and machine learning to automate annotation and scale science analysis.
APA, Harvard, Vancouver, ISO, and other styles
7

Schmoker, Bethany A., Shane Urban, and Arek J. Wiktor. "112 Developing Outpatient Registry to Capture Data Post Hospitalization." Journal of Burn Care & Research 43, Supplement_1 (March 23, 2022): S73. http://dx.doi.org/10.1093/jbcr/irac012.115.

Full text
Abstract:
Introduction: Per the 2019 ABA reverification requirements, a burn center must see >75% of all inpatients (IP) who require an outpatient (OP) follow-up after discharge. In prior years, we utilized the inpatient registry and built a report to track patient follow-up. With the report, we were able to compare the number of Burn Clinic return patients against admissions to get the percentage. This process required hours of focused effort. We sought to optimize the process for determining IP follow-up at our ABA-verified burn center. In addition, we hoped to better quantify the efficacy of our OP clinic.
Methods: An OP registry was developed in December 2019 utilizing an automated report from our electronic medical record (EMR) and imported into a custom-built, secure, web-based software platform designed to support data capture for research studies. Employing various automation techniques, we were able to eliminate the need for manual abstraction by our burn registry team. Metrics tracked in the OP registry included: type of patient visit (New Patient, Return Patient, and Telehealth), diagnoses, zip codes of patient residence, payer methods, and total number of clinic encounters per year. We collected data from January 2020 through the present, with 2020 being the first full year in the OP registry. The initial effort required to design, automate, and import data was approximately 18 hours. The report import takes approximately 5 minutes.
Results: The OP registry has given us the ability to create a multitude of graphs from the OP clinic data, like the one shown. During the review period our OP clinic saw patients from 19 different US states, encompassing 2,710 total OP visits. The median number of monthly OP clinic visits was 235 [IQR 210-246], see graph 1. The median number of clinic visits per patient was 2 [IQR 1-4]. Most clinic visits were return patients (55%, n = 1595), followed by new patients (31%, n = 914) and telehealth visits (14%, n = 399). Finally, our analysis of the OP Clinic Registry demonstrated that we saw 82% (309/374) of inpatients that required follow-up care, exceeding the 75% expected by the ABA.
Conclusions: The creation of an automated OP registry can assist the tracking of discharged patients and reduce the amount of effort needed to track ABA-required metrics. In addition, this OP registry can be expanded to track both IP and OP outcomes. This is crucial for quality improvement for the burn program as a whole.
APA, Harvard, Vancouver, ISO, and other styles
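The entry above describes importing an automated EMR report into a web-based data-capture platform and then tracking follow-up metrics. Purely as an illustration (not the authors' system), the sketch below computes the same kind of follow-up rate and visit-mix figures from a hypothetical registry export; all column names and records are invented.

```python
import pandas as pd

# Hypothetical registry export: one row per outpatient clinic encounter.
visits = pd.DataFrame({
    "patient_id": [101, 101, 102, 103, 103, 103, 104],
    "visit_type": ["New Patient", "Return Patient", "New Patient",
                   "New Patient", "Return Patient", "Telehealth", "New Patient"],
    "visit_date": pd.to_datetime([
        "2020-01-10", "2020-02-14", "2020-01-20",
        "2020-03-02", "2020-03-30", "2020-04-27", "2020-05-11"]),
})

# Hypothetical list of discharged inpatients who required outpatient follow-up.
inpatients_needing_followup = {101, 102, 103, 105}

# Share of those inpatients actually seen in clinic (the ABA-style metric).
seen = inpatients_needing_followup & set(visits["patient_id"])
followup_rate = len(seen) / len(inpatients_needing_followup)

# Visit-mix and monthly volume summaries, as in the registry report.
visit_mix = visits["visit_type"].value_counts(normalize=True)
monthly_volume = visits.set_index("visit_date").resample("M").size()

print(f"Follow-up rate: {followup_rate:.0%}")
print(visit_mix)
print(monthly_volume)
```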
8

Wiltshire, S. E., D. G. Morris, and M. A. Beran. "Digital Data Capture and Automated Overlay Analysis for Basin Characteristic Calculation." Cartographic Journal 23, no. 1 (June 1986): 60–65. http://dx.doi.org/10.1179/caj.1986.23.1.60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Udoka, Silvanus J. "Automated data capture techniques: A prerequisite for effective integrated manufacturing systems." Computers & Industrial Engineering 21, no. 1-4 (January 1991): 217–21. http://dx.doi.org/10.1016/0360-8352(91)90091-j.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Automated Data Capture"

1

Brotherton, Jason Alan. "Enriching everyday activities through the automated capture and access of live experiences: eClass: building, observing and understanding the impact of capture and access in an educational domain." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Boxall, Guy. "'Diversification of automatic identification and data capture technologies with Omron Corporation'." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bhattacharjee, Partha Sarathi. "VacSeen: semantically enriched automatic identification and data capture for improved vaccine logistics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107582.

Full text
Abstract:
Vaccines are globally recognized as a critical public health intervention. Routine immunization coverage in large parts of the developing world is around 80%. Technology and policy initiatives are presently underway to improve vaccine access in such countries. Efforts to deploy AIDC technologies, such as barcodes, on vaccine packaging in developing countries are currently ongoing under the aegis of the 'Decade of Vaccines' initiative by key stakeholders. Such a scenario presents an opportunity to evaluate novel approaches for enhancing vaccine access. In this thesis I report the development of VacSeen, a Semantic Web technology-enabled platform for improving vaccine access in developing countries. Furthermore, I report results of evaluation of a suite of constituent software and hardware tools pertaining to facilitating equitable vaccine access in resource-constrained settings through data linkage and temperature sensing. I subsequently discuss the value of such linkage and approaches to implementation using concepts from technology, policy, and systems analysis.
APA, Harvard, Vancouver, ISO, and other styles
4

Nachabe, Ismail Lina. "Automatic sensor discovery and management to implement effective mechanism for data fusion and data aggregation." Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0021/document.

Full text
Abstract:
The constant evolution of technology in terms of inexpensive and embedded wireless interfaces and powerful chipsets has led to the massive usage and development of wireless sensor networks (WSNs). This potentially affects all aspects of our lives, ranging from home automation (e.g. Smart Buildings), through e-Health applications, environmental observation and broadcasting, food sustainability, energy management and Smart Grids, and military services, to many other applications. WSNs are formed of an increasing number of sensor/actuator/relay/sink devices, generally self-organized in clusters and domain dedicated, that are provided by an increasing number of manufacturers, which leads to interoperability problems (e.g., heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models ...). Moreover, these networks are generally implemented as vertical solutions that are not able to interoperate with each other. The data provided by these WSNs are also very heterogeneous because they come from sensing nodes with various abilities (e.g., different sensing ranges, formats, coding schemes ...). To tackle these heterogeneity and interoperability problems, these WSNs' nodes, as well as the data sensed and/or transmitted, need to be consistently and formally represented and managed through suitable abstraction techniques and generic information models. Therefore, explicit semantics should be assigned to every term, and an open data model dedicated to WSNs should be introduced. SensorML, proposed by OGC in 2010, has been considered an essential step toward data modeling specification in WSNs. Nevertheless, it is based on an XML schema only permitting a basic hierarchical description of the data, hence neglecting any semantic representation. Furthermore, most of the research that has used semantic techniques for developing data models has focused on modeling merely sensors and actuators (this is, e.g., the case of SSN-XG). Other works dealt with data provided by WSNs, but without modelling the data type, quality and states (like, e.g., OntoSensor). That is why the main aim of this thesis is to specify and formalize an open data model for WSNs in order to mask the aforementioned heterogeneity and interoperability problems between different systems and applications. This model will also facilitate data fusion and aggregation through an open management environment such as, for example, a service-oriented architecture. This thesis can thus be split into two main objectives: 1) To formalize a semantic open data model for generically describing a WSN, sensors/actuators and their corresponding data. This model should be light enough to respect the low-power and thus low-energy limitations of such networks, generic enough to enable the description of the wide variety of WSNs, and extensible in a way that it can be modified and adapted based on the application. 2) To propose an upper service model and standardized enablers for enhancing sensor/actuator discovery, data fusion, data aggregation and WSN control and management. These service-layer enablers will be used for improving data collection in a large-scale network and will facilitate the implementation of more efficient routing protocols, as well as decision-making mechanisms in WSNs.
APA, Harvard, Vancouver, ISO, and other styles
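The thesis above argues for lightweight semantic descriptions of sensors and the data they provide. As a minimal illustration only, the sketch below encodes one sensor observation in RDF with rdflib, borrowing terms from the W3C SOSA vocabulary; the node names and the measured value are invented, and this is not the ontology proposed in the thesis.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")  # W3C Sensor/Observation vocabulary
EX = Namespace("http://example.org/wsn/")        # hypothetical namespace for this sketch

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

# A sensor node, the property it observes, and one observation it produced.
g.add((EX.node42, RDF.type, SOSA.Sensor))
g.add((EX.node42, SOSA.observes, EX.airTemperature))
g.add((EX.obs1, RDF.type, SOSA.Observation))
g.add((EX.obs1, SOSA.madeBySensor, EX.node42))
g.add((EX.obs1, SOSA.hasSimpleResult, Literal(21.4, datatype=XSD.double)))

# Serialize the description in Turtle, a compact text format for RDF graphs.
print(g.serialize(format="turtle"))
```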
5

Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars." Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.

Full text
Abstract:
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings, which lack depth information and are difficult to edit and analyze. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be precisely produced. With data-driven animation, the avatar's motions are realistic, but the variety of the signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions present in an SL motion capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) motion capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different annotation tracks, based on the analysis of the kinematic properties of specific joints and on existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new SL content, with the additional use of motion generation techniques such as inverse kinematics parameterized to comply with the properties of real motions.
APA, Harvard, Vancouver, ISO, and other styles
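The second contribution above is automatic annotation based on the kinematic properties of specific joints. A generic sketch of that idea, assuming a simple speed threshold on a single joint trajectory (not the thesis's actual algorithm, and with an assumed frame rate), follows:

```python
import numpy as np

def segment_by_speed(positions, fps=100.0, speed_threshold=0.05):
    """Flag low-speed frames in a (n_frames, 3) trajectory of one joint.

    positions: 3D joint positions in metres, one row per frame.
    Returns a boolean array marking frames where the joint is nearly still,
    which can serve as candidate boundaries between motion segments.
    """
    velocity = np.gradient(positions, 1.0 / fps, axis=0)  # m/s, per frame
    speed = np.linalg.norm(velocity, axis=1)
    return speed < speed_threshold

# Synthetic trajectory: still, then moving along x, then still again.
t = np.linspace(0, 2, 200)
x = np.clip(t - 0.5, 0, 1)
trajectory = np.column_stack([x, np.zeros_like(t), np.zeros_like(t)])

pauses = segment_by_speed(trajectory)
print("Low-speed frames:", np.flatnonzero(pauses)[:10], "...")
```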
6

Boberg, Molly, and Märta Selander. "Systematic and Automatized Hydrogeological Data Capturing for Provision of Safe Drinking Water in Daudkandi, Bangladesh." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297811.

Full text
Abstract:
Arsenic-contaminated drinking water exposes ~230 million people worldwide to increased risks of several diseases and is considered one of the greatest threats to public health. In Bangladesh, arsenic-contaminated water has been declared the largest poisoning of a population in history, with 39 million people exposed to arsenic levels above the WHO guidelines (>10 μg/L). Drinking water is mainly provided by tube-wells installed by local drillers, and the majority are located in aquifers with high arsenic levels. The major challenges in identifying arsenic-safe aquifers are the lack of a common tool for quality assurance of hydrogeological data, for post-processing of the data, and for forwarding analyzed data to national and local stakeholders. Therefore, the purpose of this study was to investigate the potential of applying a digital solution for collecting and managing hydrogeological data in a quality-assured platform. The study was a pilot project in the sub-district of Daudkandi, Bangladesh, in collaboration with the KTH-International Groundwater Research Group. To fulfill the purpose, a method was developed for systematic and automated capture of hydrogeological information in GeoGIS, an advanced software package that proved to be an efficient tool for visualizing hydrogeological data. The results show that collecting even a small amount of field data in a systematic and automated way helps in interpreting aquifer sequences and creates better prerequisites for targeting safe aquifers and installing safe tube-wells. The conclusions are that the integration of a digital platform as a decision tool may significantly improve arsenic mitigation strategies. Furthermore, providing information to public and private sectors in Bangladesh would increase the transparency of hydrogeological conditions and may help improve safe water access in high-arsenic areas of Bangladesh.
APA, Harvard, Vancouver, ISO, and other styles
7

Cecchinel, Cyril. "DEPOSIT : une approche pour exprimer et déployer des politiques de collecte sur des infrastructures de capteurs hétérogènes et partagées." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4094/document.

Full text
Abstract:
Sensing infrastructures are classically used in the IoT to collect data. However, a deep knowledge of sensing infrastructures is needed to properly interact with the deployed systems. For software engineers, targeting these systems is tedious. First, the specifics of the platforms composing the infrastructure compel them to work with few abstractions and heterogeneous devices. This can lead to code that poorly exploits the network infrastructure. Moreover, by being infrastructure-specific, these applications cannot be easily reused across different systems. Secondly, the deployment of an application is outside the domain expertise of a software engineer, as she needs to identify the required platform(s) to support her application. Lastly, the sensing infrastructure might not be designed to support the concurrent execution of various applications, leading to redundant deployments when a new application is contemplated. In this thesis we present an approach that supports (i) the definition of data collection policies at a high level of abstraction with a focus on their reuse, (ii) their deployment over a heterogeneous infrastructure driven by models designed by a network expert, and (iii) the automatic composition of policies on top of heterogeneous sensing infrastructures. Based on these contributions, a software engineer can exploit sensor networks without knowing the associated details, while reusing architectural abstractions available off-the-shelf in their policies. The network will also be shared automatically between the policies.
APA, Harvard, Vancouver, ISO, and other styles
8

Neumann, Markus. "Automatic multimodal real-time tracking for image plane alignment in interventional Magnetic Resonance Imaging." Phd thesis, Université de Strasbourg, 2014. http://tel.archives-ouvertes.fr/tel-01038023.

Full text
Abstract:
Interventional magnetic resonance imaging (MRI) aims at performing minimally invasive percutaneous interventions, such as tumor ablations and biopsies, under MRI guidance. During such interventions, the acquired MR image planes are typically aligned to the surgical instrument (needle) axis and to surrounding anatomical structures of interest in order to efficiently monitor, in real time, the advancement of the instrument inside the patient's body. Object tracking inside the MRI is expected to facilitate and accelerate MR-guided interventions by allowing the image planes to be aligned to the surgical instrument automatically. In this PhD thesis, an image-based workflow is proposed and refined for automatic image plane alignment. An automatic tracking workflow was developed, performing detection and tracking of a passive marker directly in clinical real-time images. This tracking workflow is designed for fully automated image plane alignment, with minimization of tracking-dedicated time. Its main drawback is its inherent dependence on the slow clinical MRI update rate. First, the addition of motion estimation and prediction with a Kalman filter was investigated and improved the workflow tracking performance. Second, a complementary optical sensor was used for multi-sensor tracking in order to decouple the tracking update rate from the MR image acquisition rate. Performance of the workflow was evaluated with both computer simulations and experiments using an MR-compatible testbed. Results show a high robustness of the multi-sensor tracking approach for dynamic image plane alignment, due to the combination of the individual strengths of each sensor.
APA, Harvard, Vancouver, ISO, and other styles
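The abstract above adds motion estimation and prediction with a Kalman filter to compensate for the slow clinical MRI update rate. Below is a generic one-dimensional constant-velocity Kalman filter sketch; the update period and noise values are assumptions for illustration, not the thesis's tuning.

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a tracked marker.
dt = 0.5                                  # assumed MR image update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 1e-3 * np.eye(2)                      # assumed process noise
R = np.array([[1e-2]])                    # assumed measurement noise

x = np.zeros((2, 1))                      # initial state estimate
P = np.eye(2)                             # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a new position measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x_pred       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few noisy position measurements of a slowly moving marker.
for k, z in enumerate([0.0, 2.1, 3.9, 6.2, 8.0]):
    x, P = kalman_step(x, P, z)
    predicted_next = (F @ x)[0, 0]         # predicted position at the next image
    print(f"step {k}: filtered {x[0, 0]:.2f}, predicted next {predicted_next:.2f}")
```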
9

Romano, Regiane Relva. "Os impactos do uso de tecnologia da informação e da identificação e captura automática de dados nos processos operacionais do varejo." reponame:Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/8895.

Full text
Abstract:
This study sought to identify the main IT and AIDC technologies available for the self-service retail sector, to fill the gap in the literature about the real advantages of using new technologies at the point of sale in order to optimize its operation. To do this, we studied the main operational processes of a self-service retail store with the aim of identifying how Automatic Identification and Data Capture (AIDC) and IT technologies could help improve operating results and add value to the business. To analyze these propositions, we surveyed several global case studies of retail companies that implemented AIDC and IT technologies to investigate the impacts of these technologies on their operations, and then designed a comprehensive Case Study through which we sought to understand the real business benefits. As a final result, it was possible to identify the changes and the benefits in terms of cost, productivity, quality, flexibility and innovation. The work also highlighted the critical success factors for the implementation of AIDC and IT in retail, which are: the review of operating processes; the correct definition of the hardware, inputs and software; the interferences of the physical environment; the availability of data/information on products; people/employees; and business partners/suppliers. More specifically, this study sought to contribute to enriching the field of studies on the retail segment and on the use of information technology in Brazil, since the use and impact of new technologies at the point of sale remain little explored academically.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Shuting. "Navigation of a quad-rotor to access the interior of a building." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2237.

Full text
Abstract:
This research work is dedicated to the development of an autonomous navigation strategy which includes generating an optimal trajectory with obstacle-avoidance capabilities, detecting a specific object of interest (i.e. a window) and then conducting the subsequent maneuver to approach the window and finally access the building. The vehicle is navigated by a vision system and a combination of inertial and altitude sensors, which achieve a relative localization of the quad-rotor with respect to its surrounding environment. An MPC-based path planning method using the information provided by the GPS and the visual sensor has been developed to generate an optimal real-time trajectory with collision avoidance capabilities, which starts from an initial point given by the user and guides the vehicle to the final point outside the target building. With the aim of detecting and locating the object of interest, two different vision-based object detection strategies are proposed and applied, respectively, in the stereo vision system and in the vision system using the Kinect. After estimating the target window model, a motion estimation framework is developed to estimate the vehicle's ego-motion from the images provided by the visual sensor; two versions of this framework were developed, one for each vision system. A quad-rotor experimental platform is developed. For estimating the translational dynamics of the vehicle, a Kalman filter is implemented to combine the imaging, inertial and altitude sensors. A hierarchical sensing and control system is designed to perform the navigation and control of the quad-rotor helicopter, which allows the vehicle to estimate its state without artificial markers or other external positioning systems.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Automated Data Capture"

1

Canada. Dept. of the Environment. Inland Waters. Automated Computer-Based Water Quality Analytical Laboratory Data Capture/Management System (Awqualabs). S.l.: s.n., 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Canada. Dept. of the Environment. Inland Waters. Experiences Gained in System Design, Development, and Implementation of an Automated, Computer-Based, Water Quality Analytical Laboratory Data Capture/Management System. S.l.: s.n., 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Canada. Inland Waters Directorate. Experiences gained in system design, development, and implementation of an automated, computer-based, water quality analytical laboratory data capture/management system. Burlington, Ont.: Environment Canada, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Suttle, I. V. Assessment of potential benefits of automatic identification and other methods of data capture in distribution automation. Manchester: UMIST, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ebrary, Inc., ed. JBoss Drools Business Rules: Capture, Automate, and Reuse Your Business Processes in a Clear English Language That Your Computer Can Understand. Birmingham, U.K.: Packt Pub., 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Spencer, Harvey. Automated Forms Processing: Tips and Techniques to Automate the Capture of Data from Your Forms. CMP Books, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Automated Forms Processing: A Primer : How to Capture Paper Forms Electronically and Extract the Data Automatically. 2nd ed. CMP Books, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Health and Safety Executive. Automatic Data Capture Opportunities for Health and Safety in Industry. Health and Safety Executive (HSE), 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Automated Data Capture"

1

Camargo, Manuel, Marlon Dumas, and Oscar González-Rojas. "Learning Accurate Business Process Simulation Models from Event Logs via Automated Process Discovery and Deep Learning." In Advanced Information Systems Engineering, 55–71. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07472-1_4.

Full text
Abstract:
Business process simulation is a well-known approach to estimate the impact of changes to a process with respect to time and cost measures – a practice known as what-if process analysis. The usefulness of such estimations hinges on the accuracy of the underlying simulation model. Data-Driven Simulation (DDS) methods leverage process mining techniques to learn process simulation models from event logs. Empirical studies have shown that, while DDS models adequately capture the observed sequences of activities and their frequencies, they fail to accurately capture the temporal dynamics of real-life processes. In contrast, generative Deep Learning (DL) models are better able to capture such temporal dynamics. The drawback of DL models is that users cannot alter them for what-if analysis due to their black-box nature. This paper presents a hybrid approach to learn process simulation models from event logs wherein a (stochastic) process model is extracted via DDS techniques, and then combined with a DL model to generate timestamped event sequences. An experimental evaluation shows that the resulting hybrid simulation models match the temporal accuracy of pure DL models, while partially retaining the what-if analysis capability of DDS approaches.
APA, Harvard, Vancouver, ISO, and other styles
2

Zünd, Daniel, and Luís M. A. Bettencourt. "Street View Imaging for Automated Assessments of Urban Infrastructure and Services." In Urban Informatics, 29–40. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_4.

Full text
Abstract:
Many forms of ambient data in cities are starting to become available that allow tracking of short-term urban operations, such as traffic management, trash collections, inspections, or non-emergency maintenance requests. However, arguably the greatest promise of urban analytics is to set up measurable objectives and track progress toward systemic development goals connected to human development and sustainability over the longer term. The challenge for such an approach is the connection between new technological capabilities, such as sensing and machine learning, and the local knowledge and operations of residents and city governments. Here, we describe an emerging project for the long-term monitoring of sustainable development in fast-growing towns in the Galapagos Islands through the convergence of these methods. We demonstrate how collaborative mapping and the capture of 360-degree street views can produce a general basis for a broad set of quantitative analytics, when such actions are coupled to mapping and deep-learning characterizations of urban environments. We map and assess the precision of urban assets via automatic object classification and characterize their abundance and spatial heterogeneity. We also discuss how these methods, as they continue to improve, can provide the means to perform an ambient census of urban assets (buildings, vehicles, services) and environmental conditions.
APA, Harvard, Vancouver, ISO, and other styles
3

Guo, Hao, Zhenjiang Miao, Feiyue Zhu, Gang Zhang, and Song Li. "Automatic Labanotation Generation Based on Human Motion Capture Data." In Communications in Computer and Information Science, 426–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45646-0_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xiao, Zhidong, Hammadi Nait-Charif, and Jian J. Zhang. "Automatic Estimation of Skeletal Motion from Optical Motion Capture Data." In Motion in Games, 144–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89220-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bollig, Benedikt. "An Automaton over Data Words That Captures EMSO Logic." In CONCUR 2011 – Concurrency Theory, 171–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23217-6_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chacko, Anu Mary, Alfredo Cuzzocrea, and S. D. Madhu Kumar. "Automatic Big Data Provenance Capture at Middleware Level in Advanced Big Data Frameworks." In Connected Environments for the Internet of Things, 219–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70102-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Xiaoyue, Shibiao Xu, Wujun Che, and Xiaopeng Zhang. "Automatic Motion Generation Based on Path Editing from Motion Capture Data." In Transactions on Edutainment IV, 91–104. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14484-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hagn, Korbinian, and Oliver Grau. "Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation." In Deep Neural Networks and Data for Automated Driving, 127–47. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_4.

Full text
Abstract:
Synthetic, i.e., computer-generated imagery (CGI) data is a key component for training and validating deep-learning-based perceptive functions due to its ability to simulate rare cases, avoidance of privacy issues, and generation of pixel-accurate ground truth data. Today, physical-based rendering (PBR) engines already simulate a wealth of realistic optical effects, but they are mainly focused on the human perception system, whereas the perceptive functions require realistic images modeled with sensor artifacts as close as possible to the sensor with which the training data has been recorded. This chapter proposes a way to improve the data synthesis process by application of realistic sensor artifacts. To do this, one has to overcome the domain distance between real-world imagery and the synthetic imagery. Therefore, we propose a measure which captures the generalization distance of two distinct datasets which have been trained on the same model. With this measure the data synthesis pipeline can be improved to produce realistic sensor-simulated images which are closer to the real-world domain. The proposed measure is based on the Wasserstein distance (earth mover's distance, EMD) over the performance metric mean intersection-over-union (mIoU) on a per-image basis, comparing synthetic and real datasets using deep neural networks (DNNs) for semantic segmentation. This measure is subsequently used to match the characteristic of a real-world camera for the image synthesis pipeline, which considers realistic sensor noise and lens artifacts. Comparing the measure with the well-established Fréchet inception distance (FID) on real and artificial datasets demonstrates its ability to interpret the generalization distance, which is inherently asymmetric and more informative than a simple distance measure. Furthermore, we use the metric as an optimization criterion to adapt a synthetic dataset to a real dataset, decreasing the EMD distance between a synthetic dataset and the Cityscapes dataset from 32.67 to 27.48 and increasing the mIoU of our test algorithm from 40.36 to 47.63%.
APA, Harvard, Vancouver, ISO, and other styles
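The chapter above defines its measure as the Wasserstein (earth mover's) distance over per-image mIoU between synthetic and real datasets. A minimal sketch of that computation with SciPy follows; the two score arrays are invented stand-ins for per-image mIoU values produced by one segmentation model.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical per-image mIoU scores of one segmentation model,
# evaluated on a real dataset and on a synthetic dataset.
miou_real = np.array([0.48, 0.52, 0.45, 0.60, 0.55, 0.50])
miou_synthetic = np.array([0.30, 0.41, 0.35, 0.38, 0.44, 0.33])

# 1D earth mover's distance between the two empirical score distributions.
emd = wasserstein_distance(miou_real, miou_synthetic)
print(f"EMD between per-image mIoU distributions: {emd:.3f}")

# A lower EMD after, e.g., adding sensor-artifact simulation to the synthetic
# data would indicate that the synthetic domain has moved closer to the real one.
```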
9

Sun, Wei, Shaoxiong Ji, Erik Cambria, and Pekka Marttinen. "Multitask Recalibrated Aggregation Network for Medical Code Prediction." In Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track, 367–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86514-6_23.

Full text
Abstract:
Medical coding translates professionally written medical reports into standardized codes, which is an essential part of medical information systems and health insurance reimbursement. Manual coding by trained human coders is time-consuming and error-prone. Thus, automated coding algorithms have been developed, building especially on the recent advances in machine learning and deep neural networks. To solve the challenges of encoding lengthy and noisy clinical documents and capturing code associations, we propose a multitask recalibrated aggregation network. In particular, multitask learning shares information across different coding schemes and captures the dependencies between different medical codes. Feature recalibration and aggregation in shared modules enhance representation learning for lengthy notes. Experiments with a real-world MIMIC-III dataset show significantly improved predictive performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Belcore, Elena, Vincenzo Di Pietra, Nives Grasso, Marco Piras, Francesco Tondolo, Pierclaudio Savino, Daniel Rodriguez Polania, and Anna Osello. "Towards a FOSS Automatic Classification of Defects for Bridges Structural Health Monitoring." In Communications in Computer and Information Science, 298–312. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94426-1_22.

Full text
Abstract:
Bridges are among the most important structures of any road network. During their service life, they are subject to deterioration which may reduce their safety and functionality. The detection of bridge damage is necessary for proper maintenance activities. To date, assessing the health status of the bridge and all its elements is carried out by identifying a series of data obtained from visual inspections, which allows the mapping of the deterioration situation of the work and its conservation status. There are, however, situations where visual inspection may be difficult or impossible, especially in critical areas of bridges, such as the ceiling and corners. In this contribution, the authors acquire images using a prototype drone with a low-cost camera mounted upward over the body of the drone. The proposed solution was tested on a bridge in the city of Turin (Italy). The captured data were processed via a photogrammetric process using the open-source MicMac solution. Subsequently, a procedure was developed with FOSS tools for the segmentation of the orthophoto of the intrados of the bridge and the automatic classification of some defects found on the analyzed structure. The paper describes the adopted approach, showing the effectiveness of the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
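The chapter above closes with a FOSS-based automatic classification of defects on the orthophoto of the bridge intrados. As an illustration only (the authors' actual tool chain and features are not reproduced here), the sketch below trains a per-pixel colour classifier with scikit-learn on hypothetical labelled samples and applies it to a synthetic patch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training pixels sampled from an orthophoto: RGB values
# labelled 0 = sound concrete, 1 = defect (e.g., stain, crack, spalling).
X_train = np.array([[200, 198, 190], [185, 180, 175], [90, 85, 80],
                    [60, 58, 55], [210, 205, 200], [75, 70, 68]], dtype=float)
y_train = np.array([0, 0, 1, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify every pixel of a tiny synthetic orthophoto patch.
patch = np.random.default_rng(0).integers(50, 220, size=(4, 4, 3)).astype(float)
labels = clf.predict(patch.reshape(-1, 3)).reshape(4, 4)
print(labels)  # 0/1 map of predicted defect pixels
```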

Conference papers on the topic "Automated Data Capture"

1

"Specifying and Analyzing Workflows for Automated Identification and Data Capture." In 2009 42nd Hawaii International Conference on System Sciences. IEEE, 2009. http://dx.doi.org/10.1109/hicss.2009.402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bhowmik, Tanmay, Austin Reid Thompson, Anh Quoc Do, and Nan Niu. "Automated Support to Capture Environment Assertions for Requirements-Based Testing." In 2021 IEEE 22nd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2021. http://dx.doi.org/10.1109/iri51335.2021.00023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jordan, Colin Lyle, Roozbeh Koochak, Martin Roberts, Ajay Nalonnil, and Mike Honeychurch. "A Holistic Approach to Big Data and Data Analytics for Automated Reservoir Surveillance and Analysis." In SPE Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210757-ms.

Full text
Abstract:
Analytical methods have been widely applied to forecasting oil/gas production in both conventional and unconventional reservoirs. To forecast production, traditional regression and machine learning approaches have been applied to various reservoir analysis methods. Nevertheless, these methods are still suboptimal in detecting similar production trends in different wells because data artifacts (noise, data scatter, outliers) obscure the reservoir signal and lead to large forecast errors, or they fail due to lack of data access (inadequate SCADA systems, missing or aberrant data, and more). Furthermore, without proper and complete integration into a data system, discipline silos persist, reducing the efficiency of automation. This paper describes a recent field trial conducted in Australia's Cooper Basin with the objective of developing a completely automated end-to-end system in which data are captured directly from the field/SCADA system, automatically imported and processed, and finally analyzed entirely in an automated system using modern computing languages, modern devices including IoT, and advanced data science and machine learning methods. This was a multidisciplinary undertaking requiring expertise from the petroleum, computing/programming, and data science disciplines. The back-end layer was developed using Wolfram's computation engine, run from an independent server in Australia, while the front-end graphical user interface (GUI) was developed using a combination of Wolfram Language, Java, and JavaScript, all later switched to a Python-React combination after extensive testing. The system was designed to simultaneously capture data in real time from SCADA historians, IIoT devices, and remote databases for automatic processing and analysis through APIs. Automatic processing included "Smart Filtering" using the apparent productivity index and similar methods. Automated analysis, including scenario analysis, was performed using customized ML and statistical methods, which were then applied to decline curve analysis (DCA), flowing material balance analysis (FMB), and water-oil ratio (WOR) analysis. The entire procedure is automated, without the need for any human intervention.
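Decline curve analysis, one of the automated analyses named above, can be sketched as a hyperbolic Arps fit; the synthetic rates, initial guesses, and bounds below are assumptions rather than the trial's actual workflow.

```python
# Hyperbolic Arps decline fit, a minimal stand-in for one automated DCA step.
import numpy as np
from scipy.optimize import curve_fit

def arps(t, qi, di, b):
    return qi / np.power(1.0 + b * di * t, 1.0 / b)

t = np.linspace(0, 36, 37)                                    # months on production (synthetic)
rate = arps(t, 1200.0, 0.15, 0.8) * np.random.default_rng(0).normal(1.0, 0.03, t.size)

(qi, di, b), _ = curve_fit(arps, t, rate, p0=[1000.0, 0.1, 0.5],
                           bounds=([0, 1e-4, 0.01], [1e5, 5.0, 2.0]))
forecast = arps(np.arange(37, 61), qi, di, b)                 # two-year forecast from the fitted curve
print(round(qi, 1), round(di, 4), round(b, 3))
```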
APA, Harvard, Vancouver, ISO, and other styles
4

Chamberlain, Daniel, Adrian Jimenez-Galindo, Richard Ribón Fletcher, and Rahul Kodgule. "Applying Augmented Reality to Enable Automated and Low-Cost Data Capture from Medical Devices." In ICTD '16: Eighth International Conference on Information and Communication Technologies and Development. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2909609.2909626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Swanson, Matthew, Eric Johnson, and Alexander Stoytchev. "Automated Weld Integrity Analysis Using 3D Point Data." In ASME 2010 World Conference on Innovative Virtual Reality. ASMEDC, 2010. http://dx.doi.org/10.1115/winvr2010-3769.

Full text
Abstract:
This paper describes a method for non-destructive evaluation of the quality of welds from 3D point data. The method uses a stereo camera system to capture high-resolution 3D images of deposited welds, which are then processed in order to extract key parameters of the welds. These parameters (the weld angle and the radius of the weld at the weld toe) can in turn be used to estimate the stress concentration factor of the weld and thus to infer its quality. The method is intended for quality control applications in manufacturing environments and aims to supplement, and even eliminate, the manual inspections which are currently the predominant inspection method. Experimental results for T-fillet welds are reported.
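A toy version of the parameter extraction described above: given one cross-section of weld scan data, fit lines to the plate and weld-face regions and compute the angle between them; the synthetic section and the plate/weld split are assumptions, not the authors' processing chain.

```python
# Estimate a weld angle from one cross-section of a 3D weld scan (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 20, 200)                      # mm across the section (synthetic)
z = np.where(x < 10, 0.0, 0.6 * (x - 10))        # flat plate, then weld-face ramp
z += rng.normal(0, 0.02, x.size)                 # scanner noise

plate = x < 8                                    # assumed split of plate vs weld face
weld = x > 12
m_plate = np.polyfit(x[plate], z[plate], 1)[0]
m_weld = np.polyfit(x[weld], z[weld], 1)[0]
angle_deg = np.degrees(np.arctan(abs(m_weld - m_plate) / (1 + m_plate * m_weld)))
print(f"estimated weld toe angle: {angle_deg:.1f} deg")
```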
APA, Harvard, Vancouver, ISO, and other styles
6

Sung, Raymond C. W., James M. Ritchie, Theodore Lim, Aparajithan Sivanathan, and Mike J. Chantler. "The Evaluation of a Virtual-Aided Design Engineering Review (VADER) System for Automated Knowledge Capture and Reuse." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12030.

Full text
Abstract:
Capturing knowledge and embedding it throughout a product's lifecycle remains a key issue in engineering industries, particularly with regard to rationale-associated knowledge emanating from formal design reviews. Manual, and often interruptive, methods with costly overheads exacerbate an already time-consuming process. Beyond these disadvantages, manual methods can capture the wrong data through human error or fail to capture all the pertinent information and associated relationships. Consequently, industries are seeking automated capture of engineering knowledge and rationale that adds value to products and processes, potentially reaping benefits in time and cost. Previous work by the authors showed how user logging in virtual environments aids the unobtrusive capture of engineering knowledge and rationale in design tasks. This paper advances that work through a Virtual Aided Design Engineering Review (VADER) system developed to automatically and unobtrusively capture both multimodal human-computer and human-human interactivity during design reviews via synchronous, time-phased logging of software interactions, product models, audio, video and input devices. By processing the captured data, review reports and records can be generated automatically, and fast knowledge retrieval is enabled. The backbone of VADER is a multimodal device and data fusion architecture that captures and synchronises structured and unstructured data in real time. Visualisation is through a 3D virtual environment. In addition to allowing engineers to visualise and annotate 3D design models, the system provides a timeline interface to search and visualise the decisions captured from a design review. The VADER system has been put through its initial industrial trial, reported herein. Objective and subjective analyses indicate the VADER system is intuitive to use and can lead to savings in both time and cost for project reviews.
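The synchronous, time-phased logging described above can be pictured as merging several time-stamped capture streams onto a single timeline, as in the minimal sketch below; the stream names and event fields are assumptions, not the VADER data model.

```python
# Minimal timeline merge of multimodal capture streams (illustrative, not the VADER implementation).
from dataclasses import dataclass
from heapq import merge

@dataclass(frozen=True, order=True)
class Event:
    t: float          # seconds since review start
    stream: str       # e.g. "cad", "audio", "input_device" (assumed stream names)
    payload: str

cad = [Event(1.2, "cad", "rotate model"), Event(7.5, "cad", "annotate flange")]
audio = [Event(1.0, "audio", "utterance #1"), Event(6.9, "audio", "utterance #2")]
devices = [Event(7.4, "input_device", "stylus down")]

timeline = list(merge(cad, audio, devices))       # each stream is already time-sorted
for ev in timeline:
    print(f"{ev.t:6.1f}s  {ev.stream:12s}  {ev.payload}")
```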
APA, Harvard, Vancouver, ISO, and other styles
7

Mohamad, Hamdi, Felicity Anai Anak Michael Mulok, Douwe Franssens, Nurfitrah Mat Noh, Diego Patino, and Janna Tiong Mang Ing. "Innovative Automated Data Driven Daily Drilling Reporting Using Automated Data-Driven Models and a Digital Execution Platform." In IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/209894-ms.

Full text
Abstract:
Currently, automated reporting leverages rig sensors to produce 'Activity' entries and populate the Daily Drilling Report (DDR), replacing labour-intensive manual entries. However, surface and non-drilling activities cannot be detected in this way. This case study documents the complementary use of a digital execution platform to fill these gaps. Automated daily drilling reporting that relies solely on real-time rig-sensor input excludes a substantial number of non-drilling activities, and the data are not sufficient to produce solid 24-hour activity records as required. Therefore, this paper presents a reporting solution that combines real-time rig-sensor input with activity tracking from a digital execution platform, enabling the next level of reporting automation. Furthermore, the combination of these two sources ensures reporting accuracy and provides granularity for the next level of performance benchmarking. The paper documents the vision, methodology, implementation steps, challenges, and benefits of automating daily drilling reporting, and the results of the case study are discussed in detail. The overall approach is straightforward: activities detected in the manual DDR were compared with those from the improved automated reporting to identify similarities and differences. The gap between drilling activities (rig states) and non-drilling activities is corrected through a process of "cut and split" to capture the full 24 hours of activities. The planned activities were imported and monitored in the digital execution platform and translated into the WITSML (Wellsite Information Transfer Standard Markup Language) DrillReport object. Simultaneously, the real-time rig-sensor data are available as WITSML log objects. DrillOps Report executes three tasks: (1) populate the sensor activities (referred to as Automated Rig State Activity) by utilizing the "Fixed Text Remark" capability; (2) filter the DrillReport object for actual activities marked as completed by supervisors on the rig (referred to as External Activity) to populate all valid rig activities; (3) cut and split overlapping activities from (1) and (2), where (1) supersedes (2) because the single source of truth is the rig state detected by the rig sensors; on non-drilling days, (2) supersedes. The result is referred to as the Machine Activity Record (MAR). Other required DDR information is populated via FileBridge, where readily available information is parsed from the contractors' own reports into the Automated Operational Reporting Solution. By utilizing the automated daily drilling reporting capabilities, rigsite users were able to reduce the time spent capturing and entering the information required for the DDR. Rigsite personnel could then concentrate on daily data QA/QC prior to report submission, leaving more focus for optimizing wellsite operational performance and planning for potential outcomes of the current activities. The structured data will enable actionable post-drilling insights.
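The "cut and split" reconciliation described above can be sketched as an interval subtraction in which sensor-derived rig-state activities supersede externally tracked activities wherever they overlap; the times, labels, and helper function below are illustrative assumptions, not the vendor's implementation.

```python
# Illustrative "cut and split": rig-state activities win over external ones where they overlap.
def subtract(interval, blockers):
    """Remove the parts of `interval` covered by any blocker interval."""
    pieces = [interval]
    for bs, be in blockers:
        nxt = []
        for s, e in pieces:
            if be <= s or e <= bs:        # no overlap: keep as-is
                nxt.append((s, e))
            else:                          # overlap: keep only the uncovered left/right parts
                if s < bs:
                    nxt.append((s, bs))
                if be < e:
                    nxt.append((be, e))
        pieces = nxt
    return pieces

rig_state = [(0.0, 6.0, "Drilling"), (8.0, 12.0, "Tripping")]                   # from rig sensors
external = [(5.0, 9.0, "Safety meeting"), (12.0, 24.0, "Waiting on cement")]    # from execution platform

record = list(rig_state)
for s, e, label in external:
    for ps, pe in subtract((s, e), [(a, b) for a, b, _ in rig_state]):
        record.append((ps, pe, label))
for s, e, label in sorted(record):
    print(f"{s:5.1f}-{e:5.1f} h  {label}")
```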
APA, Harvard, Vancouver, ISO, and other styles
8

Bellistri, Domenico, Jeff Vinyard, and Cameron Seward. "Improved Inspection Through Full Matrix Capture Technology." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9595.

Full text
Abstract:
The ideal non-destructive evaluation/examination (NDE) process would be carried out by a fully automated system: planning, execution, evaluation and reporting would all be performed by automated technologies and artificial intelligence without human intervention, taking the human factor out of the equation. While this level of automation has not yet been reached, it is still possible to improve the overall NDE process to increase both speed and quality while maintaining safety. Ultrasonic testing (UT) is considered a safe, portable and reliable inspection method. Detection and characterization depend on factors such as the technician's experience and skill, especially for shear wave UT, where an amplitude-versus-time signal, the A-scan, is the only information available from the specimen under examination. With this technique very little digital data is saved, often only in the form of screenshots. The evolution of shear wave UT came with the use of an array probe and the phased array technique: the new capabilities of steering the beam in multiple directions simultaneously and combining all the A-scans into images have made it easier to interpret any anomalies found during inspection. However, skill and experience remain important factors in avoiding false positive and false negative results; for example, the focusing distance from the array probe and false signals from mode conversion may lead to wrong conclusions. Ultrasonic imaging technology based on full matrix capture has the intrinsic ability to improve interpretation over more conventional techniques such as phased array UT or shear wave UT, since it provides information not only about the size of flaws but also about their geometry in terms of horizontal and vertical extent.
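Imaging from full matrix capture data is commonly reconstructed with a delay-and-sum scheme such as the Total Focusing Method; the toy array geometry, velocity, and random A-scan matrix below are assumptions used only to show the shape of that computation, not the authors' implementation.

```python
# Toy Total Focusing Method (TFM) over a synthetic full matrix capture dataset (illustrative only).
import numpy as np

c = 5900.0                                       # m/s, assumed wave velocity in steel
fs = 50e6                                        # sample rate, Hz (assumed)
elems = np.linspace(-8e-3, 8e-3, 16)             # 16-element array positions along x (m)
n_t = 2000
fmc = np.random.default_rng(2).normal(0, 0.01, (16, 16, n_t))   # stand-in A-scan matrix (tx, rx, time)

xs = np.linspace(-10e-3, 10e-3, 80)
zs = np.linspace(1e-3, 30e-3, 120)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = np.hypot(elems - x, z)               # element-to-pixel distances
        tof = (d[:, None] + d[None, :]) / c      # transmit + receive time of flight per pair
        idx = np.clip((tof * fs).astype(int), 0, n_t - 1)
        image[iz, ix] = np.abs(fmc[np.arange(16)[:, None], np.arange(16)[None, :], idx].sum())
print(image.shape, image.max())
```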
APA, Harvard, Vancouver, ISO, and other styles
9

Mathá, Natalia, Konstantin Schekotihin, Matthias Bergner, Doriana Cobârzan, and Marco Hudelist. "Automated Labeling Infrastructure for Failure Analysis." In ISTFA 2022. ASM International, 2022. http://dx.doi.org/10.31399/asm.cp.istfa2022p0036.

Full text
Abstract:
The development of intelligent assistants helping Failure Analysis (FA) engineers in their daily work is essential to any digitalization strategy. In particular, these systems must solve various computer vision or natural language processing problems to select the most critical information from heterogeneous data, like images or texts, and present it to the users. Modern artificial intelligence (AI) techniques approach these tasks with machine learning (ML) methods. The latter, however, require large volumes of training data to create models that solve the required problems. In most cases, enterprise clouds store vast volumes of data captured while applying various FA methods. Nevertheless, this data is useless for ML training algorithms since it is stored in forms that can only be interpreted by highly trained specialists. In this paper, we present an approach to embedding an annotation process in the everyday routines of FA engineers. Its services can easily be embedded in existing software solutions to (i) capture and store the semantics of each data piece in machine-readable form and (ii) provide predictions of ML models trained on previously annotated data to simplify the annotation task. Preliminary experiments with the prototype show that extending an image editor used by FA engineers with the services provided by the infrastructure can significantly simplify and speed up the annotation process.
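A minimal sketch of the pre-annotation loop the paper describes, assuming a toy feature representation and a nearest-neighbour stand-in for the trained model: the service proposes a label, the engineer confirms it, and the result is stored in machine-readable form for future training.

```python
# Illustrative pre-annotation loop (not the paper's infrastructure).
import json
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_prev = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])   # features of previously annotated images
y_prev = ["void", "void", "crack", "crack"]                           # assumed defect classes
model = KNeighborsClassifier(n_neighbors=1).fit(X_prev, y_prev)

new_image = {"id": "SEM_0042", "features": [0.85, 0.15]}              # assumed feature extraction output
suggested = model.predict([new_image["features"]])[0]
confirmed = suggested                                                  # engineer accepts or corrects the suggestion

with open("annotations.jsonl", "a") as f:                              # semantics stored machine-readably
    f.write(json.dumps({"image": new_image["id"], "label": confirmed, "source": "ml+human"}) + "\n")
print("suggested:", suggested)
```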
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Manish Kumar, Humberto Parra, Obeida El Jundi, Houcine Ben Jeddou, Chakib Kada Kloucha, and Hussein Mustapha. "Using Machine Learning to Capture High-Permeability Streaks in Reservoir Models." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211661-ms.

Full text
Abstract:
Permeability modelling remains a major challenge in the reservoir modelling exercise. The main reasons are the limited availability of measured input data and the effect of different geological processes on reservoir permeability, which together leave high-permeability streaks unrepresented in the model. In this paper, we present a machine-learning (ML) driven approach that captures the permeability variation in the reservoir using available input data. In ML, clustering is an unsupervised approach aimed at automatically grouping data with similar properties. We use several clustering techniques to automatically identify high-permeability data points by dividing the data into groups, also known as clusters, then choosing the cluster with the maximum permeability and assigning it a new rock type. For each rock type, we fit and evaluate many ML regression models and show that they outperform traditional fitting approaches. Porosity and several openhole log properties are used as input for the regression models. By fixing porosity but varying the other properties, the variability of permeability values is predicted. Clustering with the K-Means algorithm proved an efficient approach to automated identification of high permeability. Several ML models were trained and evaluated, and the models with the best error scores, namely mean squared error (MSE) and R-squared (R2), were chosen for further predictions. Random Forest was among the top models for a variety of rock types. In general, complex curve fitting using ML outperformed traditional fitting approaches (i.e., straight-line fitting) and demonstrated high potential for accurate, automated identification and integration of high permeability. The predicted permeability has been calibrated with well-test permeability data.
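The two steps summarized above can be sketched with scikit-learn: K-Means isolates the highest-permeability cluster, and a Random Forest regression is then fit per rock type; the synthetic porosity and log inputs are assumptions, not the paper's data.

```python
# Illustrative high-permeability clustering plus per-cluster regression (not the paper's tuned workflow).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
phi = rng.uniform(0.05, 0.30, 500)                        # porosity
gr = rng.uniform(20, 120, 500)                            # gamma ray, API units (assumed log input)
log_k = 8.0 * phi - 0.01 * gr + rng.normal(0, 0.2, 500)   # synthetic log10-permeability
log_k[phi > 0.25] += 1.5                                  # a synthetic high-permeability "streak"

X = np.column_stack([phi, gr, log_k])
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
high = clusters == np.argmax([log_k[clusters == c].mean() for c in range(3)])

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.column_stack([phi, gr])[high], log_k[high])     # one model per rock type; high-perm type shown
print("high-perm samples:", high.sum(),
      "R^2:", round(rf.score(np.column_stack([phi, gr])[high], log_k[high]), 3))
```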
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Automated Data Capture"

1

Berney, Ernest, Jami Lynn Daugherty, and Lulu Edwards. Validation of the automatic dynamic cone penetrometer. Engineer Research and Development Center (U.S.), July 2022. http://dx.doi.org/10.21079/11681/44704.

Full text
Abstract:
The U.S. military requires a rapid means of measuring subsurface soil strength for construction and repair of expeditionary pavement surfaces. Traditionally, a dynamic cone penetrometer (DCP) has served this purpose, providing strength-with-depth profiles in natural and prepared pavement surfaces. To improve upon this device, the Engineer Research and Development Center (ERDC) validated a new battery-powered automatic dynamic cone penetrometer (A-DCP) apparatus that automates the driving process by using a motor-driven hammering cap placed on top of a traditional DCP rod. The device improves upon a traditional DCP by applying three to four blows per second while digitally recording depth, blow count, and California Bearing Ratio (CBR). An integrated Global Positioning Sensor (GPS) and Bluetooth® connection allow for real-time data capture and stationing. Similarity between the DCP and the A-DCP was illustrated by generating a new A-DCP calibration curve; this curve relates penetration rate to field CBR and nearly follows the DCP calibration, with the exception of a slight offset. Field testing of the A-DCP showed less variability and more consistent strength measurement with depth, at a speed five times greater than that of the DCP, and with minimal physical exertion by the operator.
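The calibration relationship mentioned above is conventionally a power law between penetration rate and CBR; the sketch below fits one in log-log space, using the commonly cited DCP relation to generate reference values and an assumed offset for the A-DCP, neither of which is taken from the report.

```python
# Power-law calibration of penetration rate (mm/blow) to CBR (illustrative fit only; the A-DCP
# offset and the reference relation CBR ~ 292 / PR**1.12 are assumptions for this sketch).
import numpy as np

pr = np.linspace(2, 60, 30)                     # penetration rate, mm/blow
cbr_dcp = 292.0 / pr ** 1.12                    # commonly cited DCP calibration (reference values)
cbr_adcp = cbr_dcp * 0.9                        # assumed slight offset for the A-DCP

# Fit CBR = a * PR**b for the A-DCP data in log-log space
b, log_a = np.polyfit(np.log(pr), np.log(cbr_adcp), 1)
print(f"A-DCP calibration: CBR ~ {np.exp(log_a):.1f} * PR^{b:.2f}")
```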
APA, Harvard, Vancouver, ISO, and other styles
2

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Full text
Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
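One of the damage-quantification steps listed above, measuring concrete cracks, is often approached by skeletonizing a binary crack mask and reading widths from a distance transform; the synthetic mask and pixel scale below are assumptions, and this is not the report's algorithm.

```python
# Illustrative crack length/width measurement from a binary crack mask (not the report's method).
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

mask = np.zeros((200, 200), dtype=bool)
mask[100:103, 20:180] = True                      # synthetic 3-pixel-wide horizontal "crack"

skeleton = skeletonize(mask)
dist = ndimage.distance_transform_edt(mask)       # distance from each crack pixel to the background
length_px = skeleton.sum()                        # crude length estimate along the skeleton
width_px = 2.0 * dist[skeleton].mean()            # rough width from the medial-axis distances
mm_per_px = 0.5                                   # assumed ground sampling distance
print(f"length ~ {length_px * mm_per_px:.1f} mm, mean width ~ {width_px * mm_per_px:.2f} mm")
```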
APA, Harvard, Vancouver, ISO, and other styles