Journal articles on the topic 'Automated Data Capture'

Consult the top 50 journal articles for your research on the topic 'Automated Data Capture'.

1

Priestnall, G., R. E. Marston, and D. G. Elliman. "Arrowhead recognition during automated data capture." Pattern Recognition Letters 17, no. 3 (March 1996): 277–86. http://dx.doi.org/10.1016/0167-8655(95)00117-4.

2

Myler, H. R., and A. J. Gonzalez. "Automated design data capture using relaxation techniques." ACM SIGART Bulletin, no. 108 (April 1989): 169–70. http://dx.doi.org/10.1145/63266.63301.

3

Asuncion, Hazeline U. "Automated data provenance capture in spreadsheets, with case studies." Future Generation Computer Systems 29, no. 8 (October 2013): 2169–81. http://dx.doi.org/10.1016/j.future.2013.04.009.

4

Damova, Mariana. "Linked Open Data Prototype of the Historical Archive of the European Commission." Archiving Conference 2020, no. 1 (April 7, 2020): 92–97. http://dx.doi.org/10.2352/issn.2168-3204.2020.1.0.92.

Abstract:
The European Cultural Heritage Strategy for the 21st century has led to an increased demand for fast, efficient and faithful 3D digitization technologies for cultural heritage artefacts. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used and automated today, 3D digitization often still requires significant manual intervention, time and money. To overcome this, the authors have developed CultLab3D, the world's first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process, thus making it possible to capture and archive objects on a large scale and produce highly accurate, photo-realistic representations.
5

Camargo, Jonathan, Aditya Ramanathan, Noel Csomay-Shanklin, and Aaron Young. "Automated gap-filling for marker-based biomechanical motion capture data." Computer Methods in Biomechanics and Biomedical Engineering 23, no. 15 (July 11, 2020): 1180–89. http://dx.doi.org/10.1080/10255842.2020.1789971.

6

Crichton, D. J., L. Cinquini, H. Kincaid, A. Mahabal, A. Altinok, K. Anton, M. Colbert, et al. "From space to biomedicine: Enabling biomarker data science in the cloud." Cancer Biomarkers 33, no. 4 (April 18, 2022): 479–88. http://dx.doi.org/10.3233/cbm-210350.

Abstract:
NASA’s Jet Propulsion Laboratory (JPL) is advancing research capabilities for data science with two of the National Cancer Institute’s major research programs, the Early Detection Research Network (EDRN) and the Molecular and Cellular Characterization of Screen-Detected Lesions (MCL), by enabling data-driven discovery for cancer biomarker research. The research team pioneered a national data science ecosystem for cancer biomarker research to capture, process, manage, share, and analyze data across multiple research centers. By collaborating on software and data-driven methods developed for space and earth science research, the biomarker research community is heavily leveraging similar capabilities to support the data and computational demands to analyze research data. This includes linking diverse data from clinical phenotypes to imaging to genomics. The data science infrastructure captures and links data from over 1600 annotations of cancer biomarkers to terabytes of analysis results on the cloud in a biomarker data commons known as “LabCAS”. As the data increases in size, it is critical that automated approaches be developed to “plug” laboratories and instruments into a data science infrastructure to systematically capture and analyze data directly. This includes the application of artificial intelligence and machine learning to automate annotation and scale science analysis.
7

Schmoker, Bethany A., Shane Urban, and Arek J. Wiktor. "112 Developing Outpatient Registry to Capture Data Post Hospitalization." Journal of Burn Care & Research 43, Supplement_1 (March 23, 2022): S73. http://dx.doi.org/10.1093/jbcr/irac012.115.

Abstract:
Introduction: Per the 2019 ABA reverification requirements, a burn center must see >75% of all inpatients (IP) who require an outpatient (OP) follow-up after discharge. In prior years, we utilized the inpatient registry and built a report to track patient follow-up. With the report, we were able to compare the number of Burn Clinic return patients against admissions to get the percentage. This process required hours of focused effort. We sought to optimize the process for determining IP follow-up at our ABA verified burn center. In addition, we hoped to better quantify the efficacy of our OP clinic.

Methods: An OP registry was developed in December 2019 utilizing an automated report from our electronic medical record (EMR) and imported into a custom-built, secure, web-based software platform designed to support data capture for research studies. Employing various automation techniques, we were able to eliminate the need for manual abstraction by our burn registry team. Metrics tracked in the OP registry included: type of patient visit (New Patient, Return Patient, and Telehealth), diagnoses, zip codes of patient residence, payer methods, and total number of clinic encounters per year. We collected data from January 2020 through the present, with 2020 being the first full year in the OP registry. The initial effort required to design, automate, and import data was approximately 18 hours. The report import takes approximately 5 minutes.

Results: The OP registry has given us the ability to create a multitude of graphs from the OP clinic data. During the review period our OP clinic saw patients from 19 different US states, encompassing 2,710 total OP visits. The median number of monthly OP clinic visits was 235 [IQR 210-246]. The median number of clinic visits per patient was 2 [IQR 1-4]. The majority of clinic visits were return patients (55%, n = 1595), followed by new patients (31%, n = 914) and telehealth visits (14%, n = 399). Finally, our analysis of the OP Clinic Registry demonstrated that we saw 82% (309/374) of inpatients that required follow-up care, exceeding the 75% expected by the ABA.

Conclusions: The creation of an automated OP registry can assist the tracking of discharged patients and reduce the amount of effort needed to track ABA-required metrics. In addition, this OP registry can be expanded to track both IP and OP outcomes. This is crucial for quality improvement for the burn program as a whole.
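The registry metrics reported here lend themselves to a simple illustration. Below is a minimal sketch of registry-style calculations done with pandas over a toy encounter table; the column names and values are invented stand-ins for the automated EMR report the authors describe.

```python
# Sketch: registry-style metrics like those reported above, computed with
# pandas over a toy encounter table. Column names and values are invented.
import pandas as pd

encounters = pd.DataFrame({
    "patient_id":  [101, 101, 102, 103, 103, 103, 104],
    "visit_month": ["2020-01", "2020-01", "2020-01", "2020-02",
                    "2020-02", "2020-03", "2020-03"],
    "visit_type":  ["New", "Return", "New", "New", "Return", "Return", "Telehealth"],
})

monthly = encounters.groupby("visit_month").size()        # visits per month
per_patient = encounters.groupby("patient_id").size()     # visits per patient
print("median monthly visits:", monthly.median())
print("median visits/patient:", per_patient.median())
print(encounters["visit_type"].value_counts(normalize=True).round(2))

# ABA follow-up rate: discharged inpatients later seen in clinic
discharged = pd.Series([101, 102, 105])                   # needed follow-up
rate = discharged.isin(encounters["patient_id"]).mean()
print(f"follow-up rate: {rate:.0%}")                      # 2 of 3 -> 67%
```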
8

Wiltshire, S. E., D. G. Morris, and M. A. Beran. "Digital Data Capture and Automated Overlay Analysis for Basin Characteristic Calculation." Cartographic Journal 23, no. 1 (June 1986): 60–65. http://dx.doi.org/10.1179/caj.1986.23.1.60.

9

Udoka, Silvanus J. "Automated data capture techniques: A prerequisite for effective integrated manufacturing systems." Computers & Industrial Engineering 21, no. 1-4 (January 1991): 217–21. http://dx.doi.org/10.1016/0360-8352(91)90091-j.

10

Umberger, Reba, Chayawat “Yo” Indranoi, Melanie Simpson, Rose Jensen, James Shamiyeh, and Sachin Yende. "Enhanced Screening and Research Data Collection via Automated EHR Data Capture and Early Identification of Sepsis." SAGE Open Nursing 5 (January 2019): 237796081985097. http://dx.doi.org/10.1177/2377960819850972.

Abstract:
Clinical research in sepsis patients often requires gathering large amounts of longitudinal information. The electronic health record can be used to identify patients with sepsis, improve participant study recruitment, and extract data. The process of extracting data in a reliable and usable format is challenging, despite standard programming language. The aims of this project were to explore infrastructures for capturing electronic health record data and to apply criteria for identifying patients with sepsis. We conducted a prospective feasibility study to locate and capture/abstract electronic health record data for future sepsis studies. We located parameters as displayed to providers within the system and then captured data transmitted in Health Level Seven® interfaces between electronic health record systems into a prototype database. We evaluated our ability to successfully identify patients admitted with sepsis in the target intensive care unit (ICU) at two cross-sectional time points and then over a 2-month period. A majority of the selected parameters were accessible using an iterative process to locate and abstract them to the prototype database. We successfully identified patients admitted to a 20-bed ICU with sepsis using four data interfaces. Retrospectively applying similar criteria to data captured for 319 patients admitted to ICU over a 2-month period was less sensitive in identifying patients admitted directly to the ICU with sepsis. Classification into three admission categories (sepsis, no-sepsis, and other) was fair (Kappa .39) when compared with manual chart review. This project confirms reported barriers in data extraction. Data can be abstracted for future research, although more work is needed to refine and create customizable reports. We recommend that researchers engage their information technology department to electronically apply research criteria for improved research screening at the point of ICU admission. Using clinical electronic health records data to classify patients with sepsis over time is complex and challenging.
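The data interfaces described here carry HL7 messages, which are pipe-delimited text. As a rough illustration (not the study's actual feed or screening rules), the sketch below parses OBX observation segments from a toy HL7 v2 message and applies illustrative SIRS-style thresholds.

```python
# Sketch: pulling observation values out of a toy HL7 v2 message, then
# applying simple SIRS-style screening. Message text and thresholds are
# illustrative only.
def parse_obx_values(hl7_message: str) -> dict:
    """Return {observation-id: value} from OBX segments (pipe-delimited)."""
    values = {}
    for segment in hl7_message.strip().splitlines():
        fields = segment.split("|")
        if fields[0] == "OBX" and len(fields) > 5:
            obs_id = fields[3].split("^")[0]   # OBX-3: observation identifier
            values[obs_id] = fields[5]         # OBX-5: observation value
    return values

msg = ("MSH|^~\\&|LAB|ICU|...\n"
       "OBX|1|NM|WBC^Leukocytes||15.2|10*9/L\n"
       "OBX|2|NM|TEMP^Temperature||38.9|C")
obs = parse_obx_values(msg)
sirs_flags = [float(obs.get("WBC", 0)) > 12, float(obs.get("TEMP", 0)) > 38.0]
print(obs, "screen positive:", sum(sirs_flags) >= 2)
```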
11

Sáenz-Adán, Carlos, Francisco J. García-Izquierdo, Beatriz Pérez, Trung Dong Huynh, and Luc Moreau. "Automated and non-intrusive provenance capture with UML2PROV." Computing 104, no. 4 (December 10, 2021): 767–88. http://dx.doi.org/10.1007/s00607-021-01012-x.

Abstract:
Data provenance is a form of knowledge graph providing an account of what a system performs, describing the data involved, and the processes carried out over them. It is crucial to ascertaining the origin of data, validating their quality, auditing applications behaviours, and, ultimately, making them accountable. However, instrumenting applications, especially legacy ones, to track the provenance of their operations remains a significant technical hurdle, hindering the adoption of provenance technology. UML2PROV is a software-engineering methodology that facilitates the instrumentation of provenance recording in applications designed with UML diagrams. It automates the generation of (1) templates for the provenance to be recorded and (2) the code to capture values required to instantiate those templates from an application at run time, both from the application’s UML diagrams. By so doing, UML2PROV frees application developers from manual instrumentation of provenance capturing while ensuring the quality of recorded provenance. In this paper, we present in detail UML2PROV’s approach to generating application code for capturing provenance values via the means of Bindings Generation Module (BGM). In particular, we propose a set of requirements for BGM implementations and describe an event-based design of BGM that relies on the Aspect-Oriented Programming (AOP) paradigm to automatically weave the generated code into an application. Finally, we present three different BGM implementations following the above design and analyze their pros and cons in terms of computing/storage overheads and implications to provenance consumers.
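UML2PROV weaves its generated capture code into the application via AOP. The Python decorator below is only a loose analogy of that idea, wrapping a function so each call is logged as a PROV-like activity; it does not reproduce the template bindings UML2PROV derives from UML diagrams.

```python
# Sketch: the AOP idea in miniature -- capturing provenance around a
# function without touching its body. A decorator stands in for the
# aspect; the log is a toy stand-in for a PROV document.
import functools, time

PROVENANCE_LOG = []   # stand-in for a bindings store / PROV document

def capture_provenance(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        PROVENANCE_LOG.append({
            "activity": func.__name__,   # prov:Activity
            "used": args,                # entities the call used
            "generated": result,         # entity the call generated
            "time": time.time(),
        })
        return result
    return wrapper

@capture_provenance
def normalise(record: str) -> str:
    return record.strip().lower()

normalise("  Sample-A  ")
print(PROVENANCE_LOG)
```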
12

Campbell, S. "Tracking lean (automated shop floor data-capture technologies for lean automotive manufacturing)." Manufacturing Engineer 83, no. 1 (February 1, 2004): 38–41. http://dx.doi.org/10.1049/me:20040107.

13

Rosenberg, Aaron S., Albert William Riedl, Michelle A. Quan, Kwan-Keat Ang, Joseph M. Tuscano, Naseem S. Esteghamat, Brian A. Jonas, et al. "Identifying Multiple Myeloma Patients Using Automated Data Capture from Electronic Medical Records." Blood 140, Supplement 1 (November 15, 2022): 7929–31. http://dx.doi.org/10.1182/blood-2022-165607.

14

Brakewood, Candace, Niloofar Ghahramani, Jonathan Peters, Eunjin Kwak, and Jake Sion. "Real-Time Riders: A First Look at User Interaction Data from the Back End of a Transit and Shared Mobility Smartphone App." Transportation Research Record: Journal of the Transportation Research Board 2658, no. 1 (January 2017): 56–63. http://dx.doi.org/10.3141/2658-07.

Abstract:
A fundamental component of transit planning is understanding passenger travel patterns. However, traditional data sources used to study transit travel have some noteworthy drawbacks. For example, manual collection of travel surveys can be expensive, and data sets from automated fare collection systems often include only one transit system and do not capture multimodal trips (e.g., access and egress mode). New data sources from smartphone applications offer the opportunity to study transit travel patterns across multiple metropolitan regions and transit operators at little to no cost. Moreover, some smartphone applications integrate other shared mobility services, such as bikesharing, carsharing, and ride-hailing, which can provide a multimodal perspective not easily captured in traditional data sets. The objective of this research was to take a first look at an emerging data source: back-end data from user interactions with a smartphone application. The specific data set used in this paper was from a widely used smartphone application called Transit that provides real-time information about public transit and shared mobility services. Visualizations of individuals’ interactions with the Transit app were created to demonstrate three unique aspects of this data set: the ability to capture multicity transit travel, the ability to capture multiagency transit travel, and the ability to capture multimodal travel, such as the use of bikeshare to access transit. This data set was then qualitatively compared with traditional transit data sources, including travel surveys and automated fare collection data. The findings suggest that the data set has potential advantages over traditional data sources and could help transit planners better understand how passengers travel.
15

Kaspar, Mathias, Georg Fette, Monika Hanke, Maximilian Ertl, Frank Puppe, and Stefan Störk. "Automated provision of clinical routine data for a complex clinical follow-up study: A data warehouse solution." Health Informatics Journal 28, no. 1 (January 2022): 146045822110580. http://dx.doi.org/10.1177/14604582211058081.

Abstract:
A deep integration of routine care and research remains challenging in many respects. We aimed to show the feasibility of an automated transformation and transfer process feeding deeply structured data with a high level of granularity collected for a clinical prospective cohort study from our hospital information system to the study’s electronic data capture system, while accounting for study-specific data and visits. We developed a system integrating all necessary software and organizational processes, which was then used in the study. The process and key system components are described together with descriptive statistics to show its feasibility in general and to identify individual challenges in particular. Data from 2051 patients enrolled between 2014 and 2020 was transferred. We were able to automate the transfer of approximately 11 million individual data values, representing 95% of all entered study data. These were recorded in n = 314 variables (28% of all variables), with some variables being used multiple times for follow-up visits. Our validation approach allowed for constant good data quality over the course of the study. In conclusion, the automated transfer of multi-dimensional routine medical data from HIS to study databases using specific study data and visit structures is complex, yet viable.
16

SUNG, RAYMOND CW, JAMES M. RITCHIE, THEODORE LIM, YING LIU, and ZOE KOSMADOUDI. "THE AUTOMATED GENERATION OF ENGINEERING KNOWLEDGE USING A DIGITAL ENGINEERING TOOL: AN INDUSTRIAL EVALUATION CASE STUDY." International Journal of Innovation and Technology Management 09, no. 06 (December 2012): 1271001. http://dx.doi.org/10.1142/s0219877012710010.

Abstract:
In a knowledge-based economy, it will be crucial to capture expertise and rationale in working environments of all kinds as the need develops to understand how people are working, the intuitive processes they use as they carry out tasks and make decisions, and to determine the most effective methods and rationales for solving problems. Key outputs from this will be the capability to automate decision-making activities and to support training and learning in competitive business environments. Knowledge capture in knowledge-based economies will also be important in a wide range of sectors from the financial and business domains through to engineering and construction. In traditional expert environments, current manual knowledge capture techniques tend to be time-consuming, turgid and, if applied during an activity, interrupt the "expert" whilst they are carrying out the task. The alternative is to do this after the event, which loses important information about the process due to the individual usually forgetting a great deal of the decisions and alternatives they have used during a task session. With the advent and widespread use of computerized technology within business, this paper contends that new opportunities exist with regard to user logging and subsequent data analysis which mean that there is considerable potential for automating or semi-automating this kind of knowledge capture. As a case study demonstrating the possibility of attaining automated knowledge capture, this work investigates product design. Within long lifecycle products of all kinds there is a need to capture the engineering rationale, process, information and knowledge created during a design session. Once these data have been captured, in an automated and unobtrusive manner, they must be represented in a fashion which allows them to be easily accessed, understood, stored and reused at a later date. This can subsequently be used to inform experienced engineers of decisions taken much earlier in the design process or used to train and support inexperienced engineers while they are moving up the learning curve. Having these data available is especially important in long lifecycle projects since many design decisions are made early on in the process and are then required to be understood by engineers a number of years down the line. There is also the likelihood that if an engineer were to leave during the project, any undocumented design knowledge relating to their contribution to the design process will leave with them. This paper describes research on non-intrusively capturing and formalizing product lifecycle knowledge by demonstrating the automated capture of engineering processes through user logging using an immersive virtual reality (VR) system for cable harness design and assembly planning. Furthermore, several industrial collaborators of the project have been visited to determine what their knowledge capture practices are; these findings are also detailed. Computerized technology and business management systems in the knowledge-based economies of the future will require the capture of expertise as quickly and effectively as possible with minimum overhead to the company along with the formal storage and access to such key data. The application of the techniques and knowledge representations presented in this paper demonstrates the potential for doing this in both engineering and non-engineering domains.
17

He, Jin, Kuo-Yi Lin, and Ya Dai. "A Data-Driven Innovation Model of Big Data Digital Learning and Its Empirical Study." Information Dynamics and Applications 1, no. 1 (December 27, 2022): 35–43. http://dx.doi.org/10.56578/ida010105.

Abstract:
Digital learning is the use of telecommunication technology to deliver information for education and training. With the accelerating growth of the web, large volumes of data are collected in automated or semi-automated ways. The four Vs of big data (Volume, Velocity, Variety and Veracity) make it more challenging to extract useful value through a systematic framework. This study aims to construct a data model for big data digital learning. Based on the digital learning data, a data-driven innovation framework was proposed to identify data forms and collect data. A Bayesian network was proposed to capture the learning model and extract students' user experience in order to enhance learning efficiency. An empirical study was conducted at a university to validate the proposed approach. The results have been used to support strategies that improve student learning outcomes and competitiveness.
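As a toy illustration of the kind of conditional dependency a Bayesian network captures from learning logs, the sketch below estimates a single conditional probability table by counting; the variables and data are invented, not the study's model.

```python
# Sketch: a minimal two-node Bayesian network estimated by counting,
# giving P(pass | engagement) from hypothetical learning logs.
from collections import Counter

# (engagement_level, passed_course) pairs -- invented data
logs = [("high", 1), ("high", 1), ("high", 0),
        ("low", 0), ("low", 1), ("low", 0)]

prior = Counter(level for level, _ in logs)
joint = Counter(logs)
for level in prior:
    p = joint[(level, 1)] / prior[level]   # CPT entry P(pass=1 | engagement)
    print(f"P(pass | engagement={level}) = {p:.2f}")
```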
18

Hsiou, Dong-Chong, Fay Huang, Fu Jie Tey, Tin-Yu Wu, and Yi-Chuan Lee. "An Automated Crop Growth Detection Method Using Satellite Imagery Data." Agriculture 12, no. 4 (April 2, 2022): 504. http://dx.doi.org/10.3390/agriculture12040504.

Abstract:
This study develops an automated crop growth detection APP, with the functionality to access the cadastral data for the target field, to be used for a satellite-imagery-based field survey. A total of 735 ground-truth records of the cabbage cultivation areas in Yunlin were collected via the implemented APP in order to train a deep learning model to make accurate predictions of the growth stages of the cabbage from 0 to 70 days. A regression analysis was performed using the gradient boosting decision tree (GBDT) technique. The model was trained on multitemporal multispectral satellite images, which were retrieved from the ground-truth data. The experimental results show that the mean absolute error of the predictions is 8.17 days, and that 75% of the predictions have errors less than 11 days. Moreover, the GBDT algorithm was also adopted for the classification analysis. After planting, the cabbage growth stages can be divided into the cupping, early heading, and mature stages. For each stage, the prediction capture rate is 0.73, 0.51, and 0.74, respectively. If the days of growth of the cabbages are partitioned into two groups, the prediction capture rate for 0–40 days is 0.83, and that for 40–70 days is 0.76. Therefore, by applying appropriate data mining techniques, together with multitemporal multispectral satellite images, the proposed method can predict the growth stages of the cabbage automatically, which can assist the governmental agriculture department to make cabbage yield predictions when creating precautionary measures to deal with the imbalance between production and sales when needed.
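A minimal sketch of GBDT regression in the spirit of the study, using scikit-learn on synthetic band reflectances (the real model was trained on multitemporal satellite imagery tied to ground-truth records):

```python
# Sketch: GBDT regression of days-since-planting from band reflectances.
# Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))                        # e.g. blue/green/red/NIR
y = 70 * X[:, 3] + rng.normal(0, 5, size=500)   # growth days, toy signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (days):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```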
19

Adelson, Kerin B., Kim Framski, Patricia Lazette, Teresita Vega, and Rogerio Lilenbaum. "An electronic intervention to improve structured cancer stage data capture." Journal of Clinical Oncology 34, no. 7_suppl (March 1, 2016): 151. http://dx.doi.org/10.1200/jco.2016.34.7_suppl.151.

Abstract:
Background: Cancer staging is critical for prognosticating, treatment planning, outcomes analysis, registry reporting and clinical trial eligibility determination. Oncology EHRs have structured staging modules, but use by physicians is inconsistent. Typically, stage is entered as unstructured free text in clinical notes and cannot be used for reporting. Instead, institutions depend on the tumor registry (TR), which typically lags 6 months behind. Our Cancer Committee determined that real-time capture of structured cancer staging was an imperative.

Methods: We created an Epic best practice advisory (BPA) decision support tool that requires physicians to enter cancer stage if the following criteria are met: 1) unstaged cancer on the problem list; 2) an Epic staging module exists for that cancer; 3) the physician is from a specialty with staging expertise. This BPA was implemented 12/18/14. If physicians chose not to stage, they had to enter a reason why. The choices were: 1) cancer diagnosed before 2014, at which point the BPA was permanently removed; 2) staging studies not yet completed, at which point the BPA fired at a future encounter; 3) not a staging provider, at which point the BPA no longer fired for that individual provider; 4) cannot stage: document reason, at which point the BPA was permanently removed.

Results: We used TR data to determine the number of patients who were eligible for staging. In the 12 months prior to the intervention, 1480/5222, or 28%, of patients who were eligible for staging were staged in the structured staging module. After we launched the intervention, between 12/18/14 and 4/30/15, 1654/1831, or 90%, of eligible patients were staged electronically, a relative improvement of more than 200%.

Conclusions: Electronic decision support can dramatically improve rates of structured staging. Such data allows automated reports for clinical trial screening, outcomes analysis, quality comparisons, and reporting. We are now building automated reports for clinical trial eligibility; Commission on Cancer/QOPI breast, colon and lung measures; rates of palliative care consultation for advanced disease; and outcome measures like disease-free interval by stage and overall survival.
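The BPA's firing criteria translate naturally into a boolean rule. The sketch below restates them as plain code; the field and specialty names are invented, since the production rule lives in Epic's BPA configuration rather than in application code.

```python
# Sketch: the advisory's firing criteria restated as a boolean rule.
# Field and specialty names are invented.
STAGING_SPECIALTIES = {"medical oncology", "surgical oncology", "radiation oncology"}

def bpa_should_fire(patient: dict, provider: dict) -> bool:
    has_unstaged_cancer = any(
        problem["is_cancer"] and problem["stage"] is None
        for problem in patient["problem_list"]              # criterion 1
    )
    return (has_unstaged_cancer
            and patient["staging_module_exists"]            # criterion 2
            and provider["specialty"] in STAGING_SPECIALTIES)  # criterion 3

patient = {"problem_list": [{"is_cancer": True, "stage": None}],
           "staging_module_exists": True}
provider = {"specialty": "medical oncology"}
print(bpa_should_fire(patient, provider))  # True -> the advisory fires
```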
20

Elder, D. H. J., F. Shearer, A. Dawson, M. Pradeep, H. Parry, P. Currie, S. D. Pringle, J. George, A. Choy, and C. Lang. "012 Automated data capture from echocardiography reports to enhance heart failure population research." Heart 98, Suppl 1 (May 2012): A10.1—A10. http://dx.doi.org/10.1136/heartjnl-2012-301877b.12.

21

Kim, Jeffrey, and Maaz Khan. "Automated 360° Photographic Data Capture for Building Construction Projects: Analysis of a Prototype." IOP Conference Series: Earth and Environmental Science 1101, no. 8 (November 1, 2022): 082003. http://dx.doi.org/10.1088/1755-1315/1101/8/082003.

Abstract:
Effectively carrying out the digital documentation of a construction project site makes it possible to capture important events, allows for the measurement of progress, and creates an archive of data that can be called upon if issues arise in the future. It is best if the documentation can occur at regularly scheduled intervals to eliminate the possibility of missing important project details. Anecdotally, it has been found that practitioners are not good at keeping up with this important task – it is often viewed as a chore for a junior-level employee to undertake when there are no other important tasks to accomplish. An alternative is necessary. Robotic automation is often viewed as a good replacement for tedious activities that require a degree of accuracy in repetitiveness. Considering the possibilities for this type of automation, the researchers sought to understand the accuracy and reliability of a prototype programmable ground-based drone to automatically capture 360° project photographs. Experimentation was conducted by scoring the drone’s ability to complete a 10-point trial run across three different terrain types. This study focused on how terrain affects the drone’s accuracy and validates the findings by incorporating industry feedback about the prototype being proposed in this paper.
22

Haston, Elspeth, Robert Cubey, and David J. Harris. "Data concepts and their relevance for data capture in large scale digitisation of biological collections." International Journal of Humanities and Arts Computing 6, no. 1-2 (March 2012): 111–19. http://dx.doi.org/10.3366/ijhac.2012.0042.

Abstract:
Logistically, the data associated with biological collections can be divided into three main categories for digitisation: i) Label Data: the data appearing on the specimen on a label or annotation; ii) Curatorial Data: the data appearing on containers, boxes, cabinets and folders which hold the collections; iii) Supplementary Data: the data held separately from the collections in indices, archives and literature. Each of these categories of data have fundamentally different properties within the digitisation framework which have implications for the data capture process. These properties were assessed in relation to alternative data entry workflows and methodologies to create a more efficient and accurate system of data capture. We see a clear benefit in the prioritisation of curatorial data in the data capture process. These data are often only available at the cabinets, they are in a format suitable for allowing rapid data entry, and they result in an accurate cataloguing of the collections. Finally, the capture of a high resolution digital image enables additional data entry to be separated into multiple sweeps, and optical character recognition (OCR) software can be used to facilitate sorting images for fuller data entry, and giving potential for more automated data entry in the future.
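The OCR-assisted sorting step the authors mention can be prototyped in a few lines with Tesseract. The sketch below shows one way to route specimen images to data-entry queues; the file name and keyword list are placeholders, and it requires the Tesseract binary plus the pytesseract and Pillow packages.

```python
# Sketch: OCR-assisted sorting of specimen images into data-entry queues.
# File name and keywords are placeholders.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("specimen_label.png"))

# Route the image by what the label appears to contain
keywords = ("det.", "leg.", "coll.")   # hypothetical cues for full labels
queue = ("full_label_entry" if any(k in text.lower() for k in keywords)
         else "curatorial_only")
print(queue, "|", text[:60])
```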
23

Sanson Neto, Odair José, Leonardo Göbel Fernandes, Alceu André Badin, Winderson Eugenio dos Santos, and Eduardo Félix Ribeiro Romaneli. "System for acquisition and transmission of thermal images to monitor equipment of an electrical substation." Journal of Engineering and Exact Sciences 8, no. 7 (October 13, 2022): 14750–01. http://dx.doi.org/10.18540/jcecvl8iss7pp14750-01e.

Abstract:
Thermographic monitoring systems are achieving satisfactory results in identifying points with anomalous temperature, preventing faults from occurring. In the development of a methodology for automatic analysis of thermal images, an automated system for data acquisition and transmission is necessary. This article compares the different ways to acquire images from a FLIR® A700 thermal camera: the Atlas SDK, the web interface and the REST API, among which the latter stands out because of its simplicity and easy implementation. The article also describes the software developed for automated capture of thermal images and the communication system developed to transmit these images.
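A rough sketch of periodic image pulls over a camera's REST interface is shown below. The address and endpoint path are placeholders, not the documented FLIR A700 routes, so consult the camera's REST API reference for the real URLs.

```python
# Sketch: periodic snapshots over a thermal camera's REST interface.
# CAMERA address and the route are hypothetical placeholders.
import time
import requests

CAMERA = "http://192.168.1.50"        # hypothetical camera address

def capture_once(out_path: str) -> None:
    resp = requests.get(f"{CAMERA}/api/image/snapshot", timeout=10)  # placeholder route
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

for i in range(3):                    # e.g. one frame per minute
    capture_once(f"thermal_{i:03d}.jpg")
    time.sleep(60)
```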
24

Paquette, Steven, J. David Brantley, Brian D. Corner, Peng Li, and Thomas Oliver. "Automated Extraction of Anthropometric Data from 3D Images." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 38 (July 2000): 727–30. http://dx.doi.org/10.1177/154193120004403811.

Abstract:
The use of 3D scanning systems for the capture and measurement of human body dimensions is becoming commonplace. While the ability of available scanning systems to record the surface anatomy of the human body is generally regarded as acceptable for most applications, effective use of the images to obtain anthropometric data requires specially developed data extraction software. However, for large data sets, extraction of useful information can be quite time consuming. A major benefit therefore is to possess an automated software program that quickly facilitates the extraction of reliable anthropometric data from 3D scanned images. In this paper the accuracy and variability of two fully automated data extraction systems (Cyberware WB-4 scanner with Natick-Scan software and Hamamatsu BL Scanner with accompanying software) are examined and compared with measurements obtained from traditional anthropometry. In order to remove many confounding variables that living humans introduce during the scanning process, a set of clothing dressforms was chosen as the focus of study. An analysis of the measurement data generally indicates that automated data extraction compares favorably with standard anthropometry for some measurements but requires additional refinement for others.
25

Hamilton, Dane, Katina Michael, and Samuel Fosso Wamba. "Overcoming Visibility Issues in a Small-to-Medium Retailer Using Automatic Identification and Data Capture Technology." International Journal of E-Business Research 6, no. 2 (April 2010): 21–44. http://dx.doi.org/10.4018/jebr.2010040102.

Abstract:
In this paper, the authors examine the inventory control practices of a small-to-medium retailer to identify common challenges this type of organization experiences with respect to automated data capture (ADC) and the implementation of an enterprise-wide information system. The study explores a single case of a hardware store in a regional town in New South Wales, Australia. Four semi-structured interviews were conducted with employees, focusing on issues related to inventory control, including delivery discrepancies, checking and sorting of orders, locating stock and goods, loss prevention, customer purchasing and point-of-sale processing, and replenishment. Flowcharts illustrate the current processes of the retailer with an understanding of how ADC technologies like bar code and radio-frequency identification (RFID) impact the retailer. The findings promote an evolutionary approach toward the use of automated data capture technology by adopting barcode technology and subsequently introducing the complementary RFID technology.
26

Yuan, Hao, Calder Atta, Luke Tornabene, and Chenhong Li. "Assexon: Assembling Exon Using Gene Capture Data." Evolutionary Bioinformatics 15 (January 2019): 117693431987479. http://dx.doi.org/10.1177/1176934319874792.

Abstract:
Exon capture across species has been one of the most broadly applied approaches to acquire multi-locus data in phylogenomic studies of non-model organisms. Methods for assembling loci from short-read sequences (e.g., Illumina platforms) that rely on mapping reads to a reference genome may not be suitable for studies comprising species across a wide phylogenetic spectrum; thus, de novo assembling methods are more generally applied. Current approaches for assembling targeted exons from short reads are not particularly optimized as they cannot (1) assemble loci with low read depth, (2) handle large files efficiently, and (3) reliably address issues with paralogs. Thus, we present Assexon: a streamlined pipeline that de novo assembles targeted exons and their flanking sequences from raw reads. We tested our method using reads from Lepisosteus osseus (4.37 Gb) and Boleophthalmus pectinirostris (2.43 Gb), which are captured using baits that were designed based on genome sequence of Lepisosteus oculatus and Oreochromis niloticus, respectively. We compared performance of Assexon to PHYLUCE and HybPiper, which are commonly used pipelines to assemble ultra-conserved element (UCE) and Hyb-seq data. A custom exon capture analysis pipeline (CP) developed by Yuan et al was compared as well. Assexon accurately assembled 3400 to 3800 (20%-28%) more loci than PHYLUCE and 1900 to 2300 (8%-14%) more loci than HybPiper across different levels of phylogenetic divergence. Assexon ran at least twice as fast as PHYLUCE and HybPiper. The number of loci assembled using CP was comparable with Assexon in both tests, while Assexon ran at least 7 times faster than CP. In addition, some steps of CP require the user’s interaction and are not fully automated, and this user time was not counted in our calculation. Both Assexon and CP retrieved no paralogs in the testing runs, but PHYLUCE and HybPiper did. In conclusion, Assexon is a tool for accurate and efficient assembling of large read sets from exon capture experiments. Furthermore, Assexon includes scripts to filter poorly aligned coding regions and flanking regions, calculate summary statistics of loci, and select loci with reliable phylogenetic signal. Assexon is available at https://github.com/yhadevol/Assexon .
27

Reddy, Y. Sai Subhash, Koye Sai Vishnu Vamsi, Golla Akhila, Anudeep Poonati, and Koye Jayanth. "An Automated Baby Monitoring System." International Journal of Engineering and Advanced Technology 10, no. 6 (August 30, 2021): 114–19. http://dx.doi.org/10.35940/ijeat.f3058.0810621.

Abstract:
Nowadays, caring for an infant is a tough job for parents who work away from home. This work presents an infant monitoring framework for busy guardians so they can ensure the appropriate care and wellbeing of their children. The framework can recognize the child’s movement and helps in detecting audio, particularly crying, and the infant’s current position can be predicted using a CNN, so the parent can check the status of the infant along with the sensor data while away from the infant. The proposed work reads the data from various sensors, and the data is then processed continuously by the Raspberry Pi. A Pi camera is also integrated to capture pictures from the video stream of the baby, so there is continuous monitoring of the baby. This infant monitoring framework is capable of detecting the temperature and crying state of the child automatically. The Raspberry Pi B module is utilized in managing all the connected components. A sound sensor is utilized to detect the baby’s crying, a temperature sensor to identify the infant’s temperature, and the Pi camera is utilized to capture the infant’s condition and record video or photographs of the baby during abnormal conditions and send them to the guardian or parent over the internet using IoT modules. This proposed framework can give busy guardians a simpler and more convenient way to take care of their infants.
28

Montaser, Ali, Ibrahim Bakry, Adel Alshibani, and Osama Moselhi. "Estimating productivity of earthmoving operations using spatial technologies." Canadian Journal of Civil Engineering 39, no. 9 (September 2012): 1072–82. http://dx.doi.org/10.1139/l2012-059.

Abstract:
This paper presents an automated method for estimating productivity of earthmoving operations in near-real-time. The developed method utilizes Global Positioning System (GPS) and Google Earth to extract the data needed to perform the estimation process. A GPS device is mounted on a hauling unit to capture the spatial data along designated hauling roads for the project. The variations in the captured cycle times were used to model the uncertainty associated with the operation involved. This was carried out by automated classification, data fitting, and computer simulation. The automated classification is applied through a spreadsheet application that classifies GPS data and identifies, accordingly, durations of different activities in each cycle using spatial coordinates and directions captured by GPS and recorded on its receiver. The data fitting was carried out using commercially available software to generate the probability distribution functions used in the simulation software “Extend V.6”. The simulation was utilized to balance the production of an excavator with that of the hauling units. A spreadsheet application was developed to perform the calculations. An example of an actual project was analyzed to demonstrate the use of the developed method and illustrates its essential features. The analyzed case study demonstrates how the proposed method can assist project managers in taking corrective actions based on the near-real-time actual data captured and processed to estimate productivity of the operations involved.
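The classification step (labelling each GPS fix by where it falls relative to the loading and dumping areas, then timing the cycle) can be sketched with plain geometry. The coordinates, zones, and thresholds below are invented for illustration.

```python
# Sketch: labelling GPS fixes by proximity to load/dump zones and timing
# one haul leg. Coordinates (metres) and thresholds are illustrative.
import math

LOAD_ZONE, DUMP_ZONE, RADIUS_M = (0.0, 0.0), (900.0, 600.0), 50.0

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def label(point):
    if dist(point, LOAD_ZONE) < RADIUS_M:
        return "loading"
    if dist(point, DUMP_ZONE) < RADIUS_M:
        return "dumping"
    return "travel"

# (seconds, (x, y)) fixes along a straight haul from loader to dump
track = [(t * 30.0, (t * 90.0, t * 60.0)) for t in range(11)]
labels = [(t, label(p)) for t, p in track]
leg_time = labels[-1][0] - labels[0][0]
print(labels[0], labels[-1], "leg time:", leg_time, "s")
```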
29

Moitra, Abha, Ravi Palla, and Arvind Rangarajan. "Automated Capture and Execution of Manufacturability Rules Using Inductive Logic Programming." Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 2 (February 18, 2016): 4028–34. http://dx.doi.org/10.1609/aaai.v30i2.19080.

Abstract:
Capturing domain knowledge can be a time-consuming process that typically requires the collaboration of a Subject Matter Expert and a modeling expert to encode the knowledge. In a number of domains and applications, this situation is further exacerbated by the fact that the Subject Matter Expert may find it difficult to articulate the domain knowledge as a procedure or rules, but instead may find it easier to classify instance data. To facilitate this type of knowledge elicitation from Subject Matter Experts, we have developed a system that automatically generates formal and executable rules from provided labeled instance data. We do this by leveraging the techniques of Inductive Logic Programming (ILP) to generate Horn clause based rules to separate out positive and negative instance data. We illustrate our approach on a Design For Manufacturability (DFM) platform where the goal is to design products that are easy to manufacture by providing early manufacturability feedback. Specifically we show how our approach can be used to generate feature recognition rules from positive and negative instance data supplied by Subject Matter Experts. Our platform is interactive, provides visual feedback and is iterative. The feature identification rules generated can be inspected, manually refined and vetted.
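As a toy stand-in for the ILP step, the sketch below induces the most specific conjunction of attribute tests shared by all positive (manufacturable) instances and checks that it rejects the negatives. Real ILP systems learn far richer Horn clauses; the features here are invented.

```python
# Sketch: inducing a conjunctive Horn-clause body from labelled instances.
# Feature names and values are invented stand-ins for DFM instance data.
positives = [{"hole_depth_ratio": "low", "wall": "thick", "draft": "yes"},
             {"hole_depth_ratio": "low", "wall": "thin",  "draft": "yes"}]
negatives = [{"hole_depth_ratio": "high", "wall": "thin", "draft": "yes"}]

# Keep only the literals true in every positive example
rule = set(positives[0].items())
for example in positives[1:]:
    rule &= set(example.items())

covers = lambda example: rule <= set(example.items())
assert all(covers(p) for p in positives) and not any(covers(n) for n in negatives)
print("learned body:", " AND ".join(f"{a}={v}" for a, v in sorted(rule)))
```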
30

McClung, Melissa W., Sarah A. Gumm, Megan E. Bisek, Amber L. Miller, Bryan C. Knepper, and Arthur J. Davidson. "Managing public health data: mobile applications and mass vaccination campaigns." Journal of the American Medical Informatics Association 25, no. 4 (November 13, 2017): 435–39. http://dx.doi.org/10.1093/jamia/ocx136.

Abstract:
In response to data collection challenges during mass immunization events, Denver Public Health developed a mobile application to support efficient public health immunization and prophylaxis activities. The Handheld Automated Notification for Drugs and Immunizations (HANDI) system has been used since 2012 to capture influenza vaccination data during Denver Health’s annual employee influenza campaign. HANDI has supported timely and efficient administration and reporting of influenza vaccinations through standardized data capture and database entry. HANDI’s mobility allows employee work locations and schedules to be accommodated without the need for a paper-based data collection system and subsequent manual data entry after vaccination. HANDI offers a readily extensible model for mobile data collection to streamline vaccination documentation and reporting, while improving data quality and completeness.
31

Mauntel, Timothy C., Darin A. Padua, Laura E. Stanley, Barnett S. Frank, Lindsay J. DiStefano, Karen Y. Peck, Kenneth L. Cameron, and Stephen W. Marshall. "Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System." Journal of Athletic Training 52, no. 11 (November 1, 2017): 1002–9. http://dx.doi.org/10.4085/1062-6050-52.10.12.

Abstract:
Context: The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.

Objective: To determine the reliability of an automated markerless motion-capture system for scoring the LESS.

Design: Cross-sectional study.

Setting: United States Military Academy.

Patients or Other Participants: A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).

Main Outcome Measure(s): Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.

Results: We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.

Conclusions: A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
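The agreement statistics used above are easy to reproduce. The sketch below computes Cohen's kappa, PABAK, and percentage agreement for one binary LESS item scored by two raters on invented data, using kappa = (po - pe)/(1 - pe) and, for two categories, PABAK = 2*po - 1.

```python
# Sketch: Cohen's kappa and PABAK for one binary item, two raters.
# Ratings are invented (1 = error present, 0 = absent).
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

n = len(rater_a)
po = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
p_yes = (sum(rater_a) / n) * (sum(rater_b) / n)
p_no = (1 - sum(rater_a) / n) * (1 - sum(rater_b) / n)
pe = p_yes + p_no                                          # chance agreement

kappa = (po - pe) / (1 - pe)
pabak = 2 * po - 1
print(f"kappa={kappa:.2f}  PABAK={pabak:.2f}  agreement={po:.0%}")
```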
32

Hussein, Sherif El-Sayed. "Intelligent Assessment for Pathological Gait Analysis." Key Engineering Materials 437 (May 2010): 334–38. http://dx.doi.org/10.4028/www.scientific.net/kem.437.334.

Abstract:
The accurate assessment of pathological gait for individual subjects is a major problem in rehabilitation centers. Automated or semi-automated gait analysis systems are important in assisting physicians in the diagnosis of various diseases. However, these systems are not only highly sophisticated but also require superior quality cameras and complex software which capture large amount of data that often proves difficult to interpret for clinical staff trying to gain insight into the patient’s condition. Automation and simplification of the analysis of gait data is therefore necessary if it is to be used more productively. This research proposes a simple and cost effective approach that utilizes artificial intelligence techniques to automate the analysis and diagnosis processes. It also offers a means to compare different treatment methods and their effectiveness during the course of treatment. Visualization software has also been developed to increase the diagnostic reliability.
33

Cheng, S., D. D. Lichti, and J. Matyas. "AUTOMATED GENERATION OF HIGH-QUALITY 3D POINT CLOUDS OF ANTLERS USING LOW-COST RANGE CAMERAS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 531–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-531-2022.

Abstract:
Three-dimensional imaging demonstrates advantages over traditional methods and has already proven feasible for measuring antler growth. However, antlers' velvet-covered surface and irregular structure pose challenges in efficiently obtaining high-quality antler data. Animal data capture using optical imaging devices and point cloud segmentation still require tedious manual work. To obtain 3D data of irregular biological targets like antlers, this paper proposes an automated workflow for high-quality 3D antler point cloud generation using low-cost range cameras. An imaging system of range cameras and one RGB camera is developed for automatic camera triggering and data collection without motion artifacts. The imaging system enables motion detection to ensure data collection occurs without any appreciable animal movement. The antler data are extracted automatically based on a fast k-d tree neighbor search to remove the irrelevant data. Antler point clouds from different cameras captured with various poses are aligned using target-based registration and the normal distribution transformation (NDT). The two-step registration achieves an overall RMSE of 4.8 mm for the target-based method and a Euclidean fitness score of 10.5 mm for the NDT. Complete antler point clouds are generated with a higher density than that of individual frames and improved quality with outliers removed.
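The fast-neighbour-search extraction step can be illustrated with SciPy's k-d tree: keep only the points of a frame that lie within a radius of rough seed points on the target, and drop the background. The arrays below are random stand-ins for real depth-camera frames.

```python
# Sketch: k-d tree background removal -- keep cloud points near seed
# points on the target. Data are synthetic stand-ins for camera frames.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, size=(20000, 3))    # raw frame: target + clutter
seeds = rng.uniform(-0.1, 0.1, size=(50, 3))   # rough points on the target

tree = cKDTree(seeds)
dist, _ = tree.query(cloud, k=1)               # nearest seed per cloud point
target = cloud[dist < 0.15]                    # radius threshold (metres)
print(f"kept {len(target)} of {len(cloud)} points")
```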
34

Otto, William. "#41 Manual Validation of an Automated Tool to Extract Blood Culture and Susceptibility Data from the Electronic Health Record for Children with Acute Myeloid Leukemia." Journal of the Pediatric Infectious Diseases Society 11, Supplement_1 (June 14, 2022): S2. http://dx.doi.org/10.1093/jpids/piac041.004.

Abstract:
Background: Children with acute myeloid leukemia (AML) receive high-intensity chemotherapy to achieve durable remission. AML chemotherapy causes bone marrow suppression, resulting in vulnerability to infection, most frequently bloodstream infections (BSI). While the most commonly described organisms include Gram-negative pathogens and streptococci, the epidemiology and resistance profile of these pathogens can change. It is important to monitor the epidemiology of these infections over time to inform clinical care. Previously, capture of these microbiology data required laborious manual chart reviews that are often done intermittently and with variable accuracy, limiting the impact of the results. We sought to develop and validate an automated process for extracting blood culture results, including antimicrobial susceptibility profiles for positive results, from the electronic health record (EHR).

Method: An automated tool to extract blood culture results from the EHR (Epic Systems, Verona, WI) was developed using SQL (Structured Query Language). The tool was applied to the EHR of all children with newly diagnosed AML treated at the Children's Hospital of Philadelphia from January 1, 2011 to December 31, 2020, regardless of subsequent relapse and treatment. Data from all blood cultures (including standard, fungal, mycobacterial, and subacute bacterial endocarditis blood cultures) were captured. Manual chart review was performed by an Infectious Diseases physician to abstract the same blood culture results to determine the accuracy of the automated extraction tool. The manual abstraction was considered the gold standard. The BSI epidemiology of AML patients during this time period was described to illustrate the utility of this tool.

Results: There were 91 children with newly diagnosed AML who received chemotherapy during the study period. Of the 3,150 cultures obtained and resulted, 206 (6.5%) were positive. There were 37 distinct pathogens identified (Table 1). Of positive cultures, 114 (55.3%) were reflexed to antimicrobial susceptibility testing (AST) per institutional standards. In total, 1,427 AST results were captured in the automated tool. Manual validation confirmed that concordance between the automated abstraction and chart review was 100% for the organisms grown on culture and accurately identified all AST results. The majority of organisms were Gram-positive. The most frequently identified species was Streptococcus mitis/oralis, found in 40/206 (19.4%) of positive cultures (Table 1). Of the S. mitis/oralis isolates that underwent AST, only 8/21 (38.1%) were susceptible to penicillin. Escherichia coli was the most common Gram-negative pathogen. Candida spp. accounted for 9.3% of detected pathogens, with C. krusei identified most commonly.

Conclusion: We developed an automated tool to extract blood culture results from the EHR of children and adolescents with AML at a single center. This tool was manually validated and found to be 100% accurate. It can efficiently capture the BSI epidemiology of a specific patient population at high risk for BSI. Further work is needed to confirm the accuracy of this tool at more centers. Implementation at other sites will allow this tool to be employed for various purposes, including large, multicenter epidemiology research studies and quality improvement projects.
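To give a flavour of the SQL extraction, the sketch below runs an analogous query against a toy SQLite table; Epic's actual reporting schema (and its table and column names) is proprietary, so everything here is invented.

```python
# Sketch: extracting positive blood-culture results with SQL, against a
# toy in-memory SQLite table. Schema and rows are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE blood_culture (patient_id, collected_date, organism, result);
INSERT INTO blood_culture VALUES
  (1, '2020-03-01', 'Streptococcus mitis/oralis', 'positive'),
  (1, '2020-03-02', NULL, 'negative'),
  (2, '2020-04-10', 'Escherichia coli', 'positive');
""")

rows = con.execute("""
SELECT patient_id, collected_date, organism
FROM blood_culture
WHERE result = 'positive'
ORDER BY collected_date;
""").fetchall()
print(rows)
```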
35

Dore, C., and M. Murphy. "CURRENT STATE OF THE ART HISTORIC BUILDING INFORMATION MODELLING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W5 (August 18, 2017): 185–92. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w5-185-2017.

Abstract:
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
APA, Harvard, Vancouver, ISO, and other styles
37

Riabov, Anton, Shirin Sohrabi, Daby Sow, Deepak Turaga, Octavian Udrea, and Long Vu. "Planning-Based Reasoning for Automated Large-Scale Data Analysis." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 282–90. http://dx.doi.org/10.1609/icaps.v25i1.13689.

Full text
Abstract:
In this paper, we apply planning-based reasoning to orchestrate the data analysis process automatically, with a focus on two applications: early detection of health complications in critical care, and detection of anomalous behaviors of network hosts in enterprise networks. Our system uses expert knowledge and AI planning to reason about possibly incomplete, noisy, or inconsistent observations, derived from data by deploying an open set of analytics, to generate plausible and consistent hypotheses about the state of the world. From these hypotheses, relevant actions are triggered leading to the deployment of additional analytics, or adaptation of existing analytics, that produce new observations for further reasoning. Planning-based reasoning is enabled by knowledge models obtained from domain experts that describe entities in the world, their states, and relationship to observations. To address the associated knowledge engineering challenges, we propose a modeling language named LTS++ and build an Integrated Development Environment. We also develop a process that provides support and guidance to domain experts, with no planning expertise, in defining and constructing models. We use this modeling process to capture knowledge for the two applications and to collect user feedback. Furthermore, we conduct empirical evaluation to demonstrate the feasibility of our approach and the benefits of using planning-based reasoning in these applications, at large real-world scales. Specifically, in the network monitoring scenario, we show that the system can dynamically deploy and manage analytics for the effective detection of anomalies and malicious behaviors with lead times of over 15 minutes, in an enterprise network with over 2 million hosts (entities).
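A minimal sketch of the observe–hypothesize–act loop the abstract describes, with invented states, observations, and analytics; the paper's LTS++ models and planner are far richer:

```python
# Minimal sketch of the observe -> hypothesize -> act loop.
# States, observation mappings, and analytics are invented placeholders.
MODEL = {
    # state: observations consistent with it
    "host_compromised": {"port_scan", "beaconing"},
    "host_healthy":     {"normal_traffic"},
}
ANALYTICS = {  # extra analytics to deploy per hypothesis
    "host_compromised": ["deep_packet_inspection"],
}

def hypotheses(observations: set[str]) -> list[str]:
    """Rank states by how many current observations they explain."""
    scored = [(len(expected & observations), s) for s, expected in MODEL.items()]
    return [s for n, s in sorted(scored, reverse=True) if n > 0]

obs = {"port_scan", "beaconing"}
for state in hypotheses(obs):
    for analytic in ANALYTICS.get(state, []):
        # deployed analytics produce new observations that feed back in
        print(f"hypothesis {state!r}: deploying {analytic}")
```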
APA, Harvard, Vancouver, ISO, and other styles
38

Williams, Dean K. "Capture Codes for Better Geology." SEG Discovery, no. 128 (January 1, 2022): 15–23. http://dx.doi.org/10.5382/segnews.2022-128.fea-01.

Full text
Abstract:
Automated core logging technology is starting to replace geologists in the core shed. Often-cited justifications for this include increased speed, multivariate sensors, and the perception that manual logging can be subjective and produces inconsistent data. An alternative is to keep the geologists and replace subjectivity with objectivity. The common practice of selecting lithology from a predetermined list of rock types can force subjective decisions. These lead to data inconsistencies that tend to increase as the rocks become progressively hydrothermally altered. A system of capture codes following the maxim, “first observe, then interpret,” is proposed as a tool to improve coding consistency and collect geologic data with greater resolution. The codes capture empirical geologic observations in a systematic and comprehensive fashion to produce a compact, computer-friendly format that facilitates data synthesis, analysis, and 3-D visualization. Capture codes do not replace any existing project or standardized company summary codes for rock types, alteration facies, or degree and style of mineralization. They capture the underlying, specific geologic observations required to make correct and consistent summary code categorizations. In other words, capture codes are empirical data, while summary codes are often subjective labels. Utilization of the codes improves the understanding of the project geology and consistency in coding between geologists, while simultaneously strengthening their field skills. After reading this article, a geologist should be able to pick up any rock and capture code the lithology and, if applicable, its alteration and mineralization as well.
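A sketch of the observation-versus-interpretation split the article advocates; the field names and the example code string are invented for illustration:

```python
# Sketch of the "first observe, then interpret" split: capture codes record
# empirical observations; summary codes are derived labels. Field names and
# the example capture-code string are hypothetical.
from dataclasses import dataclass

@dataclass
class IntervalLog:
    hole_id: str
    from_m: float
    to_m: float
    capture_code: str            # compact empirical observations
    summary_rock_type: str = ""  # interpreted later, never instead of the observation

log = IntervalLog("DDH-001", 112.0, 115.5, capture_code="f-gr qz-ser vn2%")
log.summary_rock_type = "phyllic-altered granodiorite"  # interpretation, kept separate
```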
APA, Harvard, Vancouver, ISO, and other styles
39

Vedavalli, Perigisetty, and Deepak Chenu. "A Deep Learning Based Data Recovery Approach for Missing and Erroneous Data of IoT Nodes." Sensors 23, no. 1 (December 24, 2022): 170. http://dx.doi.org/10.3390/s23010170.

Full text
Abstract:
Internet of things (IoT) nodes are deployed in large-scale automated monitoring applications to capture massive amounts of data from various locations in a time-series manner. The captured data are affected by several factors, such as device malfunctioning, unstable communication, environmental factors, synchronization problems, and unreliable nodes, which result in data inconsistency. Data recovery approaches are among the best solutions for reducing data inconsistency. This research provides a missing-data recovery approach based on the spatial-temporal (ST) correlation between IoT nodes in the network. The proposed approach has a clustering (CL) phase and a data recovery (DR) phase. In the CL phase, nodes are clustered based on their spatial and temporal relationships, and common neighbors are extracted. In the DR phase, missing data are recovered with the help of neighbor nodes using the ST-hierarchical long short-term memory (ST-HLSTM) algorithm. The proposed algorithm has been verified on real-world IoT-based hydraulic test rig data sets gathered from the ThingSpeak real-time cloud platform. The algorithm shows approximately 98.5% reliability compared with other existing algorithms, owing to its spatial-temporal features based on a deep neural network architecture.
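A minimal sketch of the neighbour-assisted, LSTM-based gap-filling idea, written in PyTorch; this is not the paper's ST-HLSTM, only an illustration of recovering a node's series from its own history plus neighbour readings:

```python
# Sketch only: a plain LSTM imputer, assuming neighbour-node readings are
# supplied as extra input channels. Dimensions are illustrative.
import torch
import torch.nn as nn

class Imputer(nn.Module):
    def __init__(self, n_neighbours: int, hidden: int = 32):
        super().__init__()
        # input per step: this node's (possibly zero-filled) reading + neighbours
        self.lstm = nn.LSTM(1 + n_neighbours, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted reading for the target node

    def forward(self, x):                  # x: (batch, time, 1 + n_neighbours)
        out, _ = self.lstm(x)
        return self.head(out)              # (batch, time, 1)

model = Imputer(n_neighbours=3)
window = torch.randn(8, 24, 4)             # 8 windows, 24 steps, self + 3 neighbours
estimates = model(window)                   # replace missing entries with these
```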
APA, Harvard, Vancouver, ISO, and other styles
40

Gaidenkov, A. V., M. I. Kanevskiy, A. S. Ostrovskiy, O. I. Ganyak, and N. Yu Chizhov. "Technology of automated video observation of a drogue-sensor basket in the problem of autonomous aerial refueling." Civil Aviation High Technologies 25, no. 4 (September 6, 2022): 20–43. http://dx.doi.org/10.26467/2079-0619-2022-25-4-20-43.

Full text
Abstract:
The paper proposes a technology for automated video-based observation (VBO) of a drogue-sensor in the problem of aerial refueling. The technology is based on a passive optoelectronic system and incorporates the logic of automated observation of the refueling process, using algorithms for automatic detection and tracking of the drogue-sensor, a methodical apparatus for suboptimal linear filtering of the observed process under spatial and temporal non-stationarity, and algorithms for automatic correlation detection and tracking of the drogue-sensor based on suboptimal filtering. An analysis of the design of experimental foreign systems for autonomous aerial refueling is carried out, and the choice of algorithm for the synthetic vision system is substantiated. It is established that the main observation procedures (detection, capture for tracking, and determination of the current drogue coordinates at a given rate and quality) should be performed automatically; the pilot-operator intervenes in the operation of the synthetic vision system only in case of capture errors or loss of tracking. The problem statement for automated VBO of a drogue-sensor is formulated, and a structural-logical diagram of the automated observation process is proposed, covering drogue detection and tracking as well as pilot decision-making in various situations. A modeling complex for synthetic vision system operation and the results of experimental studies of the system's efficiency are presented. Based on the developed technology and the evaluation of the automated observation algorithms, a strategy for performing autonomous refueling under various turbulence conditions is proposed: during weak turbulence, a successful engagement is achieved by tracking the center of drogue oscillations, whereas under severe turbulence, a successful engagement can be achieved by tracking a drogue controlled according to the synthetic vision system data.
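A sketch of the automatic correlation tracking step using normalised cross-correlation template matching in OpenCV; the suboptimal linear filtering and operator hand-off logic described in the abstract are omitted:

```python
# Sketch only: template matching as a stand-in for correlation tracking.
import cv2
import numpy as np

def track_drogue(frame: np.ndarray, template: np.ndarray):
    """Return the top-left corner and score of the best template match."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)
    return loc, score  # hand back to the operator if score drops too low

# Example usage (file names are hypothetical):
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# template = cv2.imread("drogue.png", cv2.IMREAD_GRAYSCALE)
# (x, y), score = track_drogue(frame, template)
```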
APA, Harvard, Vancouver, ISO, and other styles
41

Wawer, Mathias J., David E. Jaramillo, Vlado Dančík, Daniel M. Fass, Stephen J. Haggarty, Alykhan F. Shamji, Bridget K. Wagner, Stuart L. Schreiber, and Paul A. Clemons. "Automated Structure–Activity Relationship Mining." Journal of Biomolecular Screening 19, no. 5 (April 7, 2014): 738–48. http://dx.doi.org/10.1177/1087057114530783.

Full text
Abstract:
Understanding the structure–activity relationships (SARs) of small molecules is important for developing probes and novel therapeutic agents in chemical biology and drug discovery. Increasingly, multiplexed small-molecule profiling assays allow simultaneous measurement of many biological response parameters for the same compound (e.g., expression levels for many genes or binding constants against many proteins). Although such methods promise to capture SARs with high granularity, few computational methods are available to support SAR analyses of high-dimensional compound activity profiles, and those that exist are often not generally applicable or reduce the activity space to scalar summary statistics before establishing SARs. In this article, we present a versatile computational method that automatically extracts interpretable SAR rules from high-dimensional profiling data. The rules connect chemical structural features of compounds to patterns in their biological activity profiles. We applied our method to data from novel cell-based gene-expression and imaging assays collected on more than 30,000 small molecules. Based on the rules identified for this data set, we prioritized groups of compounds for further study, including a novel set of putative histone deacetylase inhibitors.
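A toy sketch of one way to relate a structural feature to an activity-profile pattern, comparing mean profiles of compounds with and without a given Morgan-fingerprint bit (RDKit); the published rule-mining method is considerably more general:

```python
# Sketch only: the fingerprint radius, bit width, and the idea of a simple
# mean-profile contrast are illustrative choices, not the paper's method.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def bit_effect(smiles: list[str], profiles: np.ndarray, bit: int):
    """Mean profile difference between compounds with and without `bit`.

    Assumes both groups are non-empty; profiles is (n_compounds, n_readouts).
    """
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in smiles]
    has_bit = np.array([fp.GetBit(bit) for fp in fps])
    return profiles[has_bit].mean(axis=0) - profiles[~has_bit].mean(axis=0)
```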
APA, Harvard, Vancouver, ISO, and other styles
42

Ngoc Anh, Bui, Ngo Tung Son, Phan Truong Lam, Le Phuong Chi, Nguyen Huu Tuan, Nguyen Cong Dat, Nguyen Huu Trung, Muhammad Umar Aftab, and Tran Van Dinh. "A Computer-Vision Based Application for Student Behavior Monitoring in Classroom." Applied Sciences 9, no. 22 (November 6, 2019): 4729. http://dx.doi.org/10.3390/app9224729.

Full text
Abstract:
Automated learning analytics is becoming an essential topic in the educational area, which needs effective systems to monitor the learning process and provide feedback to the teacher. Recent advances in visual sensors and computer vision methods enable automated monitoring of the behavior and affective states of learners at different levels, from university to pre-school. The objective of this research was to build an automatic system that allowed faculty to capture and summarize student behaviors in the classroom as part of data acquisition for the decision-making process. The system records the entire session, identifies when students pay attention in the classroom, and then reports to the faculty. Our design and experiments show that our system is more flexible and more accurate than previously published work.
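A toy sketch of one ingredient such a system might use: per-frame frontal-face counting as a crude proxy for attention, via OpenCV's bundled Haar cascade (the published behavior model is substantially more sophisticated):

```python
# Sketch only: face counting as a rough attention proxy.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def faces_toward_camera(frame) -> int:
    """Count frontal faces, i.e. students roughly facing the board/camera."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))
```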
APA, Harvard, Vancouver, ISO, and other styles
43

Carlson, Jordan A., J. Aaron Hipp, Jacqueline Kerr, Todd S. Horowitz, and David Berrigan. "Unique Views on Obesity-Related Behaviors and Environments: Research Using Still and Video Images." Journal for the Measurement of Physical Behaviour 1, no. 3 (September 1, 2018): 143–54. http://dx.doi.org/10.1123/jmpb.2018-0021.

Full text
Abstract:
Objectives: To document challenges to and benefits from research involving the use of images by capturing examples of such research to assess physical activity– or nutrition-related behaviors and/or environments. Methods: Researchers (i.e., key informants) using image capture in their research were identified through the knowledge and networks of the authors of this paper and through literature search. Twenty-nine key informants completed a survey, developed specifically for this study, covering the type of research, source of images, and challenges and benefits experienced. Results: Most respondents used still images in their research, with only 26.7% using video. Image sources were categorized as participant generated (n = 13; e.g., participants using smartphones for dietary assessment), researcher generated (n = 10; e.g., wearable cameras with automatic image capture), or curated from third parties (n = 7; e.g., Google Street View). Two major challenges that emerged were the need for automated processing of large datasets (58.8%) and participant recruitment/compliance (41.2%). Benefit-related themes included greater perspectives on obesity with increased data coverage (34.6%) and improved accuracy of behavior and environment assessment (34.6%). Conclusions: Technological advances will support the increased use of images in the assessment of physical activity, nutrition behaviors, and environments. To advance this area of research, more effective collaborations are needed between health and computer scientists. In particular, development of automated data extraction methods for diverse aspects of behavior, environment, and food characteristics is needed. Additionally, progress on standards for addressing ethical issues related to image capture for research purposes is critical.
APA, Harvard, Vancouver, ISO, and other styles
44

Grissom, Thomas E., Andrew DuKatz, Hubert A. Kordylewski, and Richard P. Dutton. "Bring Out Your Data." International Journal of Computational Models and Algorithms in Medicine 2, no. 2 (April 2011): 51–69. http://dx.doi.org/10.4018/jcmam.2011040104.

Full text
Abstract:
Recent healthcare legislation, financial pressures, and regulatory oversight have increased the need for improved mechanisms for performance measurement, quality management tracking, and outcomes-based research. The Anesthesia Quality Institute (AQI) has established the National Anesthesia Clinical Outcomes Registry (NACOR) to support these requirements for a wide range of customers, including individual anesthesiologists, anesthesia practices, hospitals, and credentialing agencies. Concurrently, the increased availability of digital sources of healthcare data makes it possible to capture massive quantities of data more efficiently and cost-effectively than ever before. With NACOR, AQI has established a user-friendly, automated process to effectively and efficiently collect a wide range of anesthesia-related data directly from anesthesia practices. This review examines the issues guiding the evolution of NACOR as well as some potential pitfalls in its growth and usage.
APA, Harvard, Vancouver, ISO, and other styles
45

Henriksen, Lars, Per Andresen, Brian Lauritzen, Kåre Jensen, Trine Juhl, Mikael Tranholm, and Peter Johansen. "Automated registration of tail bleeding in rats." Thrombosis and Haemostasis 99, no. 11 (2008): 956–62. http://dx.doi.org/10.1160/th07-12-0738.

Full text
Abstract:
Summary: An automated system for registration of tail bleeding in rats, using a camera and user-designed PC-based software, has been developed. The live and processed images are displayed on screen and are exported together with a text file for later statistical processing of the data, allowing calculation of, e.g., the number of bleeding episodes, bleeding times, and bleeding areas. Proof-of-principle was achieved when the camera captured the blood stream after infusion of rat whole blood into saline. Suitability was assessed by recording bleeding profiles in heparin-treated rats, demonstrating that the system was able to capture on/off bleedings and that data transfer and analysis were conducted successfully. Bleeding profiles were then recorded visually by two independent observers simultaneously with the automated recordings after tail transection in untreated rats. Linear relationships were found in the number of bleedings, demonstrating, however, a statistically significant difference in the recording of bleeding episodes between observers. The bleeding time was also longer for visual compared with automated recording. No correlation was found between blood loss and bleeding time in untreated rats, but in heparinized rats a correlation was suggested. Finally, blood loss correlated with the automated recording of bleeding area. In conclusion, the automated system has proven suitable for replacing visual recording of tail bleedings in rats. Inter-observer differences can be eliminated, monotonous repetitive work avoided, and a higher throughput of animals achieved in less time. The automated system will lead to an increased understanding of the nature of bleeding following tail transection in different rodent models.
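A sketch of the core image-processing step such a system plausibly relies on: segmenting blood-coloured pixels in a frame and reporting the bleeding area. The HSV thresholds below are illustrative guesses, not the authors' calibration:

```python
# Sketch only: red-pixel segmentation as a proxy for detecting bleeding.
import cv2
import numpy as np

def bleeding_area(frame_bgr: np.ndarray) -> int:
    """Return the number of blood-coloured pixels in one video frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two bands
    lo = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    hi = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    return int(cv2.countNonZero(lo | hi))  # nonzero area -> a bleeding episode
```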
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Yuan, Linda F. Fried, Tassos C. Kyriakides, Gary R. Johnson, Susannah Chiu, Linda Mcdonald, and Jane H. Zhang. "Automated safety event monitoring using electronic medical records in a clinical trial setting: Validation study using the VA NEPHRON-D trial." Clinical Trials 16, no. 1 (November 16, 2018): 81–89. http://dx.doi.org/10.1177/1740774518813121.

Full text
Abstract:
Background/Aims: Electronic medical records are now frequently used for capturing patient-level data in clinical trials. Within the Veterans Affairs health care system, electronic medical record data have been widely used in clinical trials to assess eligibility, facilitate referrals for recruitment, and conduct follow-up and safety monitoring. Despite the potential for increased efficiency in using electronic medical records to capture safety data via a centralized algorithm, it is important to evaluate the integrity and accuracy of electronic medical record–captured data. To this end, this investigation assesses data collection, both for general and study-specific safety endpoints, by comparing electronic medical record–based safety monitoring with safety data collected during the course of the Veterans Affairs Nephropathy in Diabetes (VA NEPHRON-D) clinical trial.
Methods: The VA NEPHRON-D study was a multicenter, double-blind, randomized clinical trial designed to compare the effect of combination therapy (losartan plus lisinopril) versus monotherapy (losartan) on the progression of kidney disease in individuals with diabetes and proteinuria. The trial's safety outcomes included serious adverse events, hyperkalemia, and acute kidney injury. A subset of the participants (~62%, n = 895) enrolled in the trial's long-term follow-up sub-study and consented to electronic medical record data collection. We applied an automated algorithm to search and capture safety data using the VA Corporate Data Warehouse, which houses electronic medical record data. Using study safety data reported during the trial as the gold standard, we evaluated the sensitivity and precision of electronic medical record–based safety data and related treatment effects.
Results: The sensitivity of the electronic medical record–based search for hospitalizations was 65.3% without non-VA hospitalization events and 92.3% with non-VA hospitalization events included. The sensitivity was only 54.3% for acute kidney injury and 87.3% for hyperkalemia. The precision of electronic medical record–based safety data was 89.4%, 38%, and 63.2% for hospitalization, acute kidney injury, and hyperkalemia, respectively. Relative treatment differences under the study and electronic medical record settings were 15% and 3% for hospitalization, 123% and 29% for acute kidney injury, and 238% and 140% for hyperkalemia, respectively.
Conclusion: The accuracy of automated electronic medical record safety data depends on the events of interest. Identification of all-cause hospitalizations would be reliable if search methods could, in addition to VA hospitalizations, also capture non-VA hospitalizations. However, hospitalization is different from a cause-specific serious adverse event, which could be more sensitive to treatment effects. In addition, some study-specific safety events were not easily identified using the electronic medical records, which limits the effectiveness of an automated central database search for safety monitoring. Hence, this data capture approach should be carefully considered when implementing endpoint data collection in future pragmatic trials.
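The validation arithmetic reported above (sensitivity and precision against the trial's gold standard) reduces to simple set comparisons; a sketch, with event keys invented for illustration:

```python
# Sketch only: compare algorithm-captured events with the gold-standard list.
def validate(captured: set, gold: set) -> tuple[float, float]:
    tp = len(captured & gold)                      # events found by both
    sensitivity = tp / len(gold) if gold else 0.0
    precision = tp / len(captured) if captured else 0.0
    return sensitivity, precision

# e.g. hospitalisations keyed by (patient_id, admission_date)
sens, prec = validate({("p1", "2015-03-02"), ("p2", "2016-07-11")},
                      {("p1", "2015-03-02"), ("p3", "2014-01-09")})
print(f"sensitivity={sens:.1%} precision={prec:.1%}")   # 50.0% / 50.0%
```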
APA, Harvard, Vancouver, ISO, and other styles
47

McGeorge, Nicolette M., Susan Latiff, Christopher Muller, Lucas Dong, Ceara Chewning, Daniela Friedson-Trujillo, and Stephanie Kane. "Design and Development of a Prototype Heads-Up Display: Supporting Context-Aware, Semi-Automated, Hands-Free Medical Documentation." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 10, no. 1 (June 2021): 18–22. http://dx.doi.org/10.1177/2327857921101066.

Full text
Abstract:
Military and civilian medical personnel across all echelons of medical care play a critical role in evaluating, caring for, and treating casualties. Accurate medical documentation is critical to effective, coordinated care and positive patient outcomes. We describe our prototype, Context-Aware Procedure Support Tools and User Interfaces for Rapid and Effective Workflows (CAPTURE). Leveraging human factors and user-centered design methods, along with advanced artificial intelligence and computer vision capabilities, CAPTURE was designed to enable Tactical Combat Casualty Care (TCCC) providers to input critical medical information more efficiently and effectively through hands-free interaction techniques and semi-automated data capture methods. We designed and prototyped a heads-up display that incorporates: multimodal interfaces, including augmented reality-based methods for input and information display to support visual image capture and heads-up interaction; post-care documentation support (e.g., artifacts to support post-care review and documentation); context-aware active and passive data capture methods, specifically natural language interpretation using systemic functional grammars; and computer vision technologies for semi-automated data capture capabilities. During the course of this project we encountered challenges to effective design which fall into three main categories: (1) challenges related to designing novel multimodal interfaces; (2) technical challenges related to software and hardware development to meet design needs; and (3) challenges arising from domain characteristics and operational constraints. We discuss how we addressed some of these challenges and provide additional considerations for future research on next-generation technology design for medical documentation in the field.
APA, Harvard, Vancouver, ISO, and other styles
48

Shah, Neha, Osama Ummer, Kerry Scott, Jean Juste Harrisson Bashingwa, Nehru Penugonda, Arpita Chakraborty, Agrima Sahore, Diwakar Mohan, and Amnesty Elizabeth LeFevre. "SMS feedback system as a quality assurance mechanism: experience from a household survey in rural India." BMJ Global Health 6, Suppl 5 (July 2021): e005287. http://dx.doi.org/10.1136/bmjgh-2021-005287.

Full text
Abstract:
The increasing use of digital health solutions to support data capture, both as part of routine delivery of health services and through special surveys, presents unique opportunities to enhance quality assurance measures. This study aims to demonstrate the feasibility and acceptability of using back-end data analytics and machine learning to identify impediments in data quality and to feed issues requiring follow-up back to field teams via automated short messaging service (SMS) text messages. Data were collected as part of a postpartum women's survey (n=5095) in four districts of Madhya Pradesh, India, from October 2019 to February 2020. SMSs on common errors found in the data were sent to supervisors and coordinators. Before/after differences in time to correction of errors were examined, and qualitative interviews were conducted with supervisors, coordinators, and enumerators. Study activities resulted in declines in the average number of errors per week after the implementation of automated feedback loops. Supervisors and coordinators found the direct format, complete information, and automated nature of the feedback convenient to work with and valued the more rapid notification of errors. However, coordinators and supervisors reported preferring group WhatsApp messages to individual SMSs sent to each supervisor/coordinator. In contrast, enumerators preferred the SMS system over in-person group meetings where data quality impediments were discussed. This study demonstrates that automated SMS feedback loops can be used to enhance survey data quality at minimal cost. Testing is needed among data capture applications in use by frontline health workers in India and elsewhere globally.
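A minimal sketch of an automated feedback loop of this kind: rule checks over the day's records, with flagged errors messaged to the responsible supervisor. The checks and the send_sms stub are invented placeholders, not the study's system:

```python
# Sketch only: rules, field names, and the SMS stub are hypothetical.
def checks(record: dict) -> list[str]:
    errors = []
    if not 15 <= record.get("mother_age", 0) <= 49:
        errors.append("mother_age out of range")
    if record.get("interview_end", 0) <= record.get("interview_start", 0):
        errors.append("interview end before start")
    return errors

def send_sms(phone: str, text: str):
    print(f"SMS to {phone}: {text}")  # replace with a real SMS gateway client

for rec in [{"id": "R-0042", "mother_age": 12,
             "interview_start": 2, "interview_end": 1,
             "supervisor_phone": "+91XXXXXXXXXX"}]:
    for err in checks(rec):
        send_sms(rec["supervisor_phone"], f"Record {rec['id']}: {err}")
```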
APA, Harvard, Vancouver, ISO, and other styles
49

Pérez, José Javier, María Senderos, Amaia Casado, and Iñigo Leon. "Field Work’s Optimization for the Digital Capture of Large University Campuses, Combining Various Techniques of Massive Point Capture." Buildings 12, no. 3 (March 18, 2022): 380. http://dx.doi.org/10.3390/buildings12030380.

Full text
Abstract:
The aim of the study is to achieve fast digitalization of large urban settings. Data from two university campuses in two cities in northern Spain were captured. Challenges were imposed by the lockdown caused by the COVID-19 pandemic, which limited mobility and affected field work for data readings. The idea was to significantly reduce time spent in the field, using a number of resources and increasing efficiency as economically as possible. The research design is based on the Design Science Research (DSR) concept as a methodological approach to designing the solutions generated by means of 3D models. The digitalization of the campuses is based on the analysis, evolution, and optimization of LiDAR ALS point clouds captured by government bodies, which are open access and free. Additional TLS capture techniques were used to complement the clouds, supported by a study of UAV-assisted automated photogrammetric techniques. The results show that with point clouds overlapped with 360 images, produced with a combination of resources and techniques, it was possible to reduce on-site working time by more than two-thirds.
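A sketch of one combination step the abstract implies, registering a dense TLS scan into the free ALS cloud with ICP via Open3D; file names and the correspondence distance are illustrative only:

```python
# Sketch only: ICP alignment of a TLS scan into an ALS reference cloud.
import numpy as np
import open3d as o3d

als = o3d.io.read_point_cloud("campus_als.ply")   # government LiDAR, coarse
tls = o3d.io.read_point_cloud("facade_tls.ply")   # terrestrial scan, dense

result = o3d.pipelines.registration.registration_icp(
    tls, als, max_correspondence_distance=0.5, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
tls.transform(result.transformation)              # align TLS into the ALS frame
merged = als + tls                                # combined cloud for modelling
```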
APA, Harvard, Vancouver, ISO, and other styles
50

Flynn, Robert W. V., Thomas M. Macdonald, Nicola Schembri, Gordon D. Murray, and Alexander S. F. Doney. "Automated data capture from free-text radiology reports to enhance accuracy of hospital inpatient stroke codes." Pharmacoepidemiology and Drug Safety 19, no. 8 (July 2, 2010): 843–47. http://dx.doi.org/10.1002/pds.1981.

Full text
APA, Harvard, Vancouver, ISO, and other styles