Academic literature on the topic 'Manual data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Manual data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Manual data processing"

1

Melucci, Dora. "Manual Data Processing in Analytical Chemistry: Linear Calibration." Journal of Chemical Education 85, no. 10 (October 2008): 1346. http://dx.doi.org/10.1021/ed085p1346.

2

Riestiawan and Indah Ariyati. "Komparasi Pengolahan Data Keuangan Manual Dengan Pengolahan Data Keuangan Menggunakan Zahir Accounting Versi 5.1." Journal of Students' Research in Computer Science 3, no. 1 (May 30, 2022): 89–98. http://dx.doi.org/10.31599/jsrcs.v3i1.1478.

Abstract:
The development of information technology has penetrated all areas of life in all parts of the world, triggered by the increasing complexity of business activities and the growing need for financial information. Financial data at CV. Akbar Motor is recorded manually on sheets of paper, so the recording still does not work optimally. In addition, manual data processing takes considerable time, labor, and cost, which leads to late financial statements. The researchers processed the financial data manually and then with Zahir Accounting Software Version 5.1 as a comparison. The research methods used to collect data consist of observation, interviews, and literature study. The comparison between manual processing of financial data and Zahir Accounting version 5.1 aims to determine the financial condition of the enterprise in a given period so that it can be used for fast and accurate decision making. Keywords: Comparative, Finance, Reports, Zahir
3

Zhong, Li Juan, and Jing Lin Tong. "Research of Automatic Camshaft Detection and Data Processing Technology." Applied Mechanics and Materials 42 (November 2010): 404–7. http://dx.doi.org/10.4028/www.scientific.net/amm.42.404.

Abstract:
Traditional optical-mechanical measurement instruments and manual data processing methods have been unable to meet the efficient and accurate detection requirements for camshafts. In this article, the principle of automatic camshaft detection technology and the structure of the control system are introduced. The data acquisition system, which is independent of the movement controller, the "all the sensitive" data processing method, and the evaluation method are then introduced in sequence.
4

Barfod, Adrian S., Léa Lévy, and Jakob Juul Larsen. "Automatic processing of time domain induced polarization data using supervised artificial neural networks." Geophysical Journal International 224, no. 1 (September 26, 2020): 312–25. http://dx.doi.org/10.1093/gji/ggaa460.

Abstract:
Processing of geophysical data is a time-consuming task involving many different steps. One approach for accelerating and automating processing of geophysical data is to look towards machine learning (ML). ML encompasses a wide range of tools, which can be used to automate complicated and/or tedious tasks. We present strategies for automating the processing of time-domain induced polarization (IP) data using ML. An IP data set from Grindsted in Denmark is used to investigate the applicability of neural networks for processing such data. The Grindsted data set consists of eight profiles, with approximately 2000 data curves per profile, on average. Each curve needs to be processed, which, using the manual approach, can take 1–2 hr per profile. Around 20 per cent of the curves were manually processed and used to train and validate an artificial neural network. Once trained, the network could process all curves, in 6–15 s for each profile. The accuracy of the neural network, when considering the manual processing as a reference, is 90.8 per cent. At first, the network could not detect outlier curves, that is, where entire chargeability curves were significantly different from their spatial neighbours. Therefore, an outlier curve detection algorithm was developed and implemented to work in tandem with the network. The automatic processing approach developed here, involving the neural network and the outlier curve detection, leads to similar inversion results as the manual processing, with the two significant advantages of reduced processing times and enhanced processing consistency.
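The pipeline described above pairs a supervised classifier with a neighbour-based outlier check. The sketch below illustrates that general pattern only; it is not the authors' code, and the network size, feature layout (one row per decay curve) and the z-score outlier rule are assumptions.

```python
# Hedged sketch of the two-stage idea: a small neural network trained on the manually
# labelled subset classifies each decay curve, and a separate spatial check flags
# curves that differ strongly from their neighbours. Not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_curve_classifier(curves, labels):
    """curves: (n, n_gates) chargeability decay curves; labels: 1 = keep, 0 = reject."""
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    clf.fit(curves, labels)
    return clf

def flag_outlier_curves(curves, neighbour_idx, z_thresh=3.0):
    """Flag curves whose mean z-score against their spatial neighbours exceeds z_thresh."""
    flags = np.zeros(len(curves), dtype=bool)
    for i, nbrs in enumerate(neighbour_idx):
        ref = curves[nbrs].mean(axis=0)
        spread = curves[nbrs].std(axis=0) + 1e-9
        flags[i] = np.abs((curves[i] - ref) / spread).mean() > z_thresh
    return flags
```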
5

Altyntsev, Maxim, and Karkokli Hamid Majid Saber. "PECULIARITIES OF PRELIMINARY MOBILE LASER SCANNING DATA PROCESSING." Interexpo GEO-Siberia 1, no. 1 (2019): 239–48. http://dx.doi.org/10.33764/2618-981x-2019-1-1-239-248.

Abstract:
The goal of preliminary mobile laser scanning (MLS) data processing is to generate a unified point cloud in a required coordinate system. This processing includes calibration of 2D scanners and digital cameras, point cloud adjustment, and data filtering such as removal of noise and remirror points. A large amount of software has been developed for solving these tasks, but the degree of automation differs. Depending on the software and the type of scanned area, the preliminary MLS data processing technique can differ. The results of the carried-out scanning are analysed with the aim of revealing their peculiarities, determining the order of preliminary data processing, and deciding whether additional manual procedures are necessary.
6

Fang, Hai Yan, Guo Ping Zhang, Feng Gao, Xiao Ping Zhao, Peng Shen, and Shu Fang Wang. "Comparison of Auto and Manual Integration for Peptidomics Data Based on High Performance Liquid Chromatography Coupled with Mass Spectrometry." Advanced Materials Research 340 (September 2011): 266–72. http://dx.doi.org/10.4028/www.scientific.net/amr.340.266.

Abstract:
A growing number of publications have pointed out the need to develop data processing methods for peptidome profiling and analysis. Although some methods have been established, many of them focus on the development and application of automatic integration software. In this work, we compared automatic integration by software with manual integration for peptidomics data based on high performance liquid chromatography coupled with mass spectrometry (HPLC-MS). Two data processing procedures, automatic integration by XCMS and manual integration, were applied to peptidomics data based on HPLC-MS from blood samples of cerebral infarction and breast cancer patients, respectively. It was found that almost all peaks contained in the chromatograms could be picked out by XCMS, but the areas of these peaks differed greatly from those given by manual integration. Furthermore, the t-test (2-tailed) results of the two data processing procedures also differed, and different potential biomarkers were obtained. The results of this work provide a helpful reference for data processing in peptidomics research.
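As a rough illustration of the kind of comparison the abstract describes, the snippet below correlates peak areas from an automatic and a manual integration procedure and runs a two-tailed t-test between two patient groups. All numbers and variable names are hypothetical; this is not the study's code and it does not call XCMS (an R package).

```python
import numpy as np
from scipy import stats

# Hypothetical peak areas for the same peaks, integrated two ways
auto_areas = np.array([1.21e5, 9.8e4, 1.05e5, 1.33e5])
manual_areas = np.array([1.02e5, 8.7e4, 9.9e4, 1.18e5])
r = np.corrcoef(auto_areas, manual_areas)[0, 1]
print(f"agreement between auto and manual integration: r = {r:.3f}")

# Two-tailed t-test on one peak's areas across two (hypothetical) patient groups
group_a = np.array([1.21e5, 9.8e4, 1.05e5])
group_b = np.array([1.60e5, 1.52e5, 1.47e5])
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```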
7

MOSKALEV, P. Y. "A MODERN APPROACH TO DATA PROCESSING OF PAST YEARS." Neft i gaz 6, no. 120 (April 15, 2020): 52–58. http://dx.doi.org/10.37878/2708-0080/2020-5.038.

Abstract:
This paper provides a brief description of the process and the results of reprocessing old seismic data of CDPP 2D, worked out in Soviet times and rewritten from old media immediately before reprocessing. Experience has shown that it is possible to obtain material of good quality, suitable not only for structural, but also for dynamic interpretation, with a more expressive reflection of the structure of deep horizons. Considerable efforts, including the cost of manual labor, required for such reprocessing, can pay off in the end with a good result.
8

Ambarsari, Diah Ayu. "Sistem Informasi Pengolahan Data Nilai Siswa Berbasis Website Pada MTs Mishbahul Falah Batangan." Computer Science (CO-SCIENCE) 1, no. 1 (January 25, 2021): 44–52. http://dx.doi.org/10.31294/coscience.v1i1.190.

Abstract:
Madrasah Tsanawiyah Mishbahul Falah is an educational institution, but its grade-calculation process is still largely manual: the subject teacher provides a list of student scores to the principal, and the class teacher then copies the students' scores into report cards based on that data. Because the processing is still manual, the work takes longer and the error rate is higher. The purpose of this study is to build an application that facilitates grade processing, developed using the waterfall method. The initial stage of this research is data collection for analysis; the system analysis covers hardware, software, users, and technology, followed by system design, database design, interface design, system implementation, and system testing. The result of this study is a web-based student grade data processing application. The application can store school data, including teachers, classes, subjects, students, and academic years, and carry out the assessment process. This application is expected to solve the problems of teachers and the principal in processing student grade data.
9

Purwanto, Joko, and Renny Renny. "Perancangan Data Warehouse Rumah Sakit Berbasis Online Analytical Processing (OLAP)." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 5 (October 21, 2021): 1077. http://dx.doi.org/10.25126/jtiik.2021854232.

Abstract:
The use of information technology is very important for hospitals because it also affects the quality of health services, which are changed from manual to digital using information technology. In this study, the authors used the Nine-Step methodology as a reference in designing a data warehouse, modelled with a fact constellation schema with 3 fact tables and 11 dimension tables. The difference between this study and previous research is that the data source is extracted directly from the SIMRS database used by the hospital, so there is no manual data extraction. The aim of this research is to produce a data warehouse design based on Online Analytical Processing (OLAP) as a means of supporting the quality of hospital health services. The resulting OLAP will be a data warehouse design with various dimensions that produces information displays in the form of charts and graphs, so that the information is easy to read and understand by various parties.
10

Kostyukova, Elena Ivanovna, Victoria Samvelovna Germanova, and Alexander Vitalyevich Frolov. "Digitalization of accounting as a result of automated data processing." Buhuchet v sel'skom hozjajstve (Accounting in Agriculture), no. 10 (October 1, 2020): 24–31. http://dx.doi.org/10.33920/sel-11-2010-02.

Abstract:
Today, one of the priorities of international economic development is digitalization. It makes changes in all areas of our life, and the transformation of the economy, based on the drivers of information development, determines the importance of updating the information environment of the new economy, which directly affects accounting. Currently, accounting is in a phase of gradual development and introduction of new technologies. This aspect is particularly important in the context of the rapid development of information and communication technologies and global digitalization, especially when moving accounting into a new information environment, defining its boundaries and conceptual scope, and confirming the self-sufficiency of accounting as a type of socio-economic and management practice. New requirements for employees and their qualifications are also being introduced. The automated form of accounting differs from the manual form in the uniform execution of operations. When a computer processes similar accounting operations, the same commands are used, virtually eliminating random errors, and the potential for control by the management organization is increased, because computer programs provide the administration with a fairly wide range of tools for monitoring and evaluating the company's activities. Mistakes made by an accountant in the course of manual accounting are almost inevitable, since the human factor plays a large role, but an automated accounting form allows these errors to be recognized and corrected in a timely manner.

Dissertations / Theses on the topic "Manual data processing"

1

Einstein, Noah. "SmartHub: Manual Wheelchair Data Extraction and Processing Device." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555352793977171.

2

Keefer, Brenton Jan. "Effect of manual digitizing error on the accuracy and precision of polygon area and line length." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/45930.

Abstract:

Manual digitizing has been recognized by investigators as a significant source of map error in GIS, but the error characteristics have not been well defined. This thesis presents a methodology for simulating manual digitizing error. Stream mode digitizing error was modeled using autoregressive moving average (ARMA) procedures, and point mode digitizing was stochastically simulated using a uniform random model. These models were developed based on quantification of digitizing error collected from several operators. The resulting models were used to evaluate the effect digitizing error had upon polygon size and total line length at varying map accuracy standards.

Digitizing error produced no bias in polygon area. The standard deviation of polygon area doubled as the accuracy standard bandwidth doubled, but the standard deviation was always less than 1.6 percent of total area for stream mode digitizing. Smaller polygons (less than 10 square map inches) had more bias and more variance relative to their size than larger polygons. A doubling of the accuracy standard bandwidth caused a quadrupling of line length bias and a doubling to tripling of the line length standard deviation. For stream mode digitizing, reasonable digitizing standards produced line length biases of less than 2 percent of total length and standard deviations of less than 1 percent of total length. Bias and standard deviation both increased with increasing line length (or number of points), but the bias and standard deviation as a percent of total line length remained constant as feature size changed.
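A minimal sketch of the simulation approach described above is given below: stream-mode error is generated as ARMA noise and point-mode error as uniform noise, and the noise is added to digitised vertices. The ARMA coefficients and scales are placeholders, not the values estimated in the thesis.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

def stream_mode_error(n_points, scale=0.002):
    # ARMA(1,1) noise in x and y; coefficients are placeholders, not the fitted model
    proc = ArmaProcess(ar=[1.0, -0.6], ma=[1.0, 0.3])
    dx = proc.generate_sample(nsample=n_points)
    dy = proc.generate_sample(nsample=n_points)
    return scale * np.column_stack([dx, dy])

def point_mode_error(n_points, half_width=0.003):
    # Independent uniform error within an accuracy-standard band
    return np.random.uniform(-half_width, half_width, size=(n_points, 2))

true_line = np.column_stack([np.linspace(0, 10, 200), np.zeros(200)])
digitised_stream = true_line + stream_mode_error(200)
digitised_point = true_line + point_mode_error(200)
```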


Master of Science
3

Matsson, Erik, Gustav Dahllöf, and Julius Nilsson. "Business to Business - Electronic Invoice Processing : A report on the challenges, solutions and outcomes for companies switching from manual to electronic invoice handling." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-26793.

Abstract:
Electronic document handling was first used in the automotive industry in the early 1970s; electronic communication at the time relied on EDI connections (Hsieh, 2004). In the early 2000s, a new way of communicating electronic documents was introduced with the emergence of VAN operators (Hsieh, 2004). This technology for communicating electronic invoices has proven to be less complex for businesses than the previous EDI connections. VAN operators enable companies, regardless of size, ERP (Enterprise Resource Planning) system, formats, or transaction volume, to send and receive electronic invoices. The subject of electronic invoice handling has become increasingly debated, mainly because of legislation taking place all over Europe, as well as the environmental impact of business transactions being sent on paper. The objective of this thesis is to examine the challenges, solutions, and outcomes for companies switching to electronic invoice handling. The data collected for the thesis is divided into two parts. The first part consists of information retrieved from previous literature as well as internet sources. The second part concerns the case studies conducted for the thesis with respect to our research questions. For this reason, Scandinavian companies with different preconditions in terms of size, industry, transaction volume, and IT structure have been interviewed. The findings from the first and second parts have been analyzed and conclusions have been drawn; we suggest using a VAN operator, which has shown to be the most appropriate alternative for companies implementing electronic invoice handling. The result of this thesis can be used as a guideline for companies considering a switch from manual to electronic invoice handling.
4

Ward, Gary Ray. "Training the trainer: A manual for Kaiser Permanente educators who teach employees to use computer systems." CSUSB ScholarWorks, 1991. https://scholarworks.lib.csusb.edu/etd-project/758.

5

Meilinger, Manuel [Verfasser]. "Metal artifact reduction and image processing of cone-beam computed tomography data for mobile C-arm CT devices / Manuel Meilinger." Regensburg : Univ.-Verl. Regensburg, 2011. http://d-nb.info/1012922480/34.

6

Pike, Elizabeth G. "El Salvador : modern technology and poverty." Virtual Press, 2007. http://liblink.bsu.edu/uhtbin/catkey/1365522.

Abstract:
The school, Centro Escolar Dr. Manuel Parada Salgaco, in Santa Ana, El Salvador, has been given a second chance. The Organization of Developing States funded a Solar Panel Net Project for the school, allotting it 36 solar panels to power 10 computers and a high-speed satellite Internet connection on the school campus. I became curious to learn, through documented interviews, how much this computer lab has contributed to the education of the poor, rural children. In this particular situation it appears that the developed world and the developing world are no longer separated by lack of access to information. This exploration of computer and media use in impoverished areas explains and examines how the Internet and media are having positive effects on the students and teachers. This technology has opened the students' eyes to higher aspirations for themselves, as well as furthering their education. This documentary will hopefully be used as a tool to gain more funds, grants, and volunteers to help the children choose a life other than poverty. The principal, four students, four teachers, and Peace Corps volunteers were interviewed. All interviews were documented on videotape. It was clear after this process that access to Internet technology does have positive effects on young students. And although the documentary does not suggest too many negatives, I cover those in this paper. I also hope the documentary can be used to help the school and future schools become as fortunate.
Department of Telecommunications
7

Sheer, Paul. "A software assistant for manual stereo photometrology." Thesis, 1997. http://hdl.handle.net/10539/22434.

Abstract:
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Master of Science in Engineering.
A software package was written under the X Window System, Version 11, to assist in manual stereopsis of multiple views. The package enables multiple high resolution (2000 by 1500 pixels and higher) black and white photographs to be viewed simultaneously. Images have adjustable zoom windows which can be manipulated with the pointing device. The zoom windows enlarge to many times the resolution of the image, enabling sub-pixel measurements to be extrapolated by the operator. A user-friendly interface allows for fast pinhole camera calibration (from known 3D calibration points) and enables three dimensional lines, circles, grids, cylinders and planes to be fitted to markers specified by the user. These geometric objects are automatically rendered in 3D for comparison with the images. The camera calibration is performed using an iterative optimisation algorithm which also tries multiple combinations of omitted calibration points. This allows for some fault tolerance of the algorithm with respect to erroneous calibration points. Vector mathematics for the geometrical fits is derived. The calibration is shown to converge on a variety of photographs from actual plant surveys. In an artificial test on an array of constructed 3D coordinate markers, absolute accuracy was found to be 1 mm (standard deviation of the Euclidean error) at a distance of 2.5 metres from a standard 35 mm camera. This translates to an error of 1.6 pixels in the scanned views. Lens distortion was assumed to be negligible, except for aspect ratio distortion, which was calibrated for. Finally, to demonstrate the efficacy of the package, a 3D model was reconstructed from ten photographs of a human face, taken from different angles.

Books on the topic "Manual data processing"

1

Jenkins, George Henry. Data processing policies and procedures manual. Englewood Cliffs, N.J: Prentice Hall, 1991.

2

Kennelly, Maureen. Inverted echo sounder data processing manual. Narragansett, R.I: University of Rhode Island, Graduate School of Oceanography, 2007.

3

CDP review manual: A data processing handbook. 4th ed. New York: Van Nostrand Reinhold, 1986.

4

Montana Natural Resource Information System. Montana data directory user's manual. [Helena, MT]: Montana State Library, 1990.

5

Donovan, K. G. The info. tech. manual. Lancaster: Framework, 1987.

6

Caruso, Michael. Satellite data processing system (SDPS) users manual V1.0. Woods Hole, Mass: Woods Hole Oceanographic Institution, 1989.

7

Marshall, Ron. Graphing calculator manual for precalculus. Fort Worth: Saunders College Pub., 1997.

8

Bruyn, Gertrud. MARC coding manual. Waterloo, Ont: Ontario Library Services Center, 1989.

9

French, C. S. Computer studies: An instructional manual. 2nd ed. Eastleigh: D.P., 1986.

10

Lord, Kenniston W. ACP review manual: A data processing career begins : an examination review manual for the associate computer professional examination. New York: Van Nostrand Reinhold, 1990.


Book chapters on the topic "Manual data processing"

1

Li, Zhenlong, Zhipeng Gui, Barbara Hofer, Yan Li, Simon Scheider, and Shashi Shekhar. "Geospatial Information Processing Technologies." In Manual of Digital Earth, 191–227. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9915-3_6.

Abstract:
The increasing availability of geospatial data offers great opportunities for advancing scientific discovery and practices in society. Effective and efficient processing of geospatial data is essential for a wide range of Digital Earth applications such as climate change, natural hazard prediction and mitigation, and public health. However, the massive volume, heterogeneous, and distributed nature of global geospatial data pose challenges in geospatial information processing and computing. This chapter introduces three technologies for geospatial data processing: high-performance computing, online geoprocessing, and distributed geoprocessing, with each technology addressing one aspect of the challenges. The fundamental concepts, principles, and key techniques of the three technologies are elaborated in detail, followed by examples of applications and research directions in the context of Digital Earth. Lastly, a Digital Earth reference framework called discrete global grid system (DGGS) is discussed.
2

Li, Yun, Manzhu Yu, Mengchao Xu, Jingchao Yang, Dexuan Sha, Qian Liu, and Chaowei Yang. "Big Data and Cloud Computing." In Manual of Digital Earth, 325–55. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9915-3_9.

Abstract:
Big data emerged as a new paradigm to provide unprecedented content and value for Digital Earth. Big Earth data are increasing tremendously with growing heterogeneity, posing grand challenges for the data management lifecycle of storage, processing, analytics, visualization, sharing, and applications. During the same time frame, cloud computing emerged to provide crucial computing support to address these challenges. This chapter introduces Digital Earth data sources, analytical methods, and architecture for data analysis and describes how cloud computing supports big data processing in the context of Digital Earth.
3

Poirier, E. J., and D. R. Poirier. "Correlations and Data for Heat Transfer Coefficients." In Solutions Manual To accompany Transport Phenomena in Materials Processing, 129–57. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-65130-9_8.

4

Mohamed-Ghouse, Zaffar Sadiq, Cheryl Desha, and Luis Perez-Mora. "Digital Earth in Australia." In Manual of Digital Earth, 683–711. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9915-3_21.

Abstract:
Australia must overcome a number of challenges to meet the needs of our growing population in a time of increased climate variability. Fortunately, we have unprecedented access to data about our land and the built environment that is internationally regarded for its quality. Over the last two decades Australia has risen to the forefront in developing and implementing Digital Earth concepts, with several key national initiatives formalising our digital geospatial journey in digital globes, open data access and ensuring data quality. In particular, and in part driven by a lack of substantial resources in space, we have directed efforts towards world-leading innovation in big data processing and storage. This chapter highlights these geospatial initiatives, including use cases, lessons learned, and next steps for Australia. Initiatives addressed include the National Data Grid (NDG), the Queensland Globe, G20 Globe, NSW Live (formerly NSW Globe), Geoscape, the National Map, the Australian Geoscience Data Cube and Digital Earth Australia. We explore several use cases and conclude by considering lessons learned that are transferrable for our colleagues internationally. This includes challenges in: 1) Creating an active context for data use, 2) Capacity building beyond ‘show-and-tell’, and 3) Defining the job market and demand for the market.
5

Marconcini, Mattia, Thomas Esch, Felix Bachofer, and Annekatrin Metz-Marconcini. "Digital Earth in Europe." In Manual of Digital Earth, 647–81. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9915-3_20.

Abstract:
In recent years, with the advancements in technology and research as well as changes in society, Digital Earth transformed. It evolved from its original concept of a 3D multilayer representation of our planet into a more practical system design to fulfil the demand for information sharing, which now embraces fields such as global climate change, food security and natural disaster prevention. In this novel scenario, Europe has become one of the major players at the global level; accordingly, the goal of this chapter is to provide a general overview of the major European contributions to the overall objectives of Digital Earth. These include the establishment of a European spatial data infrastructure through the Infrastructure for Spatial Information in Europe (INSPIRE) directive, the initiation of the Galileo and Copernicus programs that provide a wealth of big data from space, the launch of novel cloud-based platforms for data processing and integration and the emergence of citizen science. An outlook on major upcoming initiatives is also provided.
6

Post, Ruben, Iris Beerepoot, Xixi Lu, Stijn Kas, Sebastiaan Wiewel, Angelique Koopman, and Hajo Reijers. "Active Anomaly Detection for Key Item Selection in Process Auditing." In Lecture Notes in Business Information Processing, 167–79. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_13.

Abstract:
Process mining allows auditors to retrieve crucial information about transactions by analysing the process data of a client. We propose an approach that supports the identification of unusual or unexpected transactions, also referred to as exceptions. These exceptions can be selected by auditors as “key items”, meaning the auditor wants to look further into the underlying documentation of the transaction. The approach encodes the traces, assigns an anomaly score to each trace, and uses the domain knowledge of auditors to update the assigned anomaly scores through active anomaly detection. The approach is evaluated with three groups of auditors over three cycles. The results of the evaluation indicate that the approach has the potential to support the decision-making process of auditors. Although auditors still need to make a manual selection of key items, they are able to better substantiate this selection. As such, our research can be seen as a step forward with respect to the usage of anomaly detection and data analysis in process auditing.
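As a loose illustration of that encode-score-update loop (not the algorithm used in the paper), the sketch below scores encoded traces with an Isolation Forest and then adjusts the scores with auditor feedback.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def initial_scores(trace_features):
    """trace_features: (n_traces, n_features) numeric encoding of each trace."""
    detector = IsolationForest(n_estimators=200, random_state=0).fit(trace_features)
    return -detector.score_samples(trace_features)  # higher = more anomalous

def update_scores(scores, auditor_feedback, weight=0.5):
    """auditor_feedback: {trace_index: +1 (confirmed exception) or -1 (normal)}."""
    adjusted = scores.copy()
    for idx, label in auditor_feedback.items():
        adjusted[idx] += weight * label
    return adjusted

X = np.random.rand(500, 8)                 # stand-in for encoded traces
scores = update_scores(initial_scores(X), {17: +1, 42: -1})
key_items = np.argsort(scores)[::-1][:10]  # top-scoring traces proposed as key items
```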
7

Bryant, Cody, Nicole Schoenstein, Susan Schuh, and David Meza. "Comparing Automated vs. Manual Data Analytic Processing of Long Duration International Space Station Post Mission Crew Feedback." In Advances in Intelligent Systems and Computing, 215–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93885-1_20.

8

Sallin, Marc, Martin Kropp, Craig Anslow, James W. Quilty, and Andreas Meier. "Measuring Software Delivery Performance Using the Four Key Metrics of DevOps." In Lecture Notes in Business Information Processing, 103–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78098-2_7.

Abstract:
The Four Key Metrics of DevOps have become very popular for measuring IT performance and DevOps adoption. However, the measurement of the four metrics (deployment frequency, lead time for change, time to restore service, and change failure rate) is often done manually and through surveys, with only a few data points. In this work we evaluated how the Four Key Metrics can be measured automatically and developed a prototype for the automatic measurement of the Four Key Metrics. We then evaluated whether the measurement is valuable for practitioners in a company. The analysis shows that the chosen measurement approach is both suitable and the results valuable for the team with respect to measuring and improving the software delivery performance.
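For readers unfamiliar with the four metrics, the sketch below computes them from a small list of deployment records; the record fields and values are assumptions for illustration, not the paper's tooling or data model.

```python
from datetime import datetime, timedelta

deployments = [  # hypothetical records pulled from a delivery pipeline
    {"committed": datetime(2024, 4, 29), "deployed": datetime(2024, 5, 1), "failed": False},
    {"committed": datetime(2024, 5, 2), "deployed": datetime(2024, 5, 3), "failed": True,
     "restored": datetime(2024, 5, 3, 4, 0)},
    {"committed": datetime(2024, 5, 6), "deployed": datetime(2024, 5, 7), "failed": False},
]

span = max(d["deployed"] for d in deployments) - min(d["deployed"] for d in deployments)
deployment_frequency = len(deployments) / max(span.days, 1)            # deploys per day
lead_time_for_change = sum((d["deployed"] - d["committed"] for d in deployments),
                           timedelta()) / len(deployments)
failed = [d for d in deployments if d["failed"]]
change_failure_rate = len(failed) / len(deployments)
time_to_restore = (sum((d["restored"] - d["deployed"] for d in failed), timedelta())
                   / len(failed)) if failed else timedelta(0)

print(deployment_frequency, lead_time_for_change, change_failure_rate, time_to_restore)
```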
9

Abdulrazak, Bessam, Suvrojoti Paul, Souhail Maraoui, Amin Rezaei, and Tianqi Xiao. "IoT Architecture with Plug and Play for Fast Deployment and System Reliability: AMI Platform." In Lecture Notes in Computer Science, 43–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09593-1_4.

Abstract:
The rapid advancement of the Internet of Things (IoT) has reshaped industrial systems, agricultural systems, healthcare systems, and even our daily livelihoods, as the number of IoT applications in these fields is surging. Still, numerous challenges arise when deploying such technology at large scale. In a system of millions of connected devices, operating each one of them manually is impossible, making IoT platforms unmaintainable. In this study, we present our attempt to achieve the autonomy of IoT infrastructure by building a platform that targets a dynamic and quick Plug and Play (PnP) deployment of the system at any given location, using predefined pipelines. The platform also supports real-time data processing, which enables users to have reliable and real-time data visualization in a dynamic dashboard.
10

Choiński, Mateusz, Mateusz Rogowski, Piotr Tynecki, Dries P. J. Kuijper, Marcin Churski, and Jakub W. Bubnicki. "A First Step Towards Automated Species Recognition from Camera Trap Images of Mammals Using AI in a European Temperate Forest." In Computer Information Systems and Industrial Management, 299–310. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84340-3_24.

Abstract:
Camera traps are used worldwide to monitor wildlife. Despite the increasing availability of Deep Learning (DL) models, the effective usage of this technology to support wildlife monitoring is limited. This is mainly due to the complexity of DL technology and high computing requirements. This paper presents the implementation of the light-weight and state-of-the-art YOLOv5 architecture for automated labeling of camera trap images of mammals in the Białowieża Forest (BF), Poland. The camera trapping data were organized and harmonized using TRAPPER software, an open-source application for managing large-scale wildlife monitoring projects. The proposed image recognition pipeline achieved an average accuracy of 85% F1-score in the identification of the 12 most commonly occurring medium-size and large mammal species in BF, using a limited set of training and testing data (a total of 2659 images with animals). Based on the preliminary results, we have concluded that the YOLOv5 object detection and classification model is a fine and promising DL solution after the adoption of the transfer learning technique. It can be efficiently plugged in via an API into existing web-based camera trapping data processing platforms such as the TRAPPER system. Since TRAPPER is already used to manage and classify (manually) camera trapping datasets by many research groups in Europe, the implementation of AI-based automated species classification will significantly speed up the data processing workflow and thus better support data-driven wildlife monitoring and conservation. Moreover, YOLOv5 has been proven to perform well on edge devices, which may open a new chapter in animal population monitoring in real-time directly from camera trap devices.
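To give a concrete sense of what plugging in a YOLOv5 detector looks like, the sketch below loads a pretrained checkpoint through torch.hub and runs it on a single image. The COCO weights and the file name are stand-ins; the study fine-tuned YOLOv5 on its own labelled Białowieża Forest data via transfer learning.

```python
import torch

# Load a small pretrained YOLOv5 checkpoint (stand-in for the fine-tuned model)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run detection on one camera-trap frame (hypothetical local file)
results = model("camera_trap_frame.jpg")
detections = results.pandas().xyxy[0]      # bounding boxes, confidence, class name
print(detections[["name", "confidence"]])
```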

Conference papers on the topic "Manual data processing"

1

Hula, Jan, David Mojzisek, David Adamczyk, and Radek Cech. "Acquiring Custom OCR System with Minimal Manual Annotation." In 2020 IEEE Third International Conference on Data Stream Mining & Processing (DSMP). IEEE, 2020. http://dx.doi.org/10.1109/dsmp47368.2020.9204229.

2

Pasini, Tommaso, and Roberto Navigli. "Train-O-Matic: Large-Scale Supervised Word Sense Disambiguation in Multiple Languages without Manual Training Data." In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/d17-1008.

3

Oulman, Spencer. "Oil in Water Monitoring – Data Resolution." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22534-ea.

Abstract:
Water processing is a widespread challenge for oil production. In many areas of the world, access to water is becoming scarce and cost of water processing is increasing. With aging reservoirs, larger water cuts are expected. The oil production from wells becomes more challenging as larger percentages of water are recovered from production. Added chemistry and extra mechanical work on the fluids become necessary to recover the desired oil. Further downstream of the oil separation, additional work on produced water becomes necessary to further recover oil. Later in the process as the produced water is prepared for disposal or reuse, the trace oil is monitored for recovery to further increase production gains and avoid added costs for disposal or premature fouling of reservoirs in secondary and tertiary enhanced oil recovery operations. To control the process, oil in water concentration is monitored. The traditional method of obtaining oil in water concentrations is from manual grab samples. These samples are then analyzed predominantly by gravimetric or photometric methods in a lab. An alternate method for obtaining the process results uses online monitoring that is directly connected to the process for high frequency sampling and measurements. The use of oil in water online monitoring has been commercially available for a few decades. Online monitoring has been gaining traction as the cost of the technology becomes economically viable. The study will focus strictly on the resolution of data to show detail missed by traditional manual grab samples. This study and results will not focus on comparisons of accuracy of measurement, maintenance, and calibration requirements of online monitoring. An onshore producer in the US studied a production process using waterflood injection. Over the 15 days of study, manual grab samples and online sampling were performed. Process changes will be highlighted across the study, showcasing the importance of resolution.
4

Wu, P. K. K., J. Chin, C. Ng, and R. Tsui. "Digital Solutions to Improve Workflows of 3D Ground Modelling." In The HKIE Geotechnical Division 42nd Annual Seminar. AIJR Publisher, 2022. http://dx.doi.org/10.21467/proceedings.133.12.

Abstract:
3D ground modelling often starts with importing digitised ground investigation (GI) data into modelling software. This first step is vital if further ground interpretation is to produce meaningful results. Since the introduction of digitised GI data, any data obtained on site can be transferred electronically using the AGS format (*.AGS). To use digital GI data for this purpose, engineering geologists must manually clean up the data to suit the import format of the modelling software; otherwise, details will be lost and risks could be overlooked in the interpretation of the data. Aurecon has developed a new tool specifically to automate this manual process of restructuring AGS data, streamlining the process of 3D ground modelling. After an AGS file is processed by this tool, the likelihood of overlooking details or important information is greatly reduced. In our experience, the time saving between using this tool and manually processing digital data to build a 3D ground model is often more than 50%. This paper first discusses the challenges of 3D ground modelling from AGS data, followed by a discussion of the preferred data structure for ground modelling and the capabilities of the tool to overcome these challenges.
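The snippet below is a minimal sketch of the restructuring step, not Aurecon's tool: it assumes the standard AGS4 row layout, where the first field of each comma-separated line is GROUP, HEADING, UNIT, TYPE or DATA, and splits the file into per-group tables ready for import.

```python
import csv
from collections import defaultdict

def parse_ags(path):
    """Split an AGS4 file into {group_name: {'headings': [...], 'rows': [[...], ...]}}."""
    groups = defaultdict(lambda: {"headings": [], "rows": []})
    current = None
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.reader(fh):
            if not row:
                continue
            tag = row[0]
            if tag == "GROUP":
                current = row[1]
            elif tag == "HEADING" and current:
                groups[current]["headings"] = row[1:]
            elif tag == "DATA" and current:
                groups[current]["rows"].append(row[1:])
    return dict(groups)

gi_data = parse_ags("site_investigation.ags")   # hypothetical file name
print(gi_data["LOCA"]["headings"])              # e.g. the borehole location group
```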
5

Zhang, Wentai, Quan Chen, Can Koz, Liuyue Xie, Amit Regmi, Soji Yamakawa, Tomotake Furuhata, Kenji Shimada, and Levent Burak Kara. "Data Augmentation of Engineering Drawings for Data-Driven Component Segmentation." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-91043.

Abstract:
We present a new data generation method to facilitate an automatic machine interpretation of 2D engineering part drawings. While such drawings are a common medium for clients to encode design and manufacturing requirements, a lack of computer support to automatically interpret these drawings necessitates part manufacturers to resort to laborious manual approaches for interpretation which, in turn, severely limits processing capacity. Although recent advances in trainable computer vision methods may enable automatic machine interpretation, it remains challenging to apply such methods to engineering drawings due to a lack of labeled training data. As one step toward this challenge, we propose a constrained data synthesis method to generate an arbitrarily large set of synthetic training drawings using only a handful of labeled examples. Our method is based on the randomization of the dimension sets subject to two major constraints to ensure the validity of the synthetic drawings. The effectiveness of our method is demonstrated in the context of a binary component segmentation task with a proposed list of descriptors. An evaluation of several image segmentation methods trained on our synthetic dataset shows that our approach to new data generation can boost the segmentation accuracy and the generalizability of the machine learning models to unseen drawings.
6

Aleem, Sidra, Teerath Kumar, Suzanne Little, Malika Bendechache, Rob Brennan, and Kevin McGuinness. "Random Data Augmentation based Enhancement: A Generalized Enhancement Approach for Medical Datasets." In 24th Irish Machine Vision and Image Processing Conference. Irish Pattern Recognition and Classification Society, 2022. http://dx.doi.org/10.56541/fumf3414.

Abstract:
Over the years, the paradigm of medical image analysis has shifted from manual expertise to automated systems, often based on deep learning (DL). The performance of deep learning algorithms is highly dependent on data quality. This is particularly important in the medical domain, as medical data is very sensitive to quality and poor quality can lead to misdiagnosis. To improve diagnostic performance, research has been done both on complex DL architectures and on improving data quality using dataset-dependent static hyperparameters. However, performance is still constrained by data quality and by the overfitting of hyperparameters to a specific dataset. To overcome these issues, this paper proposes random data augmentation based enhancement. The main objective is to develop a generalized, data-independent and computationally efficient enhancement approach to improve medical data quality for DL. The quality is enhanced by improving the brightness and contrast of images. In contrast to the existing methods, our method generates enhancement hyperparameters randomly within a defined range, which makes it robust and prevents overfitting to a specific dataset. To evaluate the generalization of the proposed method, we use four medical datasets and compare its performance with state-of-the-art methods for both classification and segmentation tasks. For grayscale imagery, experiments have been performed with the COVID-19 chest X-ray and KiTS19 datasets, and for RGB imagery with the LC25000 dataset. Experimental results demonstrate that with the proposed enhancement methodology, DL architectures outperform other existing methods. Our code is publicly available at: https://github.com/aleemsidra/Augmentation-Based-Generalized-Enhancement.
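A minimal sketch of randomised brightness and contrast enhancement is shown below; the factor ranges are placeholders rather than the paper's settings, and the full method is available at the linked repository.

```python
import random
from PIL import Image, ImageEnhance

def random_enhance(path, brightness_range=(0.8, 1.3), contrast_range=(0.8, 1.3)):
    """Apply brightness and contrast factors drawn uniformly from the given ranges."""
    img = Image.open(path)
    img = ImageEnhance.Brightness(img).enhance(random.uniform(*brightness_range))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(*contrast_range))
    return img

enhanced = random_enhance("chest_xray.png")   # hypothetical input image
enhanced.save("chest_xray_enhanced.png")
```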
7

Dini, Said, Mohammad Khosrowjerdi, and James Aflaki. "Heat Pump Experiment With a Computer Interface for Control, Data Acquisition, and Analysis." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/cie-34408.

Abstract:
This paper describes an effective, but simple, technique using a computer interface for control, data acquisition, and processing of a heat pump laboratory experiment. A water-to-air heat pump that allows comfort cooling and heating from a single source is used as an experiment and will be incorporated in a Mechanical Engineering Laboratory Course. Presently, the source is the city water. Plans are in place to use a ground source that provides a relatively constant temperature water supply, as low as 45°F. This well-instrumented laboratory teaching equipment allows students to measure temperatures, pressures, flow rate, and power input and then calculate the coefficient of performance of the system and the efficiency of the compressor both manually and automatically. A self-contained Windows-based data collection and analysis system has been developed for automating all the manual functions of a WPH-J Series Water-to-Air Heat Pump from Heat Controller, Inc. This system uses a data acquisition board to read the voltage signals corresponding to 9 T-type thermocouples, three pressure gauges, and compressor supplied power. The data acquisition and control software written in Visual Basic 6 uses 32-bit libraries to control the operation mode, read the thermocouples’ voltages, water flow rate, compressor’s input and output pressure, and supplied power.
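The core calculation the students perform, the coefficient of performance from water-side measurements and compressor power, can be sketched as below; the numbers are hypothetical, not readings from the described rig.

```python
def coefficient_of_performance(flow_kg_s, t_in_c, t_out_c, power_w, cp_j_per_kg_k=4186.0):
    """COP = heat exchanged with the water / electrical input to the compressor."""
    q_water_w = flow_kg_s * cp_j_per_kg_k * abs(t_out_c - t_in_c)
    return q_water_w / power_w

# Hypothetical readings: 0.12 kg/s of water with a 10 C temperature change, 1.35 kW input
print(coefficient_of_performance(0.12, 12.0, 22.0, 1350.0))
```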
8

Swofford, Rodney W. "Bin Automation and Management." In ASME 2005 Citrus Engineering Conference. American Society of Mechanical Engineers, 2005. http://dx.doi.org/10.1115/cec2005-5104.

Abstract:
Existing fruit receiving and processing operations rely on both automated and manual data collection systems to generate and store fruit information. This information can be used to select fruit to use for a particular juice product or to optimize extraction systems. At the present time, the selection of fruit is mainly a manual process due to disconnected, dissimilar data systems and missing information. This paper describes the development of an integrated bin management system. This system uses existing customer data collection systems with new machinery and instrumentation to improve the accuracy of fruit selection and blending to optimize the extraction process. The following areas will be examined:
• A review of a conventional, manual bin system
• The need for an integrated data collection and selection system
• An overview of the intelligent bin system
• How the intelligent bin system collects and manages data
• An overview of the bin selection system
• Some future developments
Paper published with permission.
9

Hu, Brian, Evan Gunnell, and Yu Sun. "Smart Tab Predictor: A Chrome Extension to Assist Browser Task Management using Machine Learning and Data Analysis." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112318.

Abstract:
The outbreak of the COVID-19 pandemic has forced most schools and businesses to use digital learning and working. Many people have repetitive web browsing activities or encounter too many open tabs, which slows down their browsing. This paper presents a tab predictor application, a Chrome browser extension that uses Machine Learning (ML) to predict the next URL to open based on the time and frequency of current and previous tabs. Nowadays, AI technology has expanded into people's daily lives, as in self-driving cars and assistive robots. The ML module in our application is more basic and is built using Python and the Scikit-Learn (Sklearn) machine learning library. We use JavaScript and the Chrome API to collect the browser tab data and store it in a Firebase Cloud Firestore. The ML module then loads data from Firebase, trains datasets to adapt to a user's patterns, and predicts URLs to recommend opening new URLs. For Machine Learning, we compare three ML models and select the Random Forest Classifier. We also apply SMOTE (Synthetic Minority Oversampling Technique) to make the dataset more balanced, thus improving prediction accuracy. Both manual tests and cross-validation are performed to verify the predicted URLs. As a result, using the Smart Tab Predictor application will help students and business workers manage their web browser tabs more efficiently in their daily routine of online classes, online meetings, and other websites.
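A rough sketch of the training step (Random Forest plus SMOTE balancing) is shown below with made-up features and labels; the real extension encodes time and frequency of tab usage pulled from Firebase, which is omitted here. It assumes scikit-learn and imbalanced-learn are installed.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features [hour_of_day, day_of_week, current_tab_id] and next-tab labels
X = np.array([[9, 0, 3], [9, 0, 3], [14, 2, 1], [20, 4, 5], [9, 1, 3], [14, 2, 1], [20, 5, 5]])
y = np.array([4, 4, 2, 0, 4, 2, 0])

# Balance the classes before training, then fit the classifier
X_bal, y_bal = SMOTE(k_neighbors=1).fit_resample(X, y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)

print(clf.predict([[9, 0, 3]]))   # predicted next tab for a Monday-morning context
```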
10

Kim, Sunghan, Mingyu Kim, Jeongtae Lee, Jinhwi Pyo, Heeyoung Heo, Dongho Yun, and Kwanghee Ko. "Registration of 3D Point Clouds for Ship Block Measurement." In SNAME 5th World Maritime Technology Conference. SNAME, 2015. http://dx.doi.org/10.5957/wmtc-2015-252.

Abstract:
In this paper, a software system for registration of point clouds is developed. The system consists of two modules, one for registration and one for user interaction. The registration module contains functions for manual and automatic registration. The manual method allows a user to select feature points or planes from the point clouds manually. The selected planes or features are then processed to establish the correspondence between the point clouds, and registration is performed to obtain one large point cloud. The automatic registration uses sphere targets. Sphere targets are attached to an object of interest. A scanner measures the object as well as the targets to produce point clouds, from which the targets are extracted using shape-intrinsic properties. Correspondence between the point clouds is then obtained using the targets, and the registration is performed. The user interaction module provides a GUI environment which allows a user to navigate point clouds, compute various features, visualize point clouds, and select/unselect points interactively, together with a point-processing unit containing functions for filtering, estimation of geometric features, and various data structures for managing point clouds of large size. The developed system is tested with actual measurement data of various blocks in a shipyard.
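Once corresponding sphere-target centres have been extracted from two scans, the rigid alignment itself reduces to the classic SVD (Kabsch) solution. The sketch below shows only that step, with synthetic target centres, and is not the developed system.

```python
import numpy as np

def rigid_transform(src, dst):
    """Return R, t minimising ||R @ p + t - q|| over corresponding points (n, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic sphere-target centres: scan B is scan A rotated 90 degrees about z and shifted
centres_a = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
rot = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
centres_b = centres_a @ rot.T + np.array([5.0, 2.0, 0.0])
R, t = rigid_transform(centres_a, centres_b)
aligned_cloud = centres_a @ R.T + t   # apply the same transform to the full point cloud
```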

Reports on the topic "Manual data processing"

1

Kennelly, Maureen, Karen Tracey, and D. R. Watts. Inverted Echo Sounder Data Processing Manual. Fort Belvoir, VA: Defense Technical Information Center, June 2007. http://dx.doi.org/10.21236/ada477328.

2

Adams, Edward L. DESIM data manual: a procedural guide for developing equipment processing and down time data. Broomall, PA: U.S. Department of Agriculture, Forest Service, Northeastern Forest Experimental Station, 1985. http://dx.doi.org/10.2737/ne-gtr-102.

3

Neeley, Aimee, Stace E. Beaulieu, Chris Proctor, Ivona Cetinić, Joe Futrelle, Inia Soto Ramos, Heidi M. Sosik, et al. Standards and practices for reporting plankton and other particle observations from images. Woods Hole Oceanographic Institution, July 2021. http://dx.doi.org/10.1575/1912/27377.

Abstract:
This technical manual guides the user through the process of creating a data table for the submission of taxonomic and morphological information for plankton and other particles from images to a repository. Guidance is provided to produce documentation that should accompany the submission of plankton and other particle data to a repository, describes data collection and processing techniques, and outlines the creation of a data file. Field names include scientificName that represents the lowest level taxonomic classification (e.g., genus if not certain of species, family if not certain of genus) and scientificNameID, the unique identifier from a reference database such as the World Register of Marine Species or AlgaeBase. The data table described here includes the field names associatedMedia, scientificName/ scientificNameID for both automated and manual identification, biovolume, area_cross_section, length_representation and width_representation. Additional steps that instruct the user on how to format their data for a submission to the Ocean Biodiversity Information System (OBIS) are also included. Examples of documentation and data files are provided for the user to follow. The documentation requirements and data table format are approved by both NASA’s SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and the National Science Foundation’s Biological and Chemical Oceanography Data Management Office (BCO-DMO).
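As a toy illustration of the table the manual describes, the snippet below builds a single record with the listed field names and writes it to CSV. The exact column names for the automated versus manual identification fields, and all values (including the WoRMS-style identifiers), are placeholders, not prescribed by the manual.

```python
import pandas as pd

record = {
    "associatedMedia": "https://example.org/images/ifcb/D20210601T120000_00042.png",
    "scientificName_automated": "Thalassiosira",                                 # genus-level machine ID
    "scientificNameID_automated": "urn:lsid:marinespecies.org:taxname:0000000",  # placeholder LSID
    "scientificName_manual": "Thalassiosira nordenskioeldii",
    "scientificNameID_manual": "urn:lsid:marinespecies.org:taxname:0000001",     # placeholder LSID
    "biovolume": 5.2e3,              # cubic micrometres
    "area_cross_section": 310.0,     # square micrometres
    "length_representation": 28.5,   # micrometres
    "width_representation": 22.1,    # micrometres
}

pd.DataFrame([record]).to_csv("plankton_particle_table.csv", index=False)
```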
4

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data, was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the-shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data was analysed manually, and tests were conducted with automated processing methods. It was concluded that geophysical acoustic technologies cannot presently detect individual scallops, but the remote sensing technologies can be used for broad-scale habitat mapping of scallop harvest areas. Further, the techniques allow for monitoring these areas in terms of scallop dredging impact. Camera (video and still) imagery is effective for scallop counts and provides data that compares favourably with diver-based ground truth information for recording scallop density. Deployment of cameras is possible through inexpensive drop-down camera frames, which it is recommended be deployed on a wide-area basis for further trials. In addition, implementation of a ‘citizen science’ approach to wide-area recording is suggested to increase the stock assessment across the widest possible variety of seafloor types around Scotland. Armed with such data, a full statistical analysis could be completed and the data used with automated processing routines for future long-term monitoring of stock.
5

Lasko, Kristofer, and Sean Griffin. Monitoring Ecological Restoration with Imagery Tools (MERIT) : Python-based decision support tools integrated into ArcGIS for satellite and UAS image processing, analysis, and classification. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40262.

Abstract:
Monitoring the impacts of ecosystem restoration strategies requires both short-term and long-term land surface monitoring. The combined use of unmanned aerial systems (UAS) and satellite imagery enable effective landscape and natural resource management. However, processing, analyzing, and creating derivative imagery products can be time consuming, manually intensive, and cost prohibitive. In order to provide fast, accurate, and standardized UAS and satellite imagery processing, we have developed a suite of easy-to-use tools integrated into the graphical user interface (GUI) of ArcMap and ArcGIS Pro as well as open-source solutions using NodeOpenDroneMap. We built the Monitoring Ecological Restoration with Imagery Tools (MERIT) using Python and leveraging third-party libraries and open-source software capabilities typically unavailable within ArcGIS. MERIT will save US Army Corps of Engineers (USACE) districts significant time in data acquisition, processing, and analysis by allowing a user to move from image acquisition and preprocessing to a final output for decision-making with one application. Although we designed MERIT for use in wetlands research, many tools have regional or global relevancy for a variety of environmental monitoring initiatives.
6

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.