Journal articles on the topic 'Manual data processing'

To see the other types of publications on this topic, follow the link: Manual data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Manual data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Melucci, Dora. "Manual Data Processing in Analytical Chemistry: Linear Calibration." Journal of Chemical Education 85, no. 10 (October 2008): 1346. http://dx.doi.org/10.1021/ed085p1346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Riestiawan and Indah Ariyati. "Komparasi Pengolahan Data Keuangan Manual Dengan Pengolahan Data Keuangan Menggunakan Zahir Accounting Versi 5.1." Journal of Students‘ Research in Computer Science 3, no. 1 (May 30, 2022): 89–98. http://dx.doi.org/10.31599/jsrcs.v3i1.1478.

Full text
Abstract:
The development of information technology has penetrated all areas of life in all parts of the world, which is triggered by the increasing complexity of business activities and the increasing need for financial information. Processing of financial data on CV. Akbar Motor is recorded manually using sheets of paper so that the recording still does not work optimally. In addition, manual data processing is quite time-consuming, labor-intensive, and large costs which causes late financial statements. Researchers process financial data manually then continue using Zahir Accouting Software Version 5.1 as a comparison. The research methods used in collecting data consist of observations, interviews and literature studies. Comparison between manual processing of financial data with Zahir Accounting version 5.1. aims to find out the financial condition of an enterprise in a certain period in the hope that it can be used in decision making quickly and accurately. Keywords: Comparative, Finance, Reports, Zahir Abstrak Perkembangan teknologi informasi telah merambah di segala bidang kehidupan di seluruh belahan penjuru dunia, yang dipicu dengan meningkatnya kompleksitas kegiatan usaha dan meningkatnya kebutuhan akan informasi keuangan. Pengolahan data keuangan pada CV. Akbar Motor dicatat secara manual menggunakan lembaran–lembaran kertas sehingga pencatatan masih belum bekerja secara maksimal. Selain itu pengolahan data secara manual cukup memakan waktu, tenaga, dan biaya besar yang menyebabkan terlambatnya laporan keuangan. Peneliti melakukan pengolahan data keuangan secara manual kemudian dilanjutkan dengan menggunakan Software Zahir Accouting Versi 5.1 sebagai komparasinya. Metode penelitian yang di gunakan dalam mengumpulkan data terdiri dari pengamatan, wawancara dan studi pustaka. Komparasi antara pengolahan data keuangan secara manual dengan Zahir Accounting versi 5.1. bertujuan untuk mengetahui kondisi keuangan suatu perusahaan ini dalam periode tertentu dengan harapan dapat di gunakan dalam pengambilan keputusan secara cepat tepat dan akurat Kata kunci: Keuangan, Komparasi, Laporan, Zahir
APA, Harvard, Vancouver, ISO, and other styles
3

Zhong, Li Juan, and Jing Lin Tong. "Research of Automatic Camshaft Detection and Data Processing Technology." Applied Mechanics and Materials 42 (November 2010): 404–7. http://dx.doi.org/10.4028/www.scientific.net/amm.42.404.

Full text
Abstract:
The traditional optical-mechanical measurement instrument as well as manual data processing methods has unable to meet the efficient and accurate detection requirements for camshaft. In this article, the principle of automatic camshaft detection technology and the control system’s structure are introduced. The data acquisition system independent of movement controller and the data processing method of "all the sensitive" and evaluation method are introduced sequent.
APA, Harvard, Vancouver, ISO, and other styles
4

Barfod, Adrian S., Léa Lévy, and Jakob Juul Larsen. "Automatic processing of time domain induced polarization data using supervised artificial neural networks." Geophysical Journal International 224, no. 1 (September 26, 2020): 312–25. http://dx.doi.org/10.1093/gji/ggaa460.

Full text
Abstract:
SUMMARY Processing of geophysical data is a time consuming task involving many different steps. One approach for accelerating and automating processing of geophysical data is to look towards machine learning (ML). ML encompasses a wide range of tools, which can be used to automate complicated and/or tedious tasks. We present strategies for automating the processing of time-domain induced polarization (IP) data using ML. An IP data set from Grindsted in Denmark is used to investigate the applicability of neural networks for processing such data. The Grindsted data set consists of eight profiles, with approximately 2000 data curves per profile, on average. Each curve needs to be processed, which, using the manual approach, can take 1–2 hr per profile. Around 20 per cent of the curves were manually processed and used to train and validate an artificial neural network. Once trained, the network could process all curves, in 6–15 s for each profile. The accuracy of the neural network, when considering the manual processing as a reference, is 90.8 per cent. At first, the network could not detect outlier curves, that is where entire chargeability curves were significantly different from their spatial neighbours. Therefore, an outlier curve detection algorithm was developed and implemented to work in tandem with the network. The automatic processing approach developed here, involving the neural network and the outlier curve detection, leads to similar inversion results as the manual processing, with the two significant advantages of reduced processing times and enhanced processing consistency.
APA, Harvard, Vancouver, ISO, and other styles
5

Altyntsev, Maxim, and Karkokli Hamid Majid Saber. "PECULIARITIES OF PRELIMINARY MOBILE LASER SCANNING DATA PROCESSING." Interexpo GEO-Siberia 1, no. 1 (2019): 239–48. http://dx.doi.org/10.33764/2618-981x-2019-1-1-239-248.

Full text
Abstract:
The goal of preliminary mobile laser scanning (MLS) data processing is generating a unified point cloud in a required coordinate system. During this processing calibration of 2D scanners and digital cameras, point cloud adjustment, data filtering such as removal of noise and remirror points. Currently huge amount of software is developed for solving these tasks, but a degree of their auto-mation differs. Depending on software, type of scanned area preliminary MLS data processing technique can differ. The analysis of carried out scanning results with the task of revealing their pe-culiarities, determination of the preliminary data processing order and deciding about necessity to accept additional manual procedures.
APA, Harvard, Vancouver, ISO, and other styles
6

Fang, Hai Yan, Guo Ping Zhang, Feng Gao, Xiao Ping Zhao, Peng Shen, and Shu Fang Wang. "Comparison of Auto and Manual Integration for Peptidomics Data Based on High Performance Liquid Chromatography Coupled with Mass Spectrometry." Advanced Materials Research 340 (September 2011): 266–72. http://dx.doi.org/10.4028/www.scientific.net/amr.340.266.

Full text
Abstract:
A growing number of literatures appealed the necessity to develop methods of data processing for peptidome profiling and analysis. Although some methods had been established, many of them focused on the development and application of auto integration softwares. In this work, we paid attention to comparison of auto integration by software and manual integration for peptidomics data based on high performance liquid chromatography coupled with mass spectrometry (HPLC-MS). Two data processing procedures, auto integration by XCMS and manual integration, were applied in processing of peptidomics data based on HPLC-MS from cerebral infarction and breast cancer patients blood samples, respectively. And, it was found that almost all peaks contained in chromatograms could be picked out by XCMS, but the areas of these peaks were greatly different from those given by manual integration. Furthermore, t-test (2-tailed) results of the two data processing procedures were also different and different potential biomarkers were obtained. The results of this work will provide helpful reference for data processing of peptidomics research.
APA, Harvard, Vancouver, ISO, and other styles
7

MOSKALEV, P. Y. "A MODERN APPROACH TO DATA PROCESSING OF PAST YEARS." Neft i gaz 6, no. 120 (April 15, 2020): 52–58. http://dx.doi.org/10.37878/2708-0080/2020-5.038.

Full text
Abstract:
This paper provides a brief description of the process and the results of reprocessing old seismic data of CDPP 2D, worked out in Soviet times and rewritten from old media immediately before reprocessing. Experience has shown that it is possible to obtain material of good quality, suitable not only for structural, but also for dynamic interpretation, with a more expressive reflection of the structure of deep horizons. Considerable efforts, including the cost of manual labor, required for such reprocessing, can pay off in the end with a good result.
APA, Harvard, Vancouver, ISO, and other styles
8

Ambarsari, Diah Ayu. "Sistem Informasi Pengolahan Data Nilai Siswa Berbasis Website Pada MTs Mishbahul Falah Batangan." Computer Science (CO-SCIENCE) 1, no. 1 (January 25, 2021): 44–52. http://dx.doi.org/10.31294/coscience.v1i1.190.

Full text
Abstract:
Madrasah Tsanawiyah Mishbahul Falah is an educational institution, but in the process of calculating grades it is still classified as manual because technically the subject teacher provides a list of student scores to the principal, then the class teacher copies the student's scores into a report card based on the value data. Assessing that processing is still manual makes the longer work done, the greater the error rate in processing. The purpose of this study is to build applications that can facilitate value processing using the waterfall method. The initial stage in this research is to collect data for analysis, system analysis which will be used includes analysis of hardware, software, users, technology, system design, database, interface, system implementation, and system testing. The results of this study are Web-Based Student Value Data Processing Applications. Applications that are made can store school data, including teachers, classes, subjects, students, academic years, and carry out the assessment process. This application is expected to be able to solve the problems of the teacher or principal in the process of processing student value data.
APA, Harvard, Vancouver, ISO, and other styles
9

Purwanto, Joko, and Renny Renny. "Perancangan Data Warehouse Rumah Sakit Berbasis Online Analytical Processing (OLAP)." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 5 (October 21, 2021): 1077. http://dx.doi.org/10.25126/jtiik.2021854232.

Full text
Abstract:
<p class="BodyCxSpFirst">Pemanfaatan teknologi informasi sangat penting bagi rumah sakit, karena berpengaruh pula terhadap kualitas pelayanan kesehatan yang secara manual diubah menjadi digital dengan menggunakan teknologi informasi.Dalam penelitian ini penulis menggunakan metodologi <em>Nine step</em> sebagai acuan dalam merancang suatu <em>data warehouse</em><em>,</em> untuk pemodelan menggunakan skema konstelasi fakta dengan 3 tabel fakta dan 11 tabel dimensi. Perbedaan penelitian ini dengan penelitian sebelumnya terletak pada sumber data yang diekstrak langsung dari <em>database</em> SIMRS yang digunakan rumah sakit, sehingga tidak ada ekstraksi data secara manual.Penelitian ini bertujuan untuk menghasilkan desain data warehouse berbasis Online Analytical Processing (OLAP) sebagai sarana penunjang kualitas pelayanan kesehatan rumah sakit. OLAP yang dihasilkan akan berupa desain data warehouse dengan berbagai dimensi yang akan menghasilkan tampilan informasi berupa Chart maupun Grafik sehingga informasinya mudah dibaca dan dipahami oleh berbagai pihak.</p><p class="BodyCxSpFirst"> </p><p class="BodyCxSpFirst"><em><strong>Abtract</strong></em></p><p class="BodyCxSpFirst"><em>The use of information technology is very important for hospitals, because it also affects the quality of health services, which manualy changed to digital using information technology. In this study, the authors used the Nine step methodology as a reference in designing a data warehouse for modeling using a fact constellation schema with 3 fact tables and 11 dimension tables. the different in this study from previous research is that the data source was taken directly from the SIMRS database used by the hospital, so there is no manual data extraction.</em><em>The aim of this research is to be able to produce a Data Warehouse design based on Online Analytical Processing (OLAP) as a means of supporting the quality of hospital health services. The resulting OLAP will be a data warehouse design with various dimensions will produce the displays information in the form of a graph or chart so that the information is easy to read and understand by various parties.</em></p><p class="BodyCxSpLast"><em> </em></p><p class="BodyCxSpFirst"><em><strong><br /></strong></em></p>
APA, Harvard, Vancouver, ISO, and other styles
10

Kostyukova, Elena Ivanovna, Victoria Samvelovna Germanova, and Alexander Vitalyevich Frolov. "Digitalization of accounting as a result of automated data processing." Buhuchet v sel'skom hozjajstve (Accounting in Agriculture), no. 10 (October 1, 2020): 24–31. http://dx.doi.org/10.33920/sel-11-2010-02.

Full text
Abstract:
Today, one of the priorities of international economic development is digitalization. It makes changes in all areas of our life, and the transformation of the economy, based on the drivers of information development, determines the importance of updating the information environ- ment of the new economy, which directly affects accounting. Currently, accounting is in a phase of gradual development and introduction of new technologies. This aspect is particularly important in the context of the rapid development of information and communication technologies and global digitalization, especially when translating accounting into a new information environment, defining its boundaries, conceptual scope, and confirming the self-sufficiency of accounting as a type of socio-economic and management practice. New requirements for employees and their qualifications are also being introduced. The automated form of accounting differs from the manual form in the uniform execution of operations. When a computer processes similar accounting operations, the same commands are used, virtually eliminating the occurrence of random errors, and the potential for control by the management organization is increased. Because computer programs provide the administration with a fairly wide range of tools that allow you to monitor and evaluate the company’s activities. Some mistakes made by the accountant of the manual accounting course are almost inevitable, since it is the human factor that plays a big role in them, but the use of an automated accounting form allows you to recognize and correct these errors in a timely manner.
APA, Harvard, Vancouver, ISO, and other styles
11

Haryanto, Desi Anis Anggraini, Miftachul Ulum, and Achmad Fiqhi Ibadillah. "Image Processing Based Aquaponics Monitoring System." JEEE-U (Journal of Electrical and Electronic Engineering-UMSIDA) 5, no. 1 (March 28, 2021): 37–59. http://dx.doi.org/10.21070/jeeeu.v5i1.1220.

Full text
Abstract:
Aquaponics means a culture that is very necessary to be applied, because in this system it is a combination of fish farming techniques as well as plant enlargement techniques by hydroponics. This research develops a smart aquaponics system that can control and increase the acidity level, air temperature, fish feed, and the installation of a camera to monitor fish development. In this system, there are sensors installed to retrieve data. Thus, air quality and circulation is well maintained. The results obtained from this study are to test an automatic feed system that runs well for each experiment, with an accurate proportion of 93.33%, and PH measurements that have been calibrated run well, the comparison of manual measurements using the PH meter measurement sensor gets the proportion 97, 83. for the meter Flow measurement results obtained a proportion of 91.02%, then for plant development every week got pretty good results, in the first week the plants grew 1cm after sowing, 3cm for the 2nd week, 7cm for the second week. -3. The results of measuring the weight of fish using image processing are not much different from manual measurements, the length of the fish is measured manually, it is 7 cm, and in the image it is 5.6 cm, the weight of manual fish is 11g, in the image it is 11.66g. Keywords: Aquaponics; Camera; Android; image processing, flow. Abstrak. Akuaponik merupakan suatu budaya yang sangat disarankan untuk diterapkan, karena pada sistem ini berupa kombinasi dari teknik budidaya ikan sekaligus teknik pembesaran tanaman dengan cara hidroponik. Penelitian ini merancang sistem akuaponik pintar yang bisa mengendalikan dan pantau tingkat keasaman, pakan ikan, dan pemasangan kamera untuk memantau perkembangan ikan. Dalam sistem ini, ada sensor yang dipasang untuk mengambil data,. Dengan demikian, kualitas dan sirkulasi air terjaga dengan baik. Hasil yang didapat dari penelitian ini yaitu untuk pengujian sistem pakan otomatis berjalan dengan cukup baik, dengan persentase keberhasilan 93.33 %, untuk pengukuran PH yang sudah terkalibrasi berjalan dengan baik, perbandingan pengukuran manual dengan pengukuran menggunakan sensor PH meter mendapatkan persentase keberhasilan 97.83% untuk hasil pengukuran sensor Flow meter didapatkan persentase keberhasilan sebesar 91.02%, selanjutnya untuk perkembangan tanaman setiap minggu mendapatkan hasil yang cukup baik, pada minggu pertama tanaman diperkirakan tumbuh 1cm setelah penyemaian, 3 cm untuk minggu ke-2, 7cm untuk minggu ke-3. Pengukuran berat ikan menggunakan Image processing mendapatkan hasil yang tidak jauh berbeda dengan pengukuran secara manual, panjang ikan yang diukur secara manual yaitu 7 cm, dan secara image yaitu 5.6 cm, berat ikan manual 11g, secara image 11.66g
APA, Harvard, Vancouver, ISO, and other styles
12

Wu, Ke He, Wen Chao Cui, Bo Hao Cheng, and Qian Yuan Zhang. "Triangulation-Based Point Cloud Data Processing Technology Research and Application." Applied Mechanics and Materials 610 (August 2014): 729–33. http://dx.doi.org/10.4028/www.scientific.net/amm.610.729.

Full text
Abstract:
With the "Digital Earth" concept being put forward, people are starting to focus on geospatial information technology. Traditional manual building modeling process is gradually eliminated by history due to cumbersome and inefficient work. With massive data storage and processing technologies emerging and improving, people begin to explore building point cloud data measured by laser radar technology and to use point cloud data processing software for further building boundary extraction. In the model boundary extraction process, the use of prototype with the model fit is a good, clear and easy programming algorithm and triangulation algorithm.
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Hui, and Chunmei Liu. "A Dynamic Data-Driven Framework for Biological Data Using 2D Barcodes." Computational and Mathematical Methods in Medicine 2012 (2012): 1–5. http://dx.doi.org/10.1155/2012/892098.

Full text
Abstract:
Biology data is increasing exponentially from biological laboratories. It is a complicated problem for further processing the data. Processing computational data and data from biological laboratories manually may lead to potential errors in further analysis. In this paper, we proposed an efficient data-driven framework to inspect laboratory equipment and reduce impending failures. Our method takes advantage of the 2D barcode technology which can be installed on the specimen as a trigger for the data-driven system. For this end, we proposed a series of algorithms to speed up the data processing. The results show that the proposed system increases the system's scalability and flexibility. Also, it demonstrates the ability of linking a physical object with digital information to reduce the manual work related to experimental specimen. The characteristics such as high capacity of storage and data management of the 2D barcode technology provide a solution to collect experimental laboratory data in a quick and accurate fashion.
APA, Harvard, Vancouver, ISO, and other styles
14

Sheppard, Alison M., Karen J. Coles, Ann M. Fehily, and Ruth M. Holliday. "A direct data entry system for processing dietary records: comparison with manual coding." Journal of Human Nutrition and Dietetics 3, no. 3 (June 1990): 209–14. http://dx.doi.org/10.1111/j.1365-277x.1990.tb00238.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Suparni, Suparni, Lilyani Asri Utami, and Elsa Dwi Selviana. "Property Sales Data Processing Information System (SiPendar)." SinkrOn 3, no. 2 (March 16, 2019): 180. http://dx.doi.org/10.33395/sinkron.v3i2.10084.

Full text
Abstract:
PT. Pratama Mega Konstruksindo is one of the companies engaged in Property, especially Housing. One of the fields that requires technological progress is one of them is the property sector, the rapid development in the property sector is currently urging property service companies to meet the demands of the wider community. Implementation of work related to housing sales. In managing the data, this company still uses a manual system, starting from the recording and calculation aspects so that its performance has not been effective. At PT Pratama Mega Konstruksindo this still manages data using Ms Excel. As well as down payment, cash payments and consumer data are recorded using Ms Excel. This can cause errors in recording transactions, data security that is not guaranteed confidentiality, ineffective employees at work because it requires more time to input and make sales reports and even loss of data. Therefore, PT. Pratama Mega Konstruksindo requires a system that can solve the problem. This data processing system is designed web-based using the PHP and MySql programming languages as data storage databases. With the existence of this website, it can help processing sales data more effectively and efficiently, reports can be printed in realtime and data security can be maintained
APA, Harvard, Vancouver, ISO, and other styles
16

Sunardi, Sunardi, and Sofiansyah Fadli. "SISTEM INFORMASI PENGOLAHAN DATA KELAPA SAWIT BERBASIS CLIENT-SERVER." Jurnal Manajemen Informatika dan Sistem Informasi 1, no. 2 (August 30, 2018): 23. http://dx.doi.org/10.36595/misi.v1i2.44.

Full text
Abstract:
Currently the development of information technology increasingly widespread, this is in line with the development of computers that are increasingly rapidly. Various technologies can now be created so as to facilitate all human activity. Sophistication of information technology has now become one of the requirements that must be met in order to keep up with technology. Technology and information that can not be separated from one another. But now there are many companies that have not implemented such a system because they various things. One problem is the lack of knowledge about the company's computer system, resulting in the company using the manual method, for example in the process of data processing. In this case PT. Citra Riau Sarana as one of the companies engaged in plantations, especially in the buying and selling palm oil CPO (Crude Palm Oil), which is still occurring problems in the data processing is still manual, so experiencing difficulties and frequent errors in data processing. The method used is the method of data collection by observation, interviews, library research, design models described using data flow diagrams (DAD) and entity relationship diagram (ERD). This application was built using Delphi 7 and SQL Server 2005. Applications that use the Desktop and Client-Server based, hoped to use this application the Company may reduce the level of unwanted errors and speed up data processing.
APA, Harvard, Vancouver, ISO, and other styles
17

Ntsoko, T., and G. Sithole. "Enhancing Manual Scan Registration Using Audio Cues." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4 (April 23, 2014): 187–94. http://dx.doi.org/10.5194/isprsarchives-xl-4-187-2014.

Full text
Abstract:
Indoor mapping and modelling requires that acquired data be processed by editing, fusing, formatting the data, amongst other operations. Currently the manual interaction the user has with the point cloud (data) while processing it is visual. Visual interaction does have limitations, however. One way of dealing with these limitations is to augment audio in point cloud processing. Audio augmentation entails associating points of interest in the point cloud with audio objects. In coarse scan registration, reverberation, intensity and frequency audio cues were exploited to help the user estimate depth and occupancy of space of points of interest. Depth estimations were made reliably well when intensity and frequency were both used as depth cues. Coarse changes of depth could be estimated in this manner. The depth between surfaces can therefore be estimated with the aid of the audio objects. Sound reflections of an audio object provided reliable information of the object surroundings in some instances. For a point/area of interest in the point cloud, these reflections can be used to determine the unseen events around that point/area of interest. Other processing techniques could benefit from this while other information is estimated using other audio cues like binaural cues and Head Related Transfer Functions. These other cues could be used in position estimations of audio objects to aid in problems such as indoor navigation problems.
APA, Harvard, Vancouver, ISO, and other styles
18

Yu, Meng-Lin, and Meng-Han Tsai. "ACS: Construction Data Auto-Correction System—Taiwan Public Construction Data Example." Sustainability 13, no. 1 (January 3, 2021): 362. http://dx.doi.org/10.3390/su13010362.

Full text
Abstract:
This study aims to develop an automatic data correction system for correcting the public construction data. The unstructured nature of the construction data presents challenges for its management. The different user habits, time-consuming system operation, and long pretraining time all make the data management system full of data in an inconsistent format or even incorrect data. Processing the construction data into a machine-readable format is not only time-consuming but also labor-intensive. Therefore, this study used Taiwan’s public construction data as an example case to develop a natural language processing (NLP) and machine learning-based text classification system, coined as automatic correction system (ACS). The developed system is designed to automatically correct the public construction data, meanwhile improving the efficiency of manual data correction. The ACS has two main features: data correction that converts unstructured data into structured data; a recommendation function that provides users with a recommendation list for manual data correction. For implementation, the developed system was used to correct the data in the public construction cost estimation system (PCCES) in Taiwan. We expect that the ACS can improve the accuracy of the data in the public construction database to increase the efficiency of the practitioners in executing projects. The results show that the system can correct 18,511 data points with an accuracy of 76%. Additionally, the system was also validated to reduce the system operation time by 51.69%.
APA, Harvard, Vancouver, ISO, and other styles
19

Sobri, Ahmad, and Hasanah Tri. "PROCESSING OF THE PROCESSING INFORMATION SYSTEM POPULATION DATA IN KATURAHAN KATIAHAN WEBMOBILE BASED." IJISCS (International Journal of Information System and Computer Science) 3, no. 3 (December 24, 2019): 107. http://dx.doi.org/10.56327/ijiscs.v3i3.804.

Full text
Abstract:
Management of population administration data of related institutions plays an important role in the process of collecting population data that is relevant and can be a guide for an area that is related to the population census. In this case the population data system is one of the factors that influences population data in the area of Air Kati Village, Lubuklinggau Selatan I District, Lubuklinggau City. Air Kati Urban Village in conducting population data collection still uses manual methods, namely by visiting residents one by one to find out the number of each family head. If the data collection officer comes to a resident's house often experiences obstacles, there is no person or family head in the house. This has become an obstacle for officials in assessing population data to collect data quickly. Based on this observation the author's desire arises to create an information system that can help officers in population data collection in Air Kati village, Lubuklinggau Selatan I District, Lubuklinggau City. The purpose of this study is expected to process the management of population information systems in the data collection that is done quickly and effectively. With the web mobile based population information system design, it can facilitate the admin in inputting administrative data of the family head community in Air Kati Urban Village, reducing the time needed, reducing the possibility of errors in recording and speeding up the search process
APA, Harvard, Vancouver, ISO, and other styles
20

Sun, Bei, Yan Bai, and Zhi Yi Zhang. "Design of Automatic Output of Carbon Electrode Processing System Testing Tag Based on WinCC." Applied Mechanics and Materials 602-605 (August 2014): 2114–17. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.2114.

Full text
Abstract:
For a long time, the parameters of carbon electrode product are obtained by manual testing record which filled in the labels. This paper introduces the function of Global Script and User Archive in WinCC, which achieves printing labels automatically on the test data obtained from the processing of carbon electrode’s automatic process system. Avoid some mistakes caused by manual detection, the data storage and data query in WinCC are very convenient.
APA, Harvard, Vancouver, ISO, and other styles
21

Gimaletdinova, G. K., and E. Kh Dovtaeva. "Sentiment Analysis of Reader Comments: Automated vs Manual Text Processing." Uchenye Zapiski Kazanskogo Universiteta. Seriya Gumanitarnye Nauki 163, no. 1 (2021): 65–80. http://dx.doi.org/10.26907/2541-7738.2021.1.65-80.

Full text
Abstract:
The verbal and structural features of the reader comment, a genre of Internet communication, were studied. The method of sentiment analysis (ParallelDots API) was used to reveal and measure the emotive component of the reader comments (N = 3000) in the English and Russian languages. The results obtained were verified by the manual linguistic text analysis. The experts were specialists in the field of philology of the English and Russian languages (N = 6), students of philology, as well as native speakers of the Russian language for whom English is a foreign language, i.e., their level of proficiency is C1 (N = 4). As a result of the comparison of the data collected through the automated and manual text processing, a number of factors that reduce the reliability of the results of automated sentiment analysis of the reader comments were singled out. Difficulties hindering the objective determination of the sentiment by the program were found in the reader comments in both analyzed languages. This is indicative of the structural similarities between the English and Russian reader comments at the lexical and syntactic levels. The feasibility of the mixed automated and manual text processing in order to obtain more detailed and objective data was demonstrated. The results of this work can be used for comparative studies of two or more languages performed by the method of sentiment analysis, as well as for drawing parallels between the lexical, grammatical, and cultural components of languages.
APA, Harvard, Vancouver, ISO, and other styles
22

Pane, Ivransa Zuhdi. "Rancang Bangun Piranti Lunak Pengolah Data Pasca Pengujian Terowongan Angin Kecepatan Rendah Indonesia." Jurnal ULTIMATICS 7, no. 2 (August 1, 2016): 131–38. http://dx.doi.org/10.31937/ti.v7i2.360.

Full text
Abstract:
Data post-processing plays important roles in a wind tunnel test, especially in supporting the validation of the test results and further data analysis related to the design activities of the test objects. One effective solution to carry out the data post-processing in an automated productive manner, and thus eliminate the cumbersome conventional manual way, is building a software which is able to execute calculations and have abilities in presenting and analyzing the data in accordance with the post-processing requirement. Through several prototype development cycles, this work attempts to engineer and realize such software to enhance the overall wind tunnel test activities. Index Terms—software engineering, wind tunnel test, data post-processing, prototype, pseudocode
APA, Harvard, Vancouver, ISO, and other styles
23

Pauziah, Ulfa, Dewi Mustari, and Triyani Akhirina. "Social Service (PKM) of Application of Population Data Collection in RT. 004, Kalisuren." Mattawang: Jurnal Pengabdian Masyarakat 1, no. 2 (December 4, 2020): 95–98. http://dx.doi.org/10.35877/454ri.mattawang236.

Full text
Abstract:
The development of technology in modern times today encourages every aspect of work to be done through the computerization, with the aim to facilitate the processing of data. In addition, by the application of technology, the time and place for data storage will be more efficient, resulting in accurate and accurate residents’ data, and can be conveyed properly to those in need. Manual data processing is considered less effective in processing and calculating the large data, and this can result in the data loss. These undesirable mistakes will be very detrimental to the development of residents’ data contained in RT 04, Kalisuren. By the presence of the data collection application for the residents in RT 04, Kalisuren, the data collection process for both births and deaths that occurred in RT 04 Kalisuren can be carried out properly. Abstrak Teknologi yang semakin berkembang mendorong setiap aspek untuk merubah pekerjaan yang tadinya dikerjakan secara manual menjadi komputerisasi. Dengan tujuan mepermudah dalam proses pengelohan data. Selain itu dengan adanya aplikasi dapat mengefisienkan waktu dan tempat penyimpanan data. Oleh karena itu untuk menghasilkan data warga yang tepat dan akurat, agar dapat disampaikan dengan baik kepada yang membutuhkan. Pengolahan data secara manual bukanlah suatu hal yang salah, namun cara tersebut kurang efektif untuk melakukan pengolahan dan perhitungan data yang besar, dan hal ini dapat mengakibatkan terjadinya kehilangan data. Kesalahan-kesalahan yang tidak diinginkan itu akan sangat merugikan untuk perkembangan pedataan warga yang terdapat pada RT 04 Kalisuren. Dengan adanya aplikasi pendataan warga RT 04 kalisuren dapat membantu RT dalam proses pendataan baik kelahiran ataupun kematian yng terjadi di RT 04 Kalisuren.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Ling Fei, and Sheng Yu Wang. "The Equal Precision Measurement Data Processing System Based on Virtual Instrument." Advanced Materials Research 981 (July 2014): 647–52. http://dx.doi.org/10.4028/www.scientific.net/amr.981.647.

Full text
Abstract:
During quantitative analysis on measured object by using equal precision measurement method, there remains certain difference between the measuring result and truth value due to the impact on measuring method, measuring tools and measuring environment. In order to reduce measurement error, we usually make continuous equal precision measurements on the measured object repeatedly. Then we get the final result by theoretical calculations, error analysis and dispose on measurement data. The data processing shows complicated and error-prone .But now we take computer as a carrier, then combining with virtual instrument technology to accomplish the data-processing system. It can cover the manual computation shortage and can take humanization disposal on measurement data. Moreover, the results can show with multi-mode intuitively.
APA, Harvard, Vancouver, ISO, and other styles
25

Surniandari, Artika, and Suherman Suherman. "Pengolahan Data Keuangan Menggunakan MYOB Premier V16 Pada Bengkel Berkah." JAIS - Journal of Accounting Information System 1, no. 01 (June 1, 2021): 09–13. http://dx.doi.org/10.31294/jais.v1i01.876.

Full text
Abstract:
Bengkel Berkah Motor is a company engaged in the automotive trade and services sector that sells motorbike spare parts and repair services, Bengkel Berkah still applies a traditional manual system and has not been computerized in its business process. So it is very possible for data manipulation and errors in recording and calculating sales and purchase transactions of goods. For this reason, the Blessing Workshop requires a computerized system to assist in recording transactions and financial reports. Manual financial data processing systems require a relatively long time and require skill and accuracy in presenting financial statements. On this occasion, the authors made an example of the process of accounting records at the Motor Blessing Workshop which was previously done manually by the author, proposed to be done computerized using Myob Premier V16. Myob Premier is a computerized accounting application that provides convenience in the business administration process by integrating general ledger, finance, purchasing, sales, inventory and relationship processing functions starting from document input to financial reports, using Myob Premier is one of the best solutions for solve existing problems and with a computerized system can produce accurate financial reports, effective and efficient, making it easier for owners
APA, Harvard, Vancouver, ISO, and other styles
26

Pancerasa, Mattia, Matteo Sangiorgio, Roberto Ambrosini, Nicola Saino, David W. Winkler, and Renato Casagrandi. "Reconstruction of long-distance bird migration routes using advanced machine learning techniques on geolocator data." Journal of The Royal Society Interface 16, no. 155 (June 2019): 20190031. http://dx.doi.org/10.1098/rsif.2019.0031.

Full text
Abstract:
Geolocators are a well-established technology to reconstruct migration routes of animals that are too small to carry satellite tags (e.g. passerine birds). These devices record environmental light-level data that enable the reconstruction of daily positions from the time of twilight. However, all current methods for analysing geolocator data require manual pre-processing of raw records to eliminate twilight events showing unnatural variation in light levels, a step that is time-consuming and must be accomplished by a trained expert. Here, we propose and implement advanced machine learning techniques to automate this procedure and we apply them to 108 migration tracks of barn swallows ( Hirundo rustica ). We show that routes reconstructed from the automated pre-processing are comparable to those obtained from manual selection accomplished by a human expert. This raises the possibility of fully automating light-level geolocator data analysis and possibly analysing the large amount of data already collected on several species.
APA, Harvard, Vancouver, ISO, and other styles
27

Hou, Zhi-Wei, Cheng-Zhi Qin, A.-Xing Zhu, Peng Liang, Yi-Jie Wang, and Yun-Qiang Zhu. "From Manual to Intelligent: A Review of Input Data Preparation Methods for Geographic Modeling." ISPRS International Journal of Geo-Information 8, no. 9 (August 28, 2019): 376. http://dx.doi.org/10.3390/ijgi8090376.

Full text
Abstract:
One of the key concerns in geographic modeling is the preparation of input data that are sufficient and appropriate for models. This requires considerable time, effort, and expertise since geographic models and their application contexts are complex and diverse. Moreover, both data and data pre-processing tools are multi-source, heterogeneous, and sometimes unavailable for a specific application context. The traditional method of manually preparing input data cannot effectively support geographic modeling, especially for complex integrated models and non-expert users. Therefore, effective methods are urgently needed that are not only able to prepare appropriate input data for models but are also easy to use. In this review paper, we first analyze the factors that influence data preparation and discuss the three corresponding key tasks that should be accomplished when developing input data preparation methods for geographic models. Then, existing input data preparation methods for geographic models are discussed through classifying into three categories: manual, (semi-)automatic, and intelligent (i.e., not only (semi-)automatic but also adaptive to application context) methods. Supported by the adoption of knowledge representation and reasoning techniques, the state-of-the-art methods in this field point to intelligent input data preparation for geographic models, which includes knowledge-supported discovery and chaining of data pre-processing functionalities, knowledge-driven (semi-)automatic workflow building (or service composition in the context of geographic web services) of data preprocessing, and artificial intelligent planning-based service composition as well as their parameter-settings. Lastly, we discuss the challenges and future research directions from the following aspects: Sharing and reusing of model data and workflows, integration of data discovery and processing functionalities, task-oriented input data preparation methods, and construction of knowledge bases for geographic modeling, all assisting with the development of an easy-to-use geographic modeling environment with intelligent input data preparation.
APA, Harvard, Vancouver, ISO, and other styles
28

Siregar, M. Ilham A. "INFORMATION SYSTEM OF ADMINISTRATION ON EKASAKTI UNIVERSITY OF GRADUATE PROGRAM." Unes journal of Information System 1, no. 2 (December 31, 2016): 099. http://dx.doi.org/10.31933/ujis.1.2.099-109.2016.

Full text
Abstract:
Data processing system administration at Ekasakti Padang University of Graduate Program still use manual way in data processing, therefore, the author conducted research at University Graduate Program Ekasakti Padang by collecting information and data to be processed to be made an information system administrative data processing student consists of data entry, data processing and report generation. This information system can generate; Student Data Reports, Payment Data Report, and report graduation data. The system is designed with the needs in Padang Ekasakti University Graduate Program in order to be more effective and efficient, and is expected to administrative data processing can be optimized.
APA, Harvard, Vancouver, ISO, and other styles
29

Chafiq, Tarik, Mohammed Ouadoud, Hassane Jarar Oulidi, and Ahmed Fekri. "Application of Data Integrity Algorithm for Geotechnical Data Quality Management." International Journal of Interactive Mobile Technologies (iJIM) 12, no. 8 (December 24, 2018): 85. http://dx.doi.org/10.3991/ijim.v12i8.9569.

Full text
Abstract:
The aim of this research work is to ensure the integrity and correction of the geotechnical database which contains anomalies. These anomalies occurred mainly in the phase of inputting and/or transferring of data. The algorithm created in the framework of this paper was tested on a dataset of 70 core drillings. In fact, it is based on a multi-criteria analysis qualifying the geotechnical data integrity using the sequential approach. The implementation of this algorithm has given a relevant set of values in terms of output; which will minimalize processing time and manual verification. The application of the methodology used in this paper could be useful to define the type of foundation adapted to the nature of the subsoil, and thus, foresee the adequate budget.
APA, Harvard, Vancouver, ISO, and other styles
30

Yu, Feng, Qisheng Wang, Minjun Li, Huan Zhou, Ke Liu, Kunhao Zhang, Zhijun Wang, et al. "Aquarium: an automatic data-processing and experiment information management system for biological macromolecular crystallography beamlines." Journal of Applied Crystallography 52, no. 2 (March 14, 2019): 472–77. http://dx.doi.org/10.1107/s1600576719001183.

Full text
Abstract:
With the popularity of hybrid pixel array detectors, hundreds of diffraction data sets are collected at a biological macromolecular crystallography (MX) beamline every day. Therefore, the manual processing and recording procedure will be a very time-consuming and error-prone task. Aquarium is an automatic data processing and experiment information management system designed for synchrotron radiation source MX beamlines. It is composed of a data processing module, a daemon module and a web site module. Before experiments, the sample information can be registered into a database. The daemon module will submit data processing jobs to a high-performance cluster as soon as the data set collection is completed. The data processing module will automatically process data sets from data reduction to model building if the anomalous signal is available. The web site module can be used to monitor and inspect the data processing results.
APA, Harvard, Vancouver, ISO, and other styles
31

Tian, Guo Fu, Shi Jie Wang, Shu Hui Sun, and Zhong Wei Ren. "Application of Neural-Fuzzy System in Data Processing of Hydraulic Torque Converter’s Performance Test." Advanced Materials Research 308-310 (August 2011): 1782–87. http://dx.doi.org/10.4028/www.scientific.net/amr.308-310.1782.

Full text
Abstract:
Aimed at the need of data processing and analysis in the performance test of hydraulic torque converter, we put forward a way of test data analysis using neural-fuzzy system. It can reduce manual participation, improve the data processing ability, accelerate computing velocity and realize data handling automation. We can judge product’s performance by this method. The use in experiment data indicates that it can express the relation of original data nicely. The identifying accuracy of data is good. It can satisfy the request of test data analysis and the analytical conclusion is accurate too.
APA, Harvard, Vancouver, ISO, and other styles
32

McCormack, Michael D., David E. Zaucha, and Dennis W. Dushek. "First‐break refraction event picking and seismic data trace editing using neural networks." GEOPHYSICS 58, no. 1 (January 1993): 67–78. http://dx.doi.org/10.1190/1.1443352.

Full text
Abstract:
Interactive seismic processing systems for editing noisy seismic traces and picking first‐break refraction events have been developed using a neural network learning algorithm. We employ a backpropagation neural network (BNN) paradigm modified to improve the convergence rate of the BNN. The BNN is interactively “trained” to edit seismic data or pick first breaks by a human processor who judiciously selects and presents to the network examples of trace edits or refraction picks. The network then iteratively adjusts a set of internal weights until it can accurately duplicate the examples provided by the user. After the training session is completed, the BNN system can then process new data sets in a manner that mimics the human processor. Synthetic modeling studies indicate that the BNN uses many of the same subjective criteria that humans employ in editing and picking seismic data sets. Automated trace editing and first‐break picking based on the modified BNN paradigm achieve 90 to 98 percent agreement with manual methods for seismic data of moderate to good quality. Productivity increases over manual editing, and picking techniques range from 60 percent for two‐dimensional (2-D) data sets and up to 800 percent for three‐dimensional (3-D) data sets. Neural network‐based seismic processing can provide consistent and high quality results with substantial improvements in processing efficiency.
APA, Harvard, Vancouver, ISO, and other styles
33

Churnside, James H., Eirik Tenningen, and James J. Wilson. "Comparison of data-processing algorithms for the lidar detection of mackerel in the Norwegian Sea." ICES Journal of Marine Science 66, no. 6 (February 21, 2009): 1023–28. http://dx.doi.org/10.1093/icesjms/fsp026.

Full text
Abstract:
Abstract Churnside, J. H., Tenningen, E., and Wilson, J. J. 2009. Comparison of data-processing algorithms for the lidar detection of mackerel in the Norwegian Sea. – ICES Journal of Marine Science, 66: 1023–1028. A broad-scale lidar survey was conducted in the Norwegian Sea in summer 2002. Since then, various data-processing techniques have been developed, including manual identification of fish schools, multiscale median filtering, and curve fitting of the lidar profiles. In the automated techniques, applying a threshold to the data, as carrried out already to eliminate plankton scattering, has been demonstrated previously to improve the correlation between lidar and acoustic data. We applied these techniques to the lidar data of the 2002 survey and compared the results with those of a mackerel (Scomber scombrus) survey done by FV “Endre Dyrøy” and FV “Trønderbas” during the same period. Despite a high level of variability in both lidar and trawl data, the broad-scale distribution of fish inferred from the lidar agreed with that of mackerel caught by the FV “Endre Dyrøy”. This agreement was obtained using both manual and automated processing of the lidar data. This work is the first comparison of concurrent lidar and trawl surveys, and it demonstrates the utility of airborne lidar for mackerel studies.
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Ning. "Financial Big Data Management and Control and Artificial Intelligence Analysis Method Based on Data Mining Technology." Wireless Communications and Mobile Computing 2022 (May 29, 2022): 1–13. http://dx.doi.org/10.1155/2022/7596094.

Full text
Abstract:
Driven by capital and Internet information (IT) technology, the operating scale and capital scale of modern industrial and commercial enterprises and various organizations have increased exponentially. At present, the manual-based financial work model has been unable to adapt to the changing speed of the modern business environment and the business rhythm of enterprises. All kinds of enterprises and organizations, especially large enterprises, urgently need to improve the operational efficiency of financial systems. By enhancing the integrity, timeliness, and synergy of financial information, it improves the comprehensiveness and ability of analyzing complex problems in financial analysis. It can cope with such rapid changes and help improve the financial management capabilities of enterprises. It provides more valuable decision-making guidance for business operations and reduces business risks. In recent years, the vigorous development of artificial intelligence technology has provided a feasible solution to meet the urgent needs of enterprises. Combining data mining, deep learning, image recognition, natural language processing, knowledge graph, human-computer interaction, intelligent decision-making, and other artificial intelligence technologies with IT technology to transform financial processes, it can significantly reduce the processing time of repetitive basic financial processes, reduce the dependence on manual accounting processing, and improve the work efficiency of the financial department. Through the autonomous analysis and decision-making of artificial intelligence, the intelligentization of financial management is realized, and more accurate and effective financial decision-making support is provided for enterprises. This paper studies the company’s intelligent financial reengineering process, so as to provide reference and reference for other enterprises to upgrade similar financial systems. The results of the analysis showed that at the level of α = 0.05 , there was a significant difference in the mean between the two populations. When the r value is in the range of -1 and 1, the linear relationship between the x and y variables is more obvious. This paper proposes decision-making suggestions and risk control early warning to the group decision-making body, or evaluates the financial impact of the group’s decision-making, and opens the road to financial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Jason, Ming-Hsien Yang, and Tian-Lih Koo. "A Control-Data-Mapping Entity-Relationship Model for Internal Controls Construction in Database Design." International Journal of Knowledge-Based Organizations 4, no. 2 (April 2014): 20–36. http://dx.doi.org/10.4018/ijkbo.2014040102.

Full text
Abstract:
The internal controls construction of a transaction system is important to management, operation and auditing. In the environment of manual operation, the internal controls of the transaction process are all done by manual mechanism. However, after the transaction processing environment has been changed from manual operation to computerized operation, the internal control techniques have been gradually transformed from manual mechanisms to computerized methods. The essence of internal controls in operational activities is the data expressions or constraints. The adoption of information systems often results in internal control deficiencies and operating risks due to the data unavailable in database for the data expressions of internal controls. Hence, how to design database schema to support internal controls mechanism is becoming a crucial issue for a computerized enterprise. Therefore, this paper referred Entity-Relationship model (ER model) in order to propose a Control-Data-Mapping Entity-relationship (CDMER) model by manipulating the required fields of tables to design database to support internal controls construction. Finally, a simple simulated case is prepared for illustration of the CDMER model. The contribution of this paper is to enhance the reliability of information systems through internal controls construction by applying the model to design databases.
APA, Harvard, Vancouver, ISO, and other styles
36

Padeli, Padeli, Gustina Gustina, and Muhammad Fiqih Firmansyah. "Sistem Informasi Pengolahan Data Atlet Berbasis Web Pada Disporabudpar Tangerang." Journal CERITA 8, no. 1 (February 8, 2022): 36–46. http://dx.doi.org/10.33050/cerita.v8i1.2129.

Full text
Abstract:
Information system needs cover almost all areas of life. Accurate, fast, and relevant information is needed in organizations. But in reality, due to the lack or limited use of information systems, sometimes it is not in sync with the wishes and expectations that are to be realized. Disporabudpar Tangerang Regency is an organization that needs to develop information technology that can change its data processing. The athlete data processing system at Disporabudpar is still in the form of manual files using documents and the stored data is still in the form of Ms. Word and Excel. This can slow down the process of inputting and processing data as well as the risk of errors and inaccuracies in writing which can cause the reporting process to take a lot of time. The research analysis method uses Waterfall and Unified Modeling Language (UML) as a design method based on the PHP programming language with MySQL database. Due to the above, a web-based athlete data processing information system is needed to be implemented at the Tangerang Disporabudpar.
APA, Harvard, Vancouver, ISO, and other styles
37

Yano, Naomine, Taro Yamada, Takaaki Hosoya, Takashi Ohhara, Ichiro Tanaka, Nobuo Niimura, and Katsuhiro Kusaka. "Status of the neutron time-of-flight single-crystal diffraction data-processing software STARGazer." Acta Crystallographica Section D Structural Biology 74, no. 11 (October 29, 2018): 1041–52. http://dx.doi.org/10.1107/s2059798318012081.

Full text
Abstract:
The STARGazer data-processing software is used for neutron time-of-flight (TOF) single-crystal diffraction data collected using the IBARAKI Biological Crystal Diffractometer (iBIX) at the Japan Proton Accelerator Research Complex (J-PARC). The software creates hkl intensity data from three-dimensional (x, y, TOF) diffraction data. STARGazer is composed of a data-processing component, which calculates the hkl intensity data, and a data-visualization component, which displays the three-dimensional diffraction data with searched or predicted peak positions and is used to determine and confirm integration regions. STARGazer has been developed to be easier to use and to yield more accurate intensity data: for example, a profile-fitting method for peak integration was developed and the data statistics were improved. STARGazer and its manual, covering installation and data processing, have been prepared and provided to iBIX users. This article describes the status of the STARGazer data-processing software and its data-processing algorithms.
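As a loose one-dimensional analogue of profile-fitting peak integration (STARGazer fits profiles in three-dimensional x, y, TOF data, so this is a strong simplification), the Python sketch below fits a Gaussian profile to synthetic counts and integrates the fitted peak.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic 1D peak on a flat background -- illustration only.
def gaussian(t, amplitude, center, width, background):
    return amplitude * np.exp(-0.5 * ((t - center) / width) ** 2) + background

t = np.linspace(0, 100, 200)
rng = np.random.default_rng(1)
counts = gaussian(t, 120.0, 50.0, 4.0, 10.0) + rng.normal(0, 3, t.size)

popt, _ = curve_fit(gaussian, t, counts, p0=[100, 45, 5, 5])
amplitude, center, width, background = popt
# Integrated intensity of a Gaussian peak: amplitude * width * sqrt(2*pi).
intensity = amplitude * abs(width) * np.sqrt(2 * np.pi)
print(f"fitted peak intensity = {intensity:.1f}")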
APA, Harvard, Vancouver, ISO, and other styles
38

Mubarok, Farid, Harliana Harliana, and Ijah Hadijah. "Perbandingan Antara Metode RUP dan Prototype Dalam Aplikasi Penerimaan Siswa Baru Berbasis Web." Creative Information Technology Journal 2, no. 2 (April 4, 2015): 114. http://dx.doi.org/10.24076/citec.2015v2i2.42.

Full text
Abstract:
The registration and enrollment process still uses a manual method: paper registration forms are recapitulated by the PSB (new student admissions) committee into a ledger and then entered into the computer. Because the number of applicants is large and keeps growing, inputting the data, processing it, and delivering the resulting admissions information requires substantial time. To overcome this, a web-based new student enrollment application was built to support the committee in processing admissions data and delivering information, with a comparative analysis of two software development methods, RUP and Prototype. The goal is a new student registration system built with whichever method proves more accurate, delivering the desired information and reports quickly.
APA, Harvard, Vancouver, ISO, and other styles
39

Wulandari, Putri, Weni Lestari Putri, and Pratiwi Hendro Wahyudiono. "Sistem Informasi Pengolahan Data Peternakan Ayam Kampung Pada CV. Dua Saudara Berbasis Web Mobile." Journal of Engineering, Technology, and Applied Science 4, no. 1 (April 24, 2022): 14–21. http://dx.doi.org/10.36079/lamintang.jetas-0401.351.

Full text
Abstract:
CV. Dua Saudara is a business in the poultry farming industry, specifically free-range broiler chickens, located in Tanjung Pinang. Data processing is still done by manual recording, which makes employees slow and less effective in their work. Moreover, given the large amount of data to be processed, the chance of errors during manual recording is very high. The purpose of this research is to build an information system for processing chicken farm data using Bootstrap, PHP, and MySQL as the database. The system is designed as a mobile web application using the system development life cycle (SDLC) method (planning, analysis, design, implementation, and maintenance) and several data collection methods: literature study, observation, and interviews with the business owner. The modeling uses Data Flow Diagrams (DFD) and Entity Relationship Diagrams (ERD). The results of this research are expected to let the employees and owners of CV. Dua Saudara process and access data easily and make data processing more effective and efficient. Keywords: Blackbox Testing, Information System, Web Mobile.
APA, Harvard, Vancouver, ISO, and other styles
40

Windarto, Yudi Eko, Ike Pertiwi Windasari, and Moh Aufal Marom Arrozi. "IMPLEMENTASI SIMPLE MULTI ATTRIBUTE RATING TECHNIQUE UNTUK PENENTUAN TEMPAT PEMBUANGAN AKHIR." Jurnal Pengembangan Rekayasa dan Teknologi 15, no. 1 (July 16, 2019): 12. http://dx.doi.org/10.26623/jprt.v15i1.1484.

Full text
Abstract:
Garbage is a difficult problem to solve. Data in the environmental sector is still processed by traditional, manual methods, which are no longer adequate; better data processing is needed to obtain accurate data and spread information effectively and efficiently. A system is therefore needed that makes data processing more effective, efficient, and accurate. The Simple Multi Attribute Rating Technique (SMART) is one of several decision support system methods that can bring assistance, efficiency, and accuracy to data processing. The method is implemented here to determine the best final disposal area (FDA) among the districts, with the results shown visually in a Geographic Information System (GIS). The system provides an overview of the best areas in which to build a final disposal site; in this case, the data processing was carried out in Pemalang District, Central Java Province. The fundamental difference between the SMART method and other methods is its simplicity and efficiency in processing multi-criteria data; the parameters with the greatest effect on site suitability are land use and hydrogeology. The Geographic Information System provides a main menu, criteria weight values, relative weight values, alternative and criteria values, assessments, and mapping factor values.
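At its core, SMART ranks alternatives by a weighted sum of normalized criterion utilities. A minimal Python sketch follows; the site names, criteria, weights, and scores are hypothetical, not taken from the study.

import numpy as np

# SMART: normalize each criterion to [0, 1], invert cost criteria,
# weight, sum, and rank the alternatives.
criteria = ["land_use", "hydrogeology", "distance_to_housing"]
weights = np.array([0.5, 0.3, 0.2])          # must sum to 1 (assumed)
benefit = np.array([True, True, False])      # False = cost criterion

sites = {"Site A": [70, 60, 4.0], "Site B": [55, 80, 2.5], "Site C": [90, 40, 6.0]}
scores = np.array(list(sites.values()), dtype=float)

lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)             # utility for benefit criteria
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]  # invert cost criteria

totals = norm @ weights
for name, total in sorted(zip(sites, totals), key=lambda p: -p[1]):
    print(f"{name}: {total:.3f}")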
APA, Harvard, Vancouver, ISO, and other styles
41

Nasihin, Muhamad. "Penerapan Zahir Accounting Untuk Pengelolaan Data Akutansi Pada CV. Kevindo Auto." Jurnal Sistem Informasi Akuntansi 2, no. 1 (March 4, 2021): 72–78. http://dx.doi.org/10.31294/justian.v2i01.394.

Full text
Abstract:
CV. Kevindo Auto is a company engaged in the automotive business of buying and selling used cars. Its accounting data is still processed by manual recording, which carries high risks: frequent errors in recording transactions, difficulty finding transaction evidence, and loss of records of transactions that occur at the company. CV. Kevindo Auto needs a computerized system that makes it easy to record every transaction through to the reporting process. The Zahir Accounting program is the right solution for the problems occurring in this company: it shortens the time taken and simplifies the management of accounting data, and storing accounting data on a computer makes filing safer and easier and speeds up the reporting process.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Hong, Lixia Hao, Amit Sharma, and Ashima Kukkar. "Automatic control of computer application data processing system based on artificial intelligence." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 177–92. http://dx.doi.org/10.1515/jisys-2022-0007.

Full text
Abstract:
To shorten travel time and improve comfort, automatic train driving systems are being considered as a replacement for manual driving. This article proposes an automatic control method for a computer-application data-processing system based on artificial intelligence. It introduces the structure and function of an automatic train operation (ATO) system, optimizes the train's target running curve, presents the basic principle of the fuzzy generalized predictive control (PC) algorithm, designs a speed controller based on the optimization algorithm and the characteristics of the ATO system, and uses the designed controller to track the target curve in simulation for validation. The experimental outcomes demonstrate that when the train runs to 90 s, the displacement difference reaches about 40 m, showing that fuzzy PC achieves better displacement tracking, punctual arrival, and higher stopping accuracy.
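The paper's fuzzy generalized predictive controller is too involved for a short sketch, so the Python example below deliberately substitutes a plain proportional controller to illustrate only the underlying ATO task: tracking a target speed curve under actuator limits. Every value here is an assumption.

import numpy as np

# Simplified stand-in for the ATO tracking problem: follow a target
# speed curve with a proportional controller and bounded acceleration.
dt, horizon = 0.5, 120.0                  # time step and run length (s)
t = np.arange(0.0, horizon, dt)
target = np.minimum(0.5 * t, 20.0)        # accelerate to 20 m/s, then hold

kp = 0.8                                  # proportional gain (assumed)
v = 0.0
errors = []
for target_v in target:
    accel = np.clip(kp * (target_v - v), -1.0, 1.0)   # actuator limits
    v += accel * dt
    errors.append(target_v - v)

print(f"max speed tracking error = {max(abs(e) for e in errors):.2f} m/s")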
APA, Harvard, Vancouver, ISO, and other styles
43

Deibe, David, Margarita Amor, and Ramón Doallo. "Big Data Geospatial Processing for Massive Aerial LiDAR Datasets." Remote Sensing 12, no. 4 (February 21, 2020): 719. http://dx.doi.org/10.3390/rs12040719.

Full text
Abstract:
For years, Light Detection and Ranging (LiDAR) technology has been considered a challenge when it comes to developing efficient software to handle the extremely large volumes of data this surveying method is able to collect. In contexts such as this, big data technologies have been providing powerful solutions for distributed storage and computing. In this work, a big data approach to geospatial processing for massive aerial LiDAR point clouds is presented. By using Cassandra and Spark, our proposal is intended to support the execution of any kind of heavy, time-consuming process; nonetheless, as an initial case study, we have focused on fast generation of ground-only rasters to produce digital terrain models (DTMs) from massive LiDAR datasets. Filtered clouds obtained from the isolated processing of adjacent zones may exhibit errors on the boundaries of the zones in the form of misclassified points. Usually, this type of error is corrected through manual or semi-automatic procedures. In this work, we also present an automated strategy for correcting errors of this type, improving the quality of the classification process and the DTMs obtained while minimizing user intervention. The autonomous nature of all computing stages, along with the low processing times achieved, opens the possibility of considering the system as a highly scalable service-oriented solution for on-demand DTM generation or any other geospatial process. Such a solution would be a highly useful and unique service for many users in the LiDAR field, and one which could approach real-time processing given appropriate computational resources.
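One step described above, rasterizing ground-classified points into a DTM, reduces per grid cell to keeping the lowest return. The paper distributes this kind of work with Cassandra and Spark; the Python sketch below is a single-machine illustration on synthetic points.

import numpy as np

# Synthetic ground-classified points over a 100 m x 100 m tile.
rng = np.random.default_rng(2)
points = np.column_stack([
    rng.uniform(0, 100, 10_000),   # x (m)
    rng.uniform(0, 100, 10_000),   # y (m)
    rng.uniform(50, 55, 10_000),   # z (m), assumed already ground-only
])

cell = 5.0                                    # raster resolution (m), assumed
cols = int(100 / cell)
ix = (points[:, 0] // cell).astype(int).clip(0, cols - 1)
iy = (points[:, 1] // cell).astype(int).clip(0, cols - 1)

dtm = np.full((cols, cols), np.nan)
for x, y, z in zip(ix, iy, points[:, 2]):
    if np.isnan(dtm[y, x]) or z < dtm[y, x]:
        dtm[y, x] = z                         # keep lowest return per cell

print(f"DTM grid {dtm.shape}, mean elevation {np.nanmean(dtm):.2f} m")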
APA, Harvard, Vancouver, ISO, and other styles
44

Skvarekova, Erika, Gabriel Wittenberger, Andrea Senova, Tomas Bakalar, and Rastislav Harcarufka. "An Innovative Process for Efficient Data Evaluation in an Atmospheric Geochemical Survey of Contaminated Soil." Civil and Environmental Engineering Reports 30, no. 4 (December 1, 2020): 173–87. http://dx.doi.org/10.2478/ceer-2020-0058.

Full text
Abstract:
The article focuses on innovative approaches to efficient data processing in research on organic soil pollution, using a soil air analyzer to survey soil contamination in situ at industrial enterprises in Slovakia. The article reviews theoretical knowledge of the geological environment and presents the authors' own survey, following the processing and evaluation of the measured values obtained (e.g., CO2, CH4, NEL, BTEX). Standard data-processing procedures using the supplied software currently offer only basic or limited functionality, and processing takes several hours, including manual and repetitive tasks. As the article shows, the new Windows PowerShell tool processes the data far more efficiently, reducing processing time by 86%. There is currently no more suitable or faster way of evaluating the measured data in Slovakia and the Czech Republic.
APA, Harvard, Vancouver, ISO, and other styles
45

Cowley, Benjamin U., Jussi Korpela, and Jari Torniainen. "Computational testing for automated preprocessing: a Matlab toolbox to enable large scale electroencephalography data processing." PeerJ Computer Science 3 (March 6, 2017): e108. http://dx.doi.org/10.7717/peerj-cs.108.

Full text
Abstract:
Electroencephalography (EEG) is a rich source of information regarding brain function. However, preprocessing EEG data can be quite complicated, for several reasons: the distinction between true neural sources and noise is indeterminate, and EEG datasets can be very large. These factors force a large number of subjective decisions, with a consequent risk of compound error. Existing tools present the experimenter with a large choice of analysis methods, yet it remains a challenge for the researcher to integrate methods for batch-processing today's typically large datasets, and to compare methods in order to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making, e.g. for classifying artefacts in channels, epochs, or segments; this introduces extra subjectivity, is slow, and is not reproducible. Batching and well-designed automation can help to regularise EEG preprocessing, and thus reduce human effort, subjectivity, and consequent error. We present the computational testing for automated preprocessing (CTAP) toolbox, which facilitates (i) batch-processing that is easy for experts and novices alike and (ii) testing and manual comparison of preprocessing methods. CTAP extends the existing data structure and functions of the well-known Matlab-based EEGLAB toolbox and produces extensive quality-control outputs. CTAP is available under the MIT licence from https://github.com/bwrc/ctap.
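CTAP itself is a Matlab/EEGLAB toolbox, so the sketch below is only a language-neutral illustration of the batching idea it advocates: one scripted step applied identically to every recording, with no per-file manual decisions. The directory layout, file format, and sampling rate are all assumptions.

from pathlib import Path
import numpy as np
from scipy.signal import butter, filtfilt

# One scripted preprocessing step (a 1-40 Hz band-pass) applied to
# every recording in a folder -- the batching pattern, not CTAP's API.
def bandpass(data, low_hz, high_hz, fs, order=4):
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

fs = 250.0  # sampling rate (Hz), assumed
for path in sorted(Path("recordings").glob("*.npy")):   # hypothetical layout
    eeg = np.load(path)                  # assumed shape: (channels, samples)
    filtered = bandpass(eeg, 1.0, 40.0, fs)
    np.save(path.with_suffix(".filtered.npy"), filtered)
    print(f"{path.name}: filtered {eeg.shape[0]} channels")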
APA, Harvard, Vancouver, ISO, and other styles
46

Ferreira De Menezes, Daniel, Eduardo Da Silva Felix, Jaqueline Silva De Souza Pinheiro, and Zaida Maria Marques Tavares. "IMPACTS OF TECHNOLOGY IN MANUAL PROCESSES IN THE MIDST OF ORGANIZATIONAL SECURITY." International Journal of Advanced Research 10, no. 11 (November 30, 2022): 1271–80. http://dx.doi.org/10.21474/ijar01/15790.

Full text
Abstract:
The article studies a military organization and its old-format management of vehicle and pedestrian traffic, covering the collection, processing, and storage of this data, and proposes a solution that facilitates these processes: a mobile application for the Android operating system that uses smartphone technologies such as the camera and QR codes, giving a complete and pleasant experience both to users and to the people responsible for managing the site. Usability tests were performed in the field, where users had access to the fully operational application and generated traffic data during its use. It is concluded that the application provided better management, safety, comfort, and better data processing.
APA, Harvard, Vancouver, ISO, and other styles
47

Khomutov, Sergey. "XVI IAGA Workshop on Geomagnetic Observatory Instruments, Data Acquisition and Processing. Hyderabad, India, October 2014: Brief Review." Solnechno-Zemnaya Fizika 1, no. 4 (December 17, 2015): 86–89. http://dx.doi.org/10.12737/13572.

Full text
Abstract:
A brief review of the XVI IAGA Workshop on Geomagnetic Observatory Instruments, Data Acquisition and Processing (Hyderabad, India, October 2014) is presented. Much attention is given to new magnetometers and to software for the practical work of magnetologists, as well as to archived data. Reports on new devices point to the tendency that, in the near future, the technique for obtaining total field vector data adopted by INTERMAGNET will remain unchanged: a combination of absolute (manual) and variation measurements. The community's low interest in software for fully processing magnetic measurements directly at observatories is also noted.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhao, Jianghong, Yan Dong, Siyu Ma, Huajun Liu, Shuangfeng Wei, Ruiju Zhang, and Xi Chen. "An Automatic Density Clustering Segmentation Method for Laser Scanning Point Cloud Data of Buildings." Mathematical Problems in Engineering 2019 (July 7, 2019): 1–13. http://dx.doi.org/10.1155/2019/3026758.

Full text
Abstract:
Segmentation is an important step in point cloud feature extraction and three-dimensional modelling, and it remains a challenging problem in point cloud processing. The DBSCAN method has some disadvantages, such as requiring manual definition of its parameters and low efficiency on computation-heavy workloads. This paper proposes the AQ-DBSCAN algorithm, a density-based clustering segmentation method combined with Gaussian mapping. The algorithm improves upon DBSCAN by solving the problem of automatically estimating the neighborhood radius parameter. The improved algorithm carries out density clustering quickly by reducing the amount of computation required.
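The paper's Gaussian-mapping estimator is not reproduced here; a common baseline for removing the manual choice of the neighborhood radius (eps) is the k-distance heuristic, sketched below in Python on synthetic points.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

# Synthetic point cloud: three well-separated blobs in 3D.
rng = np.random.default_rng(3)
cloud = np.vstack([
    rng.normal(loc, 0.3, (300, 3)) for loc in ([0, 0, 0], [5, 5, 0], [0, 5, 2])
])

# Estimate eps from the distribution of k-nearest-neighbor distances
# instead of hand-tuning it; the 90th percentile is an assumed cutoff.
min_samples = 8
knn = NearestNeighbors(n_neighbors=min_samples).fit(cloud)
distances, _ = knn.kneighbors(cloud)
eps = np.percentile(distances[:, -1], 90)

labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cloud)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"estimated eps = {eps:.2f}, clusters found = {n_clusters}")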
APA, Harvard, Vancouver, ISO, and other styles
49

Gupta, Rajeev, Jon D. Fricker, and David P. Moffett. "Reduction of Video License Plate Data." Transportation Research Record: Journal of the Transportation Research Board 1804, no. 1 (January 2002): 31–38. http://dx.doi.org/10.3141/1804-05.

Full text
Abstract:
Video license plate surveys have been used for more than a decade in Indiana to help produce origin-destination tables in corridors and small areas. In video license plate surveys, license plate images are captured on videotape for data reduction at the analyst’s office. In most cases, the letters and numbers on a license plate are manually transcribed to a data file. This manual process is tedious, time-consuming, and expensive. Although automated license plate readers are being implemented with success elsewhere, their dependence on high-end equipment makes them too expensive for most applications in Indiana. Presented are the results of an attempt to use standard video cameras and tapes, readily available video processing equipment, and open-source software to minimize the human role in the data reduction process and thus reduce the expenses involved. The process of automatically transcribing video data can be divided into subprocesses. Analog video data are digitized and stored on a computer hard disk. The resulting digital images are further processed, by using image-processing algorithms, to locate and extract the license plate and time stamp information. Character recognition techniques can then be applied to read the license plate number into an electronic file for the desired analysis. The described video license plate data reduction (VLPDR) software can identify video frames that contain vehicles and discard the remaining frames. VLPDR can locate and read the time stamps in most of these frames. Although VLPDR cannot read the license plate numbers into a data file, this final step is made easier by a user-friendly graphical user interface. VLPDR saves a significant amount of manual data reduction. The amount of labor saved depends on the parameters chosen by the user.
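The frame-screening step described above, identifying video frames that contain vehicles and discarding the rest, can be approximated with simple frame differencing. A minimal Python/OpenCV sketch follows; the file name and thresholds are assumptions, and VLPDR's actual pipeline is not reproduced.

import cv2

# Keep only frames that differ enough from the previous frame, on the
# assumption that large pixel changes indicate a passing vehicle.
cap = cv2.VideoCapture("survey.avi")      # hypothetical survey tape
kept, previous = 0, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if previous is not None:
        diff = cv2.absdiff(gray, previous)
        if (diff > 25).mean() > 0.02:     # >2% of pixels changed noticeably
            kept += 1                     # candidate vehicle frame
    previous = gray
cap.release()
print(f"kept {kept} candidate frames")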
APA, Harvard, Vancouver, ISO, and other styles
50

Kuimova, Olga, Vladislav Kukartsev, Artem Stupin, Ekaterina Markevich, and Stanislav Apanasenko. "Using machine learning methods in problems with large amounts of data." SHS Web of Conferences 116 (2021): 00080. http://dx.doi.org/10.1051/shsconf/202111600080.

Full text
Abstract:
This article explores the use of artificial intelligence in medicine, in particular in radiology, pathology, and drug development. It shows the usefulness of robotic assistants in the medical field, including machine learning in medical science and routing in hospitals. It also discusses machine learning methods such as classification, regression, and clustering. The article concludes that manual processing becomes harder, and ultimately impossible, as data volumes grow large, creating a need for the automatic processing that can transform modern medicine. It also concludes that deep learning mechanisms can process and classify images more accurately than human-level results. Deep learning not only aids in the selection and extraction of features, but also has the potential to provide proactive predictions that help clinicians go a long way.
APA, Harvard, Vancouver, ISO, and other styles