Journal articles on the topic 'Electronic data processing departments'

Consult the top 50 journal articles for your research on the topic 'Electronic data processing departments.'


1

Zhu, Yongjie, and Youcheng Li. "A Data Sharing and Integration Technology for Heterogeneous Databases." International Journal of Circuits, Systems and Signal Processing 16 (January 10, 2022): 232–38. http://dx.doi.org/10.46300/9106.2022.16.28.

Abstract:
For a long time, large numbers of heterogeneous databases have existed on networks, and their heterogeneity is manifested in many aspects. With the development of enterprise informatization and e-government, the database of each department, with its independence and autonomy, forms a genuinely heterogeneous database landscape across the network systems of many different functional departments. This paper designs information sharing between the heterogeneous databases of the network database systems of many similar functional departments using an XML data model. Solving data sharing between heterogeneous databases can accelerate the integration of information systems centered on departments and businesses, form a broader and more efficient organic whole, increase the speed of business processing, broaden business coverage, and strengthen cooperation and exchange among enterprises. In addition, heterogeneous database sharing avoids the waste of data resources caused by database heterogeneity and improves the availability of data resources. Owing to the advantages of the XML data model, the system has good scalability.
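As a hedged illustration of the XML-based sharing this abstract describes, the sketch below maps two hypothetical department schemas onto one shared XML vocabulary; the field names, records, and mappings are invented for illustration and are not taken from the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical records from two departments whose databases use
# different field names for the same underlying entity.
dept_a_row = {"emp_no": "101", "full_name": "Li Wei", "dept": "Finance"}
dept_b_row = {"id": "202", "name": "Zhao Min", "department": "Audit"}

# Per-source field mappings into one shared XML vocabulary.
MAPPINGS = {
    "dept_a": {"emp_no": "id", "full_name": "name", "dept": "department"},
    "dept_b": {"id": "id", "name": "name", "department": "department"},
}

def to_shared_xml(source, row):
    """Map a source-specific row onto the shared <employee> element."""
    emp = ET.Element("employee", source=source)
    for local_field, shared_field in MAPPINGS[source].items():
        ET.SubElement(emp, shared_field).text = row[local_field]
    return emp

root = ET.Element("shared_data")
root.append(to_shared_xml("dept_a", dept_a_row))
root.append(to_shared_xml("dept_b", dept_b_row))
xml_text = ET.tostring(root, encoding="unicode")

# Any participating system can now parse the shared document
# without knowing the original schemas.
parsed = ET.fromstring(xml_text)
names = [e.findtext("name") for e in parsed.iter("employee")]
```

Because every source is translated into the same element vocabulary, adding a new department only requires a new entry in the mapping table, which is the scalability advantage the abstract attributes to the XML data model.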
2

Altamimi, Mohammed H., Maalim A. Aljabery, and Imad S. Alshawi. "Big Data Framework Classification for Public E-Governance Using Machine Learning Techniques." Basrah Researches Sciences 48, no. 2 (December 30, 2022): 112–22. http://dx.doi.org/10.56714/bjrs.48.2.11.

Abstract:
Using machine learning (ML) in many fields has shown remarkable results, especially in government data analysis, classification, and prediction. This technology has been applied to the national ID data of the Electronic Civil Registry (ECR). It is used to analyze these data and to create an e-government project joining the national ID with three government departments (Military, Social Welfare, and Statistics-Planning). The proposed system works in two parts, online and offline, at the same time, based on five ML algorithms: Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbor (KNN), Random Forest (RF), and Naive Bayes (NB). The offline part of the system applies the pre-processing and classification stages to the ECR and then predicts what government departments need in the online part. The system chooses the best classification algorithm, which shows excellent results for each government department when online communication is made between the department and the national ID. According to the simulation results, classification accuracy is around 100%, 99%, and 100% for the military department with the SVM classifier, the social welfare department with the RF classifier, and the statistics-planning department with the SVM classifier, respectively.
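K-Nearest Neighbor, one of the five algorithms the authors compare, can be sketched in a few lines of plain Python; the toy feature vectors and department labels below are hypothetical stand-ins, not the ECR data.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points (Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy civil-registry-style records: (feature vector, department label).
train = [
    ((20, 0), "military"), ((22, 0), "military"), ((19, 0), "military"),
    ((70, 1), "welfare"), ((65, 1), "welfare"), ((72, 1), "welfare"),
]
pred = knn_predict(train, (21, 0), k=3)
```

In a comparison like the paper's, the same interface would be run for each candidate algorithm and the best performer per department retained for the online part.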
3

Dudeck, J., G. Junghans, K. Marquardt, P. Sebald, A. Michel, and H. U. Prokosch. "WING – Entering a New Phase of Electronic Data Processing at the Gießen University Hospital." Methods of Information in Medicine 30, no. 04 (1991): 289–98. http://dx.doi.org/10.1055/s-0038-1634851.

Abstract:
At the Gießen University Hospital, electronic data processing systems have been in routine use since 1975. In the early years, development focused on ADT functions (admission/discharge/transfer) and laboratory systems. In the next decade, additional systems were introduced to support various functional departments. In the mid-eighties, the need to stop the ongoing trend towards more and more separate standalone systems was recognized, and it was decided to launch a strategic evaluation and planning process to lay the foundation for an integrated hospital information system (HIS). The evaluation of the HELP system for its portability to the German hospital environment was the first step in this process. Despite its recognized capabilities in integrating decision support and communication technologies, and its powerful HIS development tools, the large differences between American and German hospital organization, which affect all existing HELP applications, and the incompatibility of the HELP tools with modern software standards were two important factors that forced the investigation of alternative solutions. With the HELP experience in mind, a HIS concept for the Gießen University Hospital was developed. This concept centers on a centralized relational patient database on a highly reliable database server, with clinical front-end applications that may run on various other computer systems (mainframes, departmental UNIX satellites, or PCs in a LAN), integrated into a comprehensive open HIS network. The first step towards this integrated approach was the implementation of ADT and results-reporting functions on care units.
4

Samoylov, P. A. "ELECTRONIC CRIME INCIDENT REPORT AS A REASON TO INITIATE A CRIMINAL CASE (THE COMPARATIVE ANALYSIS OF LEGAL REGULATION IN THE NORMATIVE ACTS OF THE RUSSIAN MINISTRY OF INTERNAL AFFAIRS AND THE RF CRIMINAL PROCEDURE CODE)." Vektor nauki Tol’attinskogo gosudarstvennogo universiteta. Seria Uridicheskie nauki, no. 2 (2021): 44–50. http://dx.doi.org/10.18323/2220-7457-2021-2-44-50.

Abstract:
The integration and active application of electronic document flow in the daily activities of the police have consistently and logically led to the electronic crime incident report being used ever more often as a reason to initiate criminal cases. The departmental normative legal acts of the Ministry of Internal Affairs of Russia regulate the processing of such reports in detail. However, under the RF Criminal Procedure Code, not all electronic crime reports registered by the Departments of Internal Affairs meet the established requirements, and accordingly they cannot perform the function of a criminal procedural cause. In this situation, given the obvious relevance of electronic documents, there is a clear example of a contradiction and gap in the law, which somewhat hinders the development of electronic interaction between participants in criminal procedural activity and can cause negative consequences. The paper analyzes and compares the provisions of normative sources regulating the reception and consideration of electronic crime reports by the Departments of Internal Affairs of the Russian Federation with the norms of criminal procedural legislation. The author critically evaluates the legal definitions of the concept of a crime incident report and some organizational and legal mechanisms for accepting and considering electronic crime reports established by the departmental legal acts of the Ministry of Internal Affairs of the Russian Federation. The study highlights and clarifies the rules of filing, the mandatory requisites, and other requirements for electronic crime reports that must be complied with under the provisions of the Criminal Procedure Code. Based on the data obtained, the author offers recommendations to improve criminal procedural law and the algorithm for accepting electronic crime reports via the official websites of the Departments of the Ministry of Internal Affairs of the Russian Federation.
5

Zavyalov, Aleksandr A., and Dmitry A. Andreev. "Management of the radiotherapy quality control using automated Big Data processing." Health Care of the Russian Federation 64, no. 6 (December 30, 2020): 368–72. http://dx.doi.org/10.46563/0044-197x-2020-64-6-368-372.

Abstract:
Introduction. In Moscow, state-of-the-art information technologies for cancer care data processing are widely used in routine practice, and Data Science approaches are increasingly applied in radiation oncology. Novel arrays of radiotherapy performance indices can be introduced into real-time monitoring of cancer care quality and safety. Purpose of the study. To briefly review the critical structural elements of automated Big Data processing and its prospects for the organization of internal quality and safety control in radiation oncology departments. Material and methods. The PubMed (Medline) and E-Library databases were searched for articles published mainly in the last 2-3 years; in total, about 20 reports were selected. Results. The paper highlights the applicability of next-generation Data Science approaches to quality and safety assurance in radiation oncology units and considers the structural pillars of automated Big Data processing. Big Data processing technologies can facilitate improvements in quality management at any radiotherapy stage. At the same time, high standards of quality and integrity across the indices in the databases are crucial. Detailed dose data may also be linked to outcome and survival indices integrated into larger registries. Discussion. Radiotherapy quality control could be automated to some extent through the further introduction of information technologies that compare real-time quality measures with digital targets in terms of minimum norms and standards. Implementing automated systems that generate early electronic notifications and rapid alerts in cases of serious quality violation could drastically improve internal medical processes in local clinics. Conclusion. The role of Big Data tools in internal quality and safety control will increase dramatically over time.
6

Unwin, Elizabeth, James Codde, Louise Gill, Suzanne Stevens, and Timothy Nelson. "The WA Hospital Morbidity Data System: An Evaluation of its Performance and the Impact of Electronic Data Transfer." Health Information Management 26, no. 4 (December 1996): 189–92. http://dx.doi.org/10.1177/183335839702600407.

Abstract:
This paper evaluates the performance of the Hospital Morbidity Data System, maintained by the Health Statistics Branch (HSB) of the Health Department of Western Australia (WA). The time taken to process discharge summaries was compared in the first and second halves of 1995, using the number of weeks taken to process 90% of all discharges and the percentage of records processed within four weeks as indicators of throughput. Both the hospitals and the HSB showed improvements in timeliness during the second half of the year. The paper also examines the impact of a recently introduced electronic data transfer system for WA country public hospitals on the timeliness of morbidity data. The processing time of country hospital records by the HSB was reduced to a similar time as for metropolitan hospitals, but the processing time in the hospitals increased, resulting in little improvement in total processing time.
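The two throughput indicators used in this evaluation (the number of weeks taken to process 90% of discharges, and the percentage of records processed within four weeks) can be computed straightforwardly; the per-record processing times below are invented for illustration, not the WA data.

```python
def weeks_to_process(share, weeks_per_record):
    """Smallest number of weeks within which `share` of the
    records were processed (an empirical quantile)."""
    ordered = sorted(weeks_per_record)
    cutoff_index = max(0, int(round(share * len(ordered))) - 1)
    return ordered[cutoff_index]

def percent_within(limit_weeks, weeks_per_record):
    """Percentage of records processed within limit_weeks."""
    within = sum(1 for w in weeks_per_record if w <= limit_weeks)
    return 100.0 * within / len(weeks_per_record)

# Hypothetical per-record processing times, in weeks.
times = [1, 1, 2, 2, 3, 3, 4, 5, 6, 10]
p90 = weeks_to_process(0.90, times)   # weeks to process 90% of records
pct4 = percent_within(4, times)       # share processed within four weeks
```

Computed over the first and second halves of a year, the pair (p90, pct4) gives exactly the kind of before/after timeliness comparison the paper reports.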
7

Pan, Chen. "Quality Management System for Clinical Nutrition: On the processing of the Artificial Intelligence into Quality Assessment." Nutrition and Food Processing 04, no. 03 (May 26, 2021): 01–06. http://dx.doi.org/10.31579/2637-8914/038.

Abstract:
Objective: To critically evaluate the Quality Management System (QMS) for Clinical Nutrition (CN) in Jiangsu, monitor its performance in quality assessment and human resource management from a nutrition perspective, and investigate the application and development of Artificial Intelligence (AI) in medical quality control. Subjects: The study population comprised all staff of the 70 Clinical Nutrition Departments (CNDs) of tertiary hospitals in Jiangsu Province, China. These departments are all members of the Quality Management System for Clinical Nutrition in Jiangsu (QMSNJ). Methods: An online survey was conducted among all 341 employees of these CNDs based on staff information from the surveyed medical institutions. The questionnaire covered 5 aspects, while data analysis and AI evaluation focused on human resource information. Results: 330 questionnaires were collected, a response rate of 96.77%. A QMS for CN has been built for the CNDs in Jiangsu, which achieved its target of human resource improvement, especially among dietitians. The increasing number of participating departments (42.8%) and the significant growth in dietitians (p=0.02, t=-0.42) both reflect the advancement of the QMSNJ. Conclusion: As the first online platform for quality management in Jiangsu, the JPCNMP has been successfully implemented within the QMS. This multidimensional electronic system can help the QMSNJ and the CNDs conduct quality assessment from various aspects and thus realize the continuous improvement of clinical nutrition. The online platform, together with AI technology for quality assessment, is worth recommending and promoting in the future. Strengths: This is the first evaluation of the online QM platform since its implementation in daily disciplinary management among QMS in China. The research was designed to investigate the status of CNDs multidimensionally, with emphasis on human resource improvement after the design and application of the QMS. A clearer forecast of AI in medical quality assessment and disciplinary construction was achieved, while some modifications to human resource management are recommended to improve its efficiency and accuracy.
8

Gerner-Smidt, Peter, Lise Hansen, Anna Knudsen, K. Siboni, and I. Søgaard. "Epidemic spread of Acinetobacter calcoaceticus in a neurosurgical department analyzed by electronic data processing." Journal of Hospital Infection 6, no. 2 (June 1985): 166–74. http://dx.doi.org/10.1016/s0195-6701(85)80094-3.

9

Gerner-Smidt, P. "Epidemic spread of Acinetobacter calcoaceticus in a neurosurgical department analyzed by electronic data processing." Journal of Hospital Infection 6, no. 2 (June 1985): 166–74. http://dx.doi.org/10.1016/0195-6701(85)90008-8.

10

MacPhaul, Erin, Li Zhou, Stephen J. Mooney, Deborah Azrael, Andrew Bowen, Ali Rowhani-Rahbar, Ravali Yenduri, Catherine Barber, Eric Goralnick, and Matthew Miller. "Classifying Firearm Injury Intent in Electronic Hospital Records Using Natural Language Processing." JAMA Network Open 6, no. 4 (April 6, 2023): e235870. http://dx.doi.org/10.1001/jamanetworkopen.2023.5870.

Abstract:
Importance: International Classification of Diseases-coded hospital discharge data do not accurately reflect whether firearm injuries were caused by assault, unintentional injury, self-harm, legal intervention, or were of undetermined intent. Applying natural language processing (NLP) and machine learning (ML) techniques to electronic health record (EHR) narrative text could be associated with improved accuracy of firearm injury intent data.
Objective: To assess the accuracy with which an ML model identified firearm injury intent.
Design, Setting, and Participants: A cross-sectional retrospective EHR review was conducted at 3 level I trauma centers, 2 from health care institutions in Boston, Massachusetts, and 1 from Seattle, Washington, between January 1, 2000, and December 31, 2019; data analysis was performed from January 18, 2021, to August 22, 2022. A total of 1915 incident cases of firearm injury in patients presenting to emergency departments at the model development institution and 769 from the external validation institution with a firearm injury code assigned according to International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) or International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Clinical Modification (ICD-10-CM), in discharge data were included.
Exposures: Classification of firearm injury intent.
Main Outcomes and Measures: Intent classification accuracy by the NLP model was compared with ICD codes assigned by medical record coders in discharge data. The NLP model extracted intent-relevant features from narrative text that were then used by a gradient-boosting classifier to determine the intent of each firearm injury. Classification accuracy was evaluated against intent assigned by the research team. The model was further validated using an external data set.
Results: The NLP model was evaluated in 381 patients presenting with firearm injury at the model development site (mean [SD] age, 39.2 [13.0] years; 348 [91.3%] men) and 304 patients at the external validation site (mean [SD] age, 31.8 [14.8] years; 263 [86.5%] men). The model proved more accurate than medical record coders in assigning intent to firearm injuries at the model development site (accident F-score, 0.78 vs 0.40; assault F-score, 0.90 vs 0.78). The model maintained this improvement on an external validation set from a second institution (accident F-score, 0.64 vs 0.58; assault F-score, 0.88 vs 0.81). While the model showed some degradation between institutions, retraining it on data from the second institution further improved performance on that site's records (accident F-score, 0.75; assault F-score, 0.92).
Conclusions and Relevance: The findings of this study suggest that NLP ML can be used to improve the accuracy of firearm injury intent classification compared with ICD-coded discharge data, particularly for cases of accident and assault intents (the most prevalent and commonly misclassified intent types). Future research could refine this model using larger and more diverse data sets.
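The F-scores this study reports are the harmonic mean of precision and recall. A minimal sketch of that computation, using hypothetical confusion counts rather than the study's data:

```python
def f_score(tp, fp, fn):
    """F1 score from raw true-positive, false-positive, and
    false-negative counts for one intent class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for the 'assault' intent on a validation set:
# one set of assignments from the model, one from discharge-data codes.
model_f = f_score(tp=90, fp=10, fn=10)
coder_f = f_score(tp=70, fp=20, fn=30)
```

Comparing `model_f` against `coder_f` per intent class is the form of evaluation the abstract summarizes (e.g., assault F-score 0.90 vs 0.78 at the development site).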
11

Zörög, Zoltán, Tamás Csomós, and Csaba Szűcs. "ERP systems in higher education." Applied Studies in Agribusiness and Commerce 6, no. 3-4 (November 30, 2012): 103–9. http://dx.doi.org/10.19041/apstract/2012/3-4/14.

Abstract:
In the past few decades, data processing and in-company communication have changed significantly. At first, companies purchased only a few computers, so departments developed applications covering corporate administration, which led to so-called isolated solutions. These days, with the spread of electronic data processing, the greatest problem for companies is not gaining information (it can be found in all sorts of databases and data warehouses as internal or external information) but rather producing the information that is necessary in a given situation. What can help in this situation? Informatics, and more precisely ERP systems, which have replaced the software that provided isolated solutions at companies for decades. System-based thinking is important in their application, alongside the principle that only the data absolutely necessary for managerial decisions should be produced. This paper points out why we consider the practice-oriented teaching of ERP systems in higher education important.
12

Rakhmatullaev, Marat, and Uktam Karimov. "MODELS OF INTEGRATION OF INFORMATION SYSTEMS IN HIGHER EDUCATION INSTITUTIONS." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 5 (May 25, 2018): 420–29. http://dx.doi.org/10.17770/sie2018vol1.3308.

Abstract:
At present, many automated systems are being developed and implemented to support educational and research processes in universities. These systems often duplicate functions and databases, and there are also compatibility problems between them. The most common educational systems are those for creating electronic libraries, providing access to scientific and educational information, detecting plagiarism, testing knowledge, etc. This article examines models and solutions for integrating such educational automated systems as the information library system (ILS) and the anti-plagiarism system. Integration of the systems is based on the compatibility of their databases, or, more precisely, of the metadata of different information models. At the same time, Cloud technologies are used: a data processing approach in which computer resources are provided to the user of the integrated system as an online service. The ILS creates an e-library of graduation papers and dissertations on the main server. The MARC21 communication format is used in creating the electronic catalog, and database development is distributed across departments. The anti-plagiarism subsystem analyzes the full-text database for textual similarity (dissertations, diploma works, and others), identifies the percentage of coincidence, and creates a table of statistical information on text coincidence for each author and division, indicating similar fields. The integrated system was developed and tested at the Tashkent University of Information Technologies to work in the corporate mode of various units (faculties, departments, TUIT branches).
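A percentage-of-coincidence measure like the one the anti-plagiarism subsystem reports can be sketched with word shingles and Jaccard overlap; this is an illustrative stand-in, not the system's actual algorithm, and the sample texts are invented.

```python
def shingles(text, n=3):
    """Set of overlapping word n-grams (shingles) from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(doc_a, doc_b, n=3):
    """Jaccard overlap of the two shingle sets, reported as a
    coincidence percentage."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a and not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

thesis = "the integrated library system stores graduation papers in xml"
suspect = "the integrated library system stores graduation papers in marc"
score = similarity_percent(thesis, suspect)
```

Aggregating such scores per author and per division yields the kind of statistical coincidence table the abstract describes.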
13

Zheng, Xuying, Fang Miao, Piyachat Udomwong, and Nopasit Chakpitak. "Registered Data-Centered Lab Management System Based on Data Ownership Safety Architecture." Electronics 12, no. 8 (April 11, 2023): 1817. http://dx.doi.org/10.3390/electronics12081817.

Abstract:
University and college laboratories are important places to train professional and technical personnel. Various regulatory departments in colleges and universities still rely on traditional laboratory management in research projects, which are prone to problems such as untimely information and data transmission. The present study aimed to propose a new method to solve the problem of data islands, explicit ownership, conditional sharing, data safety, and efficiency during laboratory data management. Hence, this study aimed to develop a data-centered lab management system that enhances the safety of lab data management and allows the data owners of the labs to control data sharing with other users. The architecture ensures data privacy by binding data ownership with a person using a key management method. To achieve data flow safely, data ownership conversion through the process of authorization and confirmation was introduced. The designed lab management system enables laboratory regulatory departments to receive data in a secure form by using this platform, which could solve data sharing barriers. Finally, the proposed system was applied and run in different server environments by implementing data security registration, authorization, confirmation, and conditional sharing using SM2, SM4, RSA, and AES algorithms. The system was evaluated in terms of the execution time for several lab data with different sizes. The findings of this study indicate that the proposed strategy is safe and efficient for lab data sharing across domains.
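The paper binds data ownership to a person through key management built on SM2, SM4, RSA, and AES. Those algorithms are not in the Python standard library, so the sketch below substitutes HMAC-SHA256 purely to illustrate the idea of a key-bound ownership tag; it is not the paper's architecture.

```python
import hashlib
import hmac
import secrets

class OwnedRecord:
    """Lab data bound to its owner: only the holder of the owner's
    key can produce (or re-verify) the ownership tag."""

    def __init__(self, owner_key: bytes, payload: bytes):
        self.payload = payload
        # Ownership tag computed under the owner's secret key.
        self.tag = hmac.new(owner_key, payload, hashlib.sha256).digest()

    def verify_owner(self, claimed_key: bytes) -> bool:
        expected = hmac.new(claimed_key, self.payload,
                            hashlib.sha256).digest()
        # Constant-time comparison to avoid timing leaks.
        return hmac.compare_digest(self.tag, expected)

owner_key = secrets.token_bytes(32)
other_key = secrets.token_bytes(32)
record = OwnedRecord(owner_key, b"experiment log: reagent batch 42")
```

In the paper's scheme, authorization and confirmation steps would convert such ownership before data flows to a regulatory department; here only the binding step is shown.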
14

Masud, Jakir Hossain Bhuiyan, Chiang Shun, Chen-Cheng Kuo, Md Mohaimenul Islam, Chih-Yang Yeh, Hsuan-Chia Yang, and Ming-Chin Lin. "Deep-ADCA: Development and Validation of Deep Learning Model for Automated Diagnosis Code Assignment Using Clinical Notes in Electronic Medical Records." Journal of Personalized Medicine 12, no. 5 (April 28, 2022): 707. http://dx.doi.org/10.3390/jpm12050707.

Abstract:
Currently, the International Classification of Diseases (ICD) codes are being used to improve clinical, financial, and administrative performance. Inaccurate ICD coding can lower the quality of care, and delay or prevent reimbursement. However, selecting the appropriate ICD code from a patient’s clinical history is time-consuming and requires expert knowledge. The rapid spread of electronic medical records (EMRs) has generated a large amount of clinical data and provides an opportunity to predict ICD codes using deep learning models. The main objective of this study was to use a deep learning-based natural language processing (NLP) model to accurately predict ICD-10 codes, which could help providers to make better clinical decisions and improve their level of service. We retrospectively collected clinical notes from five outpatient departments (OPD) from one university teaching hospital between January 2016 and December 2016. We applied NLP techniques, including global vectors, word to vectors, and embedding techniques to process the data. The dataset was split into two independent training and testing datasets consisting of 90% and 10% of the entire dataset, respectively. A convolutional neural network (CNN) model was developed, and the performance was measured using the precision, recall, and F-score. A total of 21,953 medical records were collected from 5016 patients. The performance of the CNN model for the five different departments was clinically satisfactory (Precision: 0.50~0.69 and recall: 0.78~0.91). However, the CNN model achieved the best performance for the cardiology department, with a precision of 69%, a recall of 89% and an F-score of 78%. The CNN model for predicting ICD-10 codes provides an opportunity to improve the quality of care. Implementing this model in real-world clinical settings could reduce the manual coding workload, enhance the efficiency of clinical coding, and support physicians in making better clinical decisions.
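The paper's model is a CNN over learned word embeddings; as a much simpler illustration of automated code assignment from note text, here is a bag-of-words nearest-neighbor sketch. The toy notes and ICD-10 codes are hypothetical, and this is not the authors' model.

```python
import math
from collections import Counter

# Hypothetical mini "training" notes, each labelled with an ICD-10 code.
TRAIN = [
    ("chest pain palpitations irregular heartbeat", "I49"),
    ("shortness of breath heart failure edema", "I50"),
    ("persistent cough wheezing asthma attack", "J45"),
]

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_code(note):
    """Assign the code of the most similar training note."""
    vec = vectorize(note)
    return max(TRAIN, key=lambda item: cosine(vec, vectorize(item[0])))[1]

code = predict_code("patient reports palpitations and chest pain")
```

A real system, like the one described, replaces the count vectors with learned embeddings and the nearest-neighbor rule with a trained CNN, but the input/output contract (note text in, ICD-10 code out) is the same.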
15

Liang, Rui, and Gaoqing Ji. "Vehicle Detection Algorithm Based on Embedded Video Image Processing in the Background of Information Technology." Journal of Electrical and Computer Engineering 2022 (April 14, 2022): 1–10. http://dx.doi.org/10.1155/2022/6917421.

Abstract:
As the main means of transportation for urban residents, motor vehicles are increasing in number year by year. With the continuous development of society and the gradual improvement of people's quality of life, automobiles have gradually become an indispensable means of transportation, resulting in increased traffic flow. However, the old traffic system cannot cope with the rapid growth of traffic pressure, and congestion and accidents occur frequently, which is a huge test for contemporary intelligent traffic systems. As society develops, more and more researchers are devoting themselves to intelligent transportation systems, which has accelerated the development of target detection technology based on video image processing. The primary problem in embedding digital video in applications is that the complexity of video encoding and decoding far exceeds that of simple image and audio compression and decompression. Digital video can take various forms and formats, so developers need to support complex configurations and aspects such as different resolutions and display sizes, different bit rates, real-time requirements, and even the reliability of the video source. Intelligent transportation research has achieved many results, but deficiencies remain in precision and robustness. At the same time, improvements in video image processing technology offer a new approach. To further improve intelligent traffic systems, provide accurate data for all departments, and improve traffic conditions, this study applies video image processing technology, combined with the three-frame difference algorithm, to data on illegal parking at a particular intersection. The calculated false detection rates for Y2 are 1.1%, 0.9%, and 2.4%, and the leakage rates for Y1 are 2.4%, 1.9%, and 4.7%, respectively. This shows that the algorithm achieves high accuracy for vehicle parking detection and can collect information quickly and effectively. Applying the algorithm to the detection of other vehicles can provide efficient services for the relevant traffic and public security departments and relieve traffic pressure. Image processing technology analyzes and processes images with computer techniques to achieve the desired results. The scheme in the article realizes background extraction, image filtering, image binarization, morphological transformation, vehicle detection and segmentation, shadow detection, etc.
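The three-frame difference algorithm named in this abstract can be demonstrated on toy one-row "frames"; the pixel values and threshold below are illustrative, not the paper's parameters.

```python
def frame_diff(f1, f2, thresh):
    """Binary map of pixels whose absolute change exceeds thresh."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(f1, f2)]

def three_frame_moving_mask(prev, curr, nxt, thresh=20):
    """Three-frame difference: a pixel counts as 'moving' in the
    current frame only if it changed both from prev to curr and
    from curr to next (logical AND of the two difference maps)."""
    d1 = frame_diff(prev, curr, thresh)
    d2 = frame_diff(curr, nxt, thresh)
    return [[a & b for a, b in zip(r1, r2)] for r1, r2 in zip(d1, d2)]

# 1x4 toy frames: a bright object moves left to right, one pixel per frame.
prev = [[200, 10, 10, 10]]
curr = [[10, 200, 10, 10]]
nxt  = [[10, 10, 200, 10]]
mask = three_frame_moving_mask(prev, curr, nxt)
```

The AND of the two pairwise differences suppresses the "ghost" left at the object's old position, which is the advantage of three-frame differencing over simple two-frame differencing; a stationary (illegally parked) vehicle produces an empty mask and can then be flagged by its persistent foreground presence.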
16

Musaev, Mukhammadjon, Marat Rakhmatullaev, Sherbek Normatov, Kamoliddin Shukurov, and Malika Abdullaeva. "INTEGRATED INTELLIGENT SYSTEM FOR SCIENTIFIC AND EDUCATIONAL INFORMATION RETRIEVAL." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 22, 2024): 212–19. http://dx.doi.org/10.17770/etr2024vol2.8028.

Abstract:
The relevance of creating information systems using artificial intelligence methods and tools is dictated by the following reasons: the volume of scientific and educational information is growing, and traditional information retrieval methods have exhausted themselves. Using only deterministic and iterative methods with rigid algorithms does not give the expected results; such methods require more time to process information and more memory. Significant progress in recent years in the development of artificial intelligence (AI) methods and systems gives hope that their use will significantly reduce the time needed to search for data for scientific research and educational activities. The aim of the research presented in the article is to increase the efficiency of scientific and educational information retrieval based on AI methods implemented in the integrated intelligent information system "SMART TUIT". The article presents the results of theoretical and applied research obtained by several departments of the Tashkent University of Information Technologies (TUIT) in solving the following tasks: voice recognition for subsequent processing; pattern recognition to identify users of information; search and processing of scientific and educational resources in electronic libraries; analysis of the information needs of users depending on their level of competence and type of activity; evaluation of scientific and educational information to identify the most important data sources; and a geoinformation system to solve problems of locating information sources. Initially, each research area in the departments was aimed at solving a certain class of problems related to medicine, linguistics, electronic libraries, corporate networks, information security systems, etc. The TUIT creative group decided to combine efforts and apply the results obtained to the important problem of intellectualizing the search for sources of scientific and educational information within a large amount of data.
17

Riches, S. T., C. Johnston, M. Sousa, and P. Grant. "High Temperature Endurance of Packaged SOI Devices for Signal Conditioning and Processing Applications." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2011, HITEN (January 1, 2011): 000251–54. http://dx.doi.org/10.4071/hiten-paper8-sriches.

Full text
Abstract:
Silicon on Insulator (SOI) device technology is fulfilling a niche requirement for electronics that functions satisfactorily at operating temperatures of >200°C. Most of the reliability data on the high temperature endurance of the devices is generated on the device itself with little attention being paid to the packaging technology around the device. Similarly, most of the reliability data generated on high temperature packaging technologies uses testpieces rather than real devices, which restricts any conclusions on long term electrical performance. This paper presents results of high temperature endurance studies on SOI devices combined with high temperature packaging technologies relevant to signal conditioning and processing functions for sensors in down-well and aero-engine applications. The endurance studies have been carried out for up to 7,056 hours at 250°C, with functioning devices being tested periodically at room temperature, 125°C and 250°C. Different die attach and wire bond options have been included in the study and the performance of multiplexers, transistors, bandgap voltage, oscillators and voltage regulators functional blocks have been characterised. This work formed part of the UPTEMP project which was set-up with support from UK Technology Strategy Board and the EPSRC. The project brought together a consortium of end-users (Sondex Wireline and Vibro-Meter UK), electronic module manufacturers (GE Aviation Systems Newmarket) and material suppliers (Gwent Electronic Materials and Thermastrate Ltd) with Oxford University-Materials Department, the leading UK high temperature electronics research centre.
APA, Harvard, Vancouver, ISO, and other styles
18

Ding, Jianwei. "Case Investigation Technology Based on Artificial Intelligence Data Processing." Journal of Sensors 2021 (October 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/4942657.

Full text
Abstract:
Through data mining technology, the hidden information behind a large amount of data is discovered, which can help various management services and provide a scientific basis for leadership decision-making. It is an important subject of current police information research. This paper conducts in-depth research on the investigation, analysis, and decision-making of public security cases and proposes a case-based reasoning model built on two case databases. Moreover, this paper discusses in detail the use of data mining technology to automatically establish a case database, which is a useful exploration and practice for public security departments establishing a new and efficient auxiliary decision-making system for case investigation. In addition, this paper studies the method of using data mining technology to assist in the establishment of a case database, analyzes the characteristics of traditional case storage methods, and constructs a case investigation model based on artificial intelligence data processing. The research results show that the model constructed in this paper has practical effect.
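The case-based reasoning step described in the abstract above retrieves the past cases most similar to a new one. A minimal sketch of such retrieval, assuming invented case attributes (the paper's actual features are not specified here):

```python
def similarity(a: dict, b: dict) -> float:
    """Fraction of attribute values shared by two cases."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, query, k=1):
    """Return the k most similar past cases (the retrieval step of CBR)."""
    return sorted(case_base, key=lambda c: similarity(c, query), reverse=True)[:k]

# Invented toy case database of past investigations
cases = [
    {"scene": "residence", "time": "night", "entry": "window", "solved_by": "prints"},
    {"scene": "shop", "time": "day", "entry": "door", "solved_by": "cctv"},
    {"scene": "residence", "time": "night", "entry": "door", "solved_by": "witness"},
]
query = {"scene": "residence", "time": "night", "entry": "window"}
print(retrieve(cases, query)[0]["solved_by"])  # prints
```

The most similar stored case then suggests which investigative approach worked before.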
APA, Harvard, Vancouver, ISO, and other styles
19

Stanberry, Ben. "The legal and ethical aspects of telemedicine. 2: Data protection, security and European law." Journal of Telemedicine and Telecare 4, no. 1 (March 1, 1998): 18–24. http://dx.doi.org/10.1258/1357633981931236.

Full text
Abstract:
The electronic record may be subject to abuses that can be carried out on a large scale and cause great damage. A wide range of data protection and information security measures will need to be taken to ensure the quality and integrity of such records. A European Union directive was formally adopted in 1995 which sets the obligations of those responsible for data processing as well as a number of important rights for individuals. The responsible teleconsultant or medical officer, as the data controller, must make sure these measures are enforced. In the case of the transmission of medical records to another location, the original data controller may remain liable for abuses. But as different elements of the records are spread throughout the different departments of a hospital or across different geographical locations, it may become difficult to ascertain who is responsible for protecting and controlling what. To this end, the designation of liability by contractual means, between the hospitals and remote users of a telemedicine network, would be the clearest and most straightforward way of achieving uniformity and predictability in terms of the distribution of responsibility for data protection and security.
APA, Harvard, Vancouver, ISO, and other styles
20

Li, Juan. "Application of Intelligent Archives Management Based on Data Mining in Hospital Archives Management." Journal of Electrical and Computer Engineering 2022 (April 7, 2022): 1–13. http://dx.doi.org/10.1155/2022/6217328.

Full text
Abstract:
Data mining belongs to knowledge discovery, which is the process of revealing implicit, unknown, and valuable information from a large amount of fuzzy application data. The potential information revealed by data mining can help decision makers adjust market strategies and reduce market risks. The information excavated must be real and not universally known, and it can be the discovery of a specific problem. Data mining algorithms mainly include the neural network method, decision tree method, genetic algorithm, rough set method, fuzzy set method, association rule method, and so on. Archives management, also known as archive work, is the general term for the various business activities in which archivists directly manage archive entities and archive information and provide utilization services; it is also the most basic part of national archives work. Hospital archives are an important part of hospital management: they are the accumulation of work experience and one of the important elements in building a modern hospital. Hospital archives include documents, work records, charts, audio recordings, videos, photos, and other documentary, audio-visual, and physical materials, such as certificates, trophies, and medals obtained by hospitals, departments, and individuals. The purpose of this paper is to study the application of intelligent archives management based on data mining in hospital archives management, expecting to use existing data mining technology to improve current hospital archives management. This paper investigates the age and educational background of hospital archives management workers and explores the relationship between these factors and the quality of archives management. Based on the decision tree algorithm, the hospital archive data in the database are classified and analyzed to improve the system's data processing capability.
The experimental results of this paper show that among the staff working in the hospital's archives management department, 20-to-30-year-olds account for 46.2% of the total group; the staff of this department thus tend to be younger. Among staff under the age of 30, the file pass rate was 98.3% and the failure rate 1.7%. Among staff over 50 years old, the file pass rate was 99.9% and the failure rate 0.1%. According to the data, performance on the job is related to the employee's experience.
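Decision-tree methods of the kind listed in the abstract above choose splits by information gain, the reduction in label entropy. A minimal sketch of that computation, with invented archive records (hypothetical attributes, not the study's data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(rows, attr, label):
    """Expected entropy reduction from splitting rows on attr."""
    base = entropy([r[label] for r in rows])
    split = Counter(r[attr] for r in rows)
    remainder = sum(
        (n / len(rows)) * entropy([r[label] for r in rows if r[attr] == v])
        for v, n in split.items()
    )
    return base - remainder

# Invented archive records: does document age predict archive category?
rows = [
    {"age": "old", "category": "permanent"},
    {"age": "old", "category": "permanent"},
    {"age": "new", "category": "temporary"},
    {"age": "new", "category": "permanent"},
]
print(round(information_gain(rows, "age", "category"), 3))  # 0.311
```

A decision-tree learner would split on whichever attribute yields the highest gain, then recurse on each branch.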
APA, Harvard, Vancouver, ISO, and other styles
21

Gagesch, Michael, Karin Edler, Patricia O. Chocano-Bedoya, Lauren A. Abderhalden, Laurence Seematter-Bagnoud, Tobias Meyer, Dominic Bertschi, et al. "Swiss Frailty Network and Repository: protocol of a Swiss Personalized Health Network’s driver project observational study." BMJ Open 11, no. 7 (July 2021): e047429. http://dx.doi.org/10.1136/bmjopen-2020-047429.

Full text
Abstract:
Introduction: Early identification of frailty by clinical instruments or accumulation-of-deficit indexes can contribute to improved healthcare for older adults, including the prevention of negative outcomes in acute care. However, conflicting evidence exists on how to best capture frailty in this setting. Simultaneously, the increasing utilisation of electronic health records (EHRs) opens up new possibilities for research and patient care, including frailty. Methods and analysis: The Swiss Frailty Network and Repository (SFNR) primarily aims to develop an electronic Frailty Index (eFI) from routinely available EHR data in order to investigate its predictive value against length of stay and in-hospital mortality as two important clinical outcomes in a study sample of 1000–1500 hospital patients aged 65 years and older. In addition, we will examine the correlation between the eFI and a test-based clinical Frailty Instrument to compare both concepts in Swiss older adults in acute care settings. As a Swiss Personalized Health Network (SPHN) driver project, our study will report on the characteristics and usability of the first nationwide eFI in Switzerland, connecting all five Swiss University Hospitals' Geriatric Departments with a representative sample of patients aged 65 years and older admitted to acute care. Ethics and dissemination: The study protocol was approved by the competent ethics committee of the Canton of Zurich (BASEC-ID 2019-00445). All acquired data will be handled according to SPHN's ethical framework for responsible data processing in personalised health research. Analyses will be performed within the secure BioMedIT environment, a national infrastructure to enable secure biomedical data processing and an integral part of SPHN. Trial registration number: NCT04516642.
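A frailty index of the accumulation-of-deficits type mentioned above is conventionally the ratio of deficits present to deficits considered. A minimal sketch, with invented deficit flags (the study's actual eFI variables are not specified here):

```python
def frailty_index(deficits: dict) -> float:
    """Accumulation-of-deficits frailty index: share of deficits present."""
    if not deficits:
        raise ValueError("at least one deficit variable is required")
    return sum(deficits.values()) / len(deficits)

# Hypothetical EHR-derived deficit flags for one patient
patient = {
    "polypharmacy": True,
    "impaired_mobility": True,
    "cognitive_decline": False,
    "weight_loss": False,
    "incontinence": False,
}
print(round(frailty_index(patient), 2))  # 2 of 5 deficits -> 0.4
```

Real eFIs typically draw on 30 or more routinely recorded deficits; the principle is the same ratio.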
APA, Harvard, Vancouver, ISO, and other styles
22

Caskey, John, Iain L. McConnell, Madeline Oguss, Dmitriy Dligach, Rachel Kulikoff, Brittany Grogan, Crystal Gibson, et al. "Identifying COVID-19 Outbreaks From Contact-Tracing Interview Forms for Public Health Departments: Development of a Natural Language Processing Pipeline." JMIR Public Health and Surveillance 8, no. 3 (March 8, 2022): e36119. http://dx.doi.org/10.2196/36119.

Full text
Abstract:
Background: In Wisconsin, COVID-19 case interview forms contain free-text fields that need to be mined to identify potential outbreaks for targeted policy making. We developed an automated pipeline to ingest the free text into a pretrained neural language model to identify businesses and facilities as outbreaks. Objective: We aimed to examine the precision and recall of our natural language processing pipeline against existing outbreaks and potentially new clusters. Methods: Data on cases of COVID-19 were extracted from the Wisconsin Electronic Disease Surveillance System (WEDSS) for Dane County between July 1, 2020, and June 30, 2021. Features from the case interview forms were fed into a Bidirectional Encoder Representations from Transformers (BERT) model that was fine-tuned for named entity recognition (NER). We also developed a novel location-mapping tool to provide addresses for relevant NER. Precision and recall were measured against manually verified outbreaks and valid addresses in WEDSS. Results: There were 46,798 cases of COVID-19, with 4,183,273 total BERT tokens and 15,051 unique tokens. The recall and precision of the NER tool were 0.67 (95% CI 0.66-0.68) and 0.55 (95% CI 0.54-0.57), respectively. For the location-mapping tool, the recall and precision were 0.93 (95% CI 0.92-0.95) and 0.93 (95% CI 0.92-0.95), respectively. Across monthly intervals, the NER tool identified more potential clusters than were verified in WEDSS. Conclusions: We developed a novel pipeline of tools that identified existing outbreaks and novel clusters with associated addresses. Our pipeline ingests data from a statewide database and may be deployed to assist local health departments for targeted interventions.
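The precision and recall figures reported above follow from raw true-positive, false-positive, and false-negative counts. A minimal sketch (the counts below are invented, chosen only to reproduce the reported NER scores of 0.55 precision and 0.67 recall):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical evaluation counts for a named-entity recognition tool
p, r = precision_recall(tp=55, fp=45, fn=27)
print(round(p, 2), round(r, 2))  # 0.55 0.67
```

Precision penalizes spurious entity detections; recall penalizes missed ones, which is why the two are reported together.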
APA, Harvard, Vancouver, ISO, and other styles
23

Nowakowski, S., J. Razjouyan, A. D. Naik, R. Agrawal, K. Velamuri, S. Singh, and A. Sharafkhaneh. "1180 The Use Of Natural Language Processing To Extract Data From Psg Sleep Study Reports Using National Vha Electronic Medical Record Data." Sleep 43, Supplement_1 (April 2020): A450—A451. http://dx.doi.org/10.1093/sleep/zsaa056.1174.

Full text
Abstract:
Introduction: In 2007, Congress asked the Department of Veterans Affairs to pay closer attention to the incidence of sleep disorders among veterans. We aimed to use natural language processing (NLP), a method that applies algorithms to understand the meaning and structure of sentences within Electronic Health Record (EHR) patient free-text notes, to identify the number of attended polysomnography (PSG) studies conducted in the Veterans Health Administration (VHA) and to evaluate the performance of NLP in extracting sleep data from the notes. Methods: We identified 481,115 sleep studies using CPT code 95810 from 2000-19 in the national VHA. We used a rule-based regular expression method (phrases: “sleep stage” and “arousal index”) to identify attended PSG reports in the patient free-text notes in the EHR, of which 69,847 records met the rule-based criteria. We randomly selected 178 notes to compare the accuracy of the algorithm in mining sleep parameters, total sleep time (TST), sleep efficiency (SE), and sleep onset latency (SOL), against human manual chart review. Results: The number of documented PSG studies increased each year, from 963 in 2000 to 14,209 in 2018. System performance of NLP against the manually annotated reference standard in detecting sleep parameters was 83% for TST, 87% for SE, and 81% for SOL (accuracy benchmark ≥ 80%). Conclusion: This study showed that NLP is a useful technique to mine EHRs and extract data from patients' free-text notes. Reasons that NLP was not 100% accurate included note authors using different phrasing (e.g., “recording duration”), which the NLP algorithm did not detect or extract, and authors omitting sleep continuity variables from the notes. Nevertheless, this automated strategy to identify and extract sleep data can serve as an effective tool in large health care systems for research and evaluation to improve sleep medicine patient care and outcomes.
Support: This material is based upon work supported in part by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, and the Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). Dr. Nowakowski is also supported by a National Institutes of Health (NIH) grant (R01NR018342).
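Rule-based extraction of the kind the abstract describes can be sketched with regular expressions. The patterns, parameter phrasings, and sample note below are invented for illustration, not the study's actual rules:

```python
import re

# Hypothetical patterns for sleep parameters in free-text PSG reports
PATTERNS = {
    "TST": re.compile(r"total sleep time[:\s]+(\d+(?:\.\d+)?)\s*min", re.I),
    "SE":  re.compile(r"sleep efficiency[:\s]+(\d+(?:\.\d+)?)\s*%", re.I),
    "SOL": re.compile(r"sleep onset latency[:\s]+(\d+(?:\.\d+)?)\s*min", re.I),
}

def extract_sleep_parameters(note: str) -> dict:
    """Return any sleep parameters found in a free-text note."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(note)
        if match:
            found[name] = float(match.group(1))
    return found

note = ("Attended PSG. Total sleep time: 412 min. "
        "Sleep efficiency: 87.5 %. Sleep onset latency: 18 min.")
print(extract_sleep_parameters(note))  # {'TST': 412.0, 'SE': 87.5, 'SOL': 18.0}
```

Notes using unanticipated phrasing (e.g., “recording duration” instead of “total sleep time”) simply fail to match, which is exactly the accuracy limitation the authors report.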
APA, Harvard, Vancouver, ISO, and other styles
24

Gimadeev, Sh M., A. I. Latypov, S. V. Radchenko, and D. F. Khaziakhmetov. "The effect of hospital information systems on healthcare facilities efficiency indicators." Kazan medical journal 96, no. 2 (April 15, 2015): 227–33. http://dx.doi.org/10.17750/kmj2015-227.

Full text
Abstract:
Aim. Comparative assessment of the influence of automation on labor input and on productivity indicators of business processes related to the primary functions of healthcare facilities of different types. Methods. We timed medical personnel's work in emergency rooms, as well as medical record keeping in clinical departments. Automated processing of electronic health records was also measured during operation of hospital information systems created by the authors in different types of healthcare facilities. Output data included values of personal health record operation periods and system event timestamps. Results. Data were obtained concerning the influence of hospital information systems on changes in electronic health record operating time and on hospitalization delays. A correlation between the initial hospitalization delay and hospital capacity was discovered (r=0.917). Emergency room automation significantly reduces hospitalization delays. Under clinical information system operating conditions, the time to record the primary examination doubles, while the time spent on all other electronic health records decreases by a greater amount. The considerable difference between primary examination recording time and the time necessary for other personal health record entries is satisfactorily interpreted within the heterogeneous medical data source integration model, but not within the usability model. In general, the data obtained do not confirm the results of previously published research indicating increased time spent by doctors on data management under automation. Conclusion. Hospital information system implementation improved specialists' labor productivity and the capacity of the main working processes. The obtained data indicate a greater influence of automation in large healthcare facilities and reject the usability hypothesis of hospital information system efficiency.
APA, Harvard, Vancouver, ISO, and other styles
25

Kuch, R., H. U. Prokosch, J. Dudeck, and T. Bürkle. "Stepwise Evaluation of Information Systems in an University Hospital." Methods of Information in Medicine 38, no. 01 (1999): 09–15. http://dx.doi.org/10.1055/s-0038-1634150.

Full text
Abstract:
A prospective intervention study with historical control has been performed at Giessen University Hospital, Germany, to investigate the influence of electronic data processing systems on nurses' working environment. Two wards of the medical department were selected for this study, using a combined approach of work-sampling methods and questionnaires. In the first intervention, a central information system with restricted functions was introduced. For the second intervention, an additional nursing information system was installed. The distribution of nurses' worktime across general nursing care, specific nursing care, and administrative activities was not influenced by electronic data processing, and no time saving could be measured. Results of the questionnaires did, however, indicate a positive influence of the hospital information system on nurses' working environment.
APA, Harvard, Vancouver, ISO, and other styles
26

Padirayon, Lourdes M., Melvin S. Atayan, Jose Sherief Panelo, and Carlito R. Fagela, Jr. "Mining the crime data using naïve Bayes model." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 2 (August 1, 2021): 1084. http://dx.doi.org/10.11591/ijeecs.v23.i2.pp1084-1092.

Full text
Abstract:
<p>A massive number of documents on crime is handled by police departments worldwide, and today's criminals are becoming technologically sophisticated. One obstacle faced by law enforcement is the complexity of processing voluminous crime data. Approximately 439 crimes have been registered in the Sanchez Mira municipality in the past seven years, yet police officers have no clear view of the pattern of crimes in the municipality, the peak hours and months of commission, or the locations where crimes are concentrated. The naïve Bayes model, a classification algorithm, was applied through the RapidMiner Auto Model to analyze the crime data set. This approach helps to recognize crime trends: most of the crimes committed were violations of special penal laws; May had the highest counts for both index and non-index crimes, with Tuesday as the peak day; and the hotspots were Barangay Centro 1 for non-index crimes and Barangay Centro 2 for index crimes. Among index crimes, rape was recorded most frequently, usually occurring at 2 o'clock in the afternoon. These findings can support decisions that maximize the efficacy of crime solutions.</p>
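A categorical naïve Bayes classifier of the kind applied in the study above can be sketched from scratch; the toy crime records and features below are invented for illustration (the study used RapidMiner's implementation, not this code):

```python
from collections import Counter, defaultdict

def train_nb(records, label_key):
    """Fit a categorical naive Bayes model: class priors and value counts."""
    labels = Counter(r[label_key] for r in records)
    counts = defaultdict(Counter)  # (label, feature) -> value counts
    for r in records:
        for k, v in r.items():
            if k != label_key:
                counts[(r[label_key], k)][v] += 1
    return labels, counts

def predict_nb(model, features):
    """Pick the label maximizing prior * product of smoothed likelihoods."""
    labels, counts = model
    total = sum(labels.values())
    best, best_p = None, 0.0
    for label, n in labels.items():
        p = n / total
        for k, v in features.items():
            vc = counts[(label, k)]
            p *= (vc[v] + 1) / (n + len(set(vc) | {v}))  # Laplace-style smoothing
        if p > best_p:
            best, best_p = label, p
    return best

# Invented toy records: predict crime type from day and hour band
data = [
    {"day": "Tue", "hour": "afternoon", "type": "index"},
    {"day": "Tue", "hour": "afternoon", "type": "index"},
    {"day": "Sat", "hour": "night", "type": "non-index"},
    {"day": "Sun", "hour": "night", "type": "non-index"},
]
model = train_nb(data, "type")
print(predict_nb(model, {"day": "Tue", "hour": "afternoon"}))  # index
```

The smoothing term keeps unseen feature values from zeroing out a class probability, which matters for sparse crime data.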
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Si, Shan Gu, and Haiying Liu. "Cloud Application in the Construction of English Virtual Teaching Resources Based on Digital Three-Dimensional Technology." Wireless Communications and Mobile Computing 2022 (March 19, 2022): 1–11. http://dx.doi.org/10.1155/2022/3725366.

Full text
Abstract:
In order to improve the sharing of English teaching resources, this paper constructs a cloud-based English virtual teaching resource database system built on digital three-dimensional technology. The paper uses signal recognition technology to digitally process English teaching resources and establishes a dynamic, changeable three-tier indicator system. In addition, it collects and summarizes, in real time, the state data generated in the teaching operations of the various departments of teaching management, and analyzes and processes these data scientifically according to the management objectives to obtain valuable information. Finally, the paper digitally processes English teaching resources into system-recognizable text to realize platform resource sharing.
APA, Harvard, Vancouver, ISO, and other styles
28

Aliabadi, Ali, Abbas Sheikhtaheri, and Hossein Ansari. "Electronic health record–based disease surveillance systems: A systematic literature review on challenges and solutions." Journal of the American Medical Informatics Association 27, no. 12 (September 14, 2020): 1977–86. http://dx.doi.org/10.1093/jamia/ocaa186.

Full text
Abstract:
Objective: Disease surveillance systems are expanding using electronic health records (EHRs). However, there are many challenges in this regard. In the present study, the solutions and challenges of implementing EHR-based disease surveillance systems (EHR-DS) have been reviewed. Materials and Methods: We searched the related keywords in ProQuest, PubMed, Web of Science, Cochrane Library, Embase, and Scopus. Then, we assessed and selected articles using the inclusion and exclusion criteria and, finally, classified the identified solutions and challenges. Results: Finally, 50 studies were included, and 52 unique solutions and 47 challenges were organized into 6 main themes (policy and regulatory, technical, management, standardization, financial, and data quality). The results indicate that due to the multifaceted nature of the challenges, the implementation of EHR-DS is neither low cost nor easy and requires a variety of interventions. On the one hand, the most common challenges include the need to invest significant time and resources; the poor data quality in EHRs; difficulty in analyzing, cleaning, and accessing unstructured data; data privacy and security; and the lack of interoperability standards. On the other hand, the most common solutions are the use of natural language processing and machine learning algorithms for unstructured data; the use of appropriate technical solutions for data retrieval, extraction, identification, and visualization; the collaboration of health and clinical departments to access data; standardizing EHR content for public health; and using a unique health identifier for individuals. Conclusions: EHR systems have an important role in modernizing disease surveillance systems. However, there are many problems and challenges facing the development and implementation of EHR-DS that need to be appropriately addressed.
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Yang, Zhaoxiang Yu, and Hua Sun. "Treatment Effect of Type 2 Diabetes Patients in Outpatient Department Based on Blockchain Electronic Mobile Medical App." Journal of Healthcare Engineering 2021 (March 1, 2021): 1–12. http://dx.doi.org/10.1155/2021/6693810.

Full text
Abstract:
As the pace of people's lives accelerates, there are more and more diabetic patients. This research mainly explores the treatment effect on type 2 diabetic patients of a blockchain-based electronic mobile medical app. Considering that an off-chain storage solution is more realistic, the blockchain-based medical data sharing platform in this study adopts one: only key information is stored in the blockchain network, while all medical data are kept in cloud storage, which uses Alibaba Cloud's OSS service and can be expanded without limit. The cloud operation module is responsible for all operations that interact with cloud storage; the chain code can call it to upload the user's encrypted medical data and user ID to Alibaba Cloud's OSS. The storage address of the medical data and the authorized access address returned by the chain code are then sent to the blockchain network for on-chain consensus. The message processing module provides functions such as chat message handling, app use reminders, and health tips. The indicator recording module covers six indicators: blood sugar, medication, diet, weight, exercise, and sleep. The main function of the indicator analysis module is to display the trend curves of the six recorded indicators over three days, one week, and one month. Comparing the change in mean glycosylated hemoglobin between the beginning and end of the study in the two groups, the change in the intervention group was −6.04%, while that in the control group was only −3.26%. The impact of the mobile medical app designed in this study is thus reflected in patients' blood sugar control, helping them better control blood sugar.
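One common way to implement the off-chain pattern described above is to keep the full record in cloud storage and submit only its storage address and cryptographic digest to the chain; anyone retrieving the record can then verify it against the on-chain hash. A minimal sketch (record fields and the OSS-style address are hypothetical):

```python
import hashlib
import json

def prepare_record(record: dict, storage_address: str) -> dict:
    """Off-chain pattern: the full record goes to cloud storage; only its
    hash and storage address are submitted to the blockchain for consensus."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "address": storage_address,
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

def verify_record(record: dict, on_chain: dict) -> bool:
    """Check a retrieved record against its on-chain digest."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == on_chain["sha256"]

# Hypothetical glucose reading stored off-chain
reading = {"patient": "p-001", "metric": "blood_sugar", "value": 6.2}
entry = prepare_record(reading, "oss://bucket/p-001/2021-03-01")
print(verify_record(reading, entry))  # True
```

Tampering with the stored record changes its digest, so verification against the immutable on-chain entry fails; this is what lets the chain guarantee integrity without holding the (bulky, private) medical data itself.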
APA, Harvard, Vancouver, ISO, and other styles
30

Imai, Randy. "Electronic Wildlife Recovery Tool." International Oil Spill Conference Proceedings 2017, no. 1 (May 1, 2017): 914–23. http://dx.doi.org/10.7901/2169-3358-2017.1.914.

Full text
Abstract:
Oil spills can have significant impacts on wildlife. Documenting the spatial and temporal data associated with oil spills is an important component that aids all phases of the response. After struggling long hours to incorporate hardcopy records into a Geographic Information System (GIS), the California Department of Fish and Wildlife, Office of Spill Prevention and Response (OSPR) recognized the importance of developing a wildlife recovery application specifically designed for the Wildlife Branch within the Incident Command System (ICS). The Wildlife Recovery Application (WRA) is an iOS-based program designed to work optimally on an iPhone. The objective of the application was to keep it simple, intuitive, reliable, and effective. The WRA can be used with minimal training and can operate in environments without cellular service. The interface permits the user to visually review the data and photographs and to transmit the information electronically to the GIS Unit remotely once cell service or wireless internet has been established. Once the data are transmitted to the Incident Command Post (ICP), the information can be quickly integrated into a GIS. This eliminates the difficult task of manually inputting data from handwritten field notes that may have been compromised by the environmental elements or be illegible due to variations in handwriting styles or penmanship. Lastly, the Care and Processing Group within the Wildlife Branch can integrate the data into an online medical database designed specifically for wildlife rehabilitators to collect, manage, and analyze data for their individual wildlife patients.
APA, Harvard, Vancouver, ISO, and other styles
31

Tebbe, B., U. Mansmann, U. Wollina, P. Auer-Grumbach, A. Licht-Mbalyohere, M. Arensmeier, and CE Orfanos. "Markers in cutaneous lupus erythematosus indicating systemic involvement. A multicenter study on 296 patients." Acta Dermato-Venereologica 77, no. 4 (July 1, 1997): 305–8. http://dx.doi.org/10.2340/0001555577305308.

Full text
Abstract:
Lupus erythematosus (LE) is an autoimmune disorder involving the skin and/or other internal organs. As cutaneous variants, chronic discoid LE (CDLE) and subacute cutaneous LE (SCLE) usually have a better prognosis; however, involvement of internal organs with transition into systemic disease may occur. The aim of this study was to assess the significance of some clinical and laboratory criteria that could serve as markers for early recognition of systemic involvement in cutaneous LE. Three hundred and seventy-nine patients with LE, seen in five cooperating Departments of Dermatology during the years 1989-1994, were documented by electronic data processing according to a common protocol. Two hundred and forty-five of these patients had cutaneous LE (CDLE or SCLE), and 51 had systemic LE (SLE); these were included in this study. Forty-nine patients with either CDLE/SCLE or SLE were not evaluated because of incomplete documentation.
APA, Harvard, Vancouver, ISO, and other styles
32

Chenais, Gabrielle, Cédric Gil-Jardiné, Hélène Touchais, Marta Avalos Fernandez, Benjamin Contrand, Eric Tellier, Xavier Combes, Loick Bourdois, Philippe Revel, and Emmanuel Lagarde. "Deep Learning Transformer Models for Building a Comprehensive and Real-time Trauma Observatory: Development and Validation Study." JMIR AI 2 (January 12, 2023): e40843. http://dx.doi.org/10.2196/40843.

Full text
Abstract:
Background: Public health surveillance relies on the collection of data, often in near-real time. Recent advances in natural language processing make it possible to envisage an automated system for extracting information from electronic health records. Objective: To study the feasibility of setting up a national trauma observatory in France, we compared the performance of several automatic language processing methods on a multiclass classification task over unstructured clinical notes. Methods: A total of 69,110 free-text clinical notes related to visits to the emergency departments of the University Hospital of Bordeaux, France, between 2012 and 2019 were manually annotated. Among these clinical notes, 32.5% (22,481/69,110) were traumas. We trained 4 transformer models (deep learning models that incorporate an attention mechanism) and compared them with term frequency-inverse document frequency features combined with the support vector machine method. Results: The transformer models consistently performed better than term frequency-inverse document frequency with a support vector machine. Among the transformers, the GPTanam model, pretrained on a French corpus with an additional self-supervised learning step on 306,368 unlabeled clinical notes, showed the best performance, with a micro F1-score of 0.969. Conclusions: The transformers proved efficient at multiclass classification of narrative medical data. Further steps for improvement should focus on the expansion of abbreviations and on multioutput multiclass classification.
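The micro F1-score used to compare models above aggregates true positives, false positives, and false negatives across all classes before computing precision, recall, and their harmonic mean. A minimal sketch with invented per-class counts (not the study's results):

```python
def micro_f1(per_class):
    """per_class: list of (tp, fp, fn) tuples, one per class.
    Micro-averaging pools the counts before computing F1."""
    tp = sum(c[0] for c in per_class)
    fp = sum(c[1] for c in per_class)
    fn = sum(c[2] for c in per_class)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts for a 3-class classification task
print(round(micro_f1([(90, 5, 5), (80, 10, 10), (70, 5, 5)]), 3))  # 0.923
```

Unlike macro-averaging, micro-averaging weights each prediction equally, so frequent classes dominate the score, which suits class-imbalanced clinical-note data.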
APA, Harvard, Vancouver, ISO, and other styles
33

Oshima, Isaac, Seth Paine, and Judd Muskat. "SCAT Mobile Data Collection - California's Ongoing Experience." International Oil Spill Conference Proceedings 2017, no. 1 (May 1, 2017): 2017213. http://dx.doi.org/10.7901/2169-3358-2017.1.000213.

Full text
Abstract:
The California Department of Fish and Wildlife, Office of Spill Prevention and Response (OSPR) has been collecting SCAT (Shoreline Cleanup Assessment Technique) data since its inception in 1991, based upon the SCAT forms developed by the National Oceanic and Atmospheric Administration (NOAA). Processing SCAT paper forms is time consuming and includes manual transcription, which is prone to error. Beginning in 2009, OSPR used an electronic data collection application from EPDS called Pocket SCAT®; OSPR scheduled exercises, and its last maintenance and upgrade purchase was in 2011. Despite this, when the Refugio Spill occurred on May 19, 2015, the decision was made, because of ongoing issues with Pocket SCAT® and equipment, to revert to paper forms. SCAT data collection and processing once again became time consuming and error prone. OSPR's experience with SCAT paper forms, Pocket SCAT®, Refugio and other spills, and its recent in-house development and completion of an iOS Wildlife Recovery app culminated in the decision to create its own iOS SCAT app called "SCATalogue". This poster presents SCATalogue's current status, features, design considerations, inherent technology limitations and their mitigations, and envisioned future revisions. Also presented is a diagram of SCATalogue's place in the greater SCAT workflow, including the backend database, integration with the Common Operational Picture (COP), and other applications and technologies used to facilitate integration.
34

Li, Liuqing, Jack Geissinger, William A. Ingram, and Edward A. Fox. "Teaching Natural Language Processing through Big Data Text Summarization with Problem-Based Learning." Data and Information Management 4, no. 1 (March 24, 2020): 18–43. http://dx.doi.org/10.2478/dim-2020-0003.

Full text
Abstract:
Natural language processing (NLP) covers a large number of topics and tasks related to data and information management, leading to a complex and challenging teaching process. Meanwhile, problem-based learning is a teaching technique specifically designed to motivate students to learn efficiently, work collaboratively, and communicate effectively. With this aim, we developed a problem-based learning course for both undergraduate and graduate students to teach NLP. We provided student teams with big data sets, basic guidelines, cloud computing resources, and other aids to help different teams in summarizing two types of big collections: Web pages related to events, and electronic theses and dissertations (ETDs). Student teams then deployed different libraries, tools, methods, and algorithms to solve the task of big data text summarization. Summarization is an ideal problem for learning NLP since it involves all levels of linguistics, as well as many of the tools and techniques used by NLP practitioners. The evaluation results showed that all teams generated coherent and readable summaries. Many summaries were of high quality and accurately described their corresponding events or ETD chapters, and the teams produced them along with NLP pipelines in a single semester. Further, both undergraduate and graduate students gave statistically significant positive feedback, relative to other courses in the Department of Computer Science. Accordingly, we encourage educators in the data and information management field to use our approach or similar methods in their teaching and hope that other researchers will also use our data sets and synergistic solutions to approach the new and challenging tasks we addressed.
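A frequency-based extractive summarizer of the kind student teams often start from can be sketched as follows. The scoring scheme and the example sentences are illustrative assumptions, not the course's actual pipeline.

```python
from collections import Counter

def extractive_summary(sentences, n=1):
    """Score each sentence by the average corpus frequency of its words,
    then keep the top-n sentences in their original order."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scored = [(sum(freq[w.lower()] for w in s.split()) / len(s.split()), i)
              for i, s in enumerate(sentences)]
    # take the n highest-scoring sentences, then restore document order
    top = sorted(sorted(scored, reverse=True)[:n], key=lambda t: t[1])
    return [sentences[i] for _, i in top]

# hypothetical event-collection sentences
notes = [
    "the storm hit the coast",
    "the storm caused damage",
    "residents ate lunch",
]
summary = extractive_summary(notes, n=1)
```

Sentences built from frequent corpus words score highest, so the off-topic sentence is dropped; real pipelines replace this scoring with TF-IDF, graph ranking, or neural models.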
35

Al-Shahir, Ali Abdul Fatah. "Employ information technology capabilities in building a data warehouse Organization." Journal of University of Human Development 2, no. 1 (January 31, 2016): 273. http://dx.doi.org/10.21928/juhd.v2n1y2016.pp273-295.

Full text
Abstract:
In the first step of the paper, the researcher seeks to define the concept of information technology capabilities and to state their types, after exhibiting the views of writers and researchers concerning them; as a second step, a procedural concept of information technology capabilities is presented. The third step shows the concept of a data warehouse and its characteristics in the organization, likewise in light of the views of writers and researchers, and ends with a procedural concept of the data warehouse, as well as the data warehouse architecture and its data modeling process. The fourth step assesses the actual state of information technology capabilities in the surveyed company, using a checklist as the tool for collecting data and information, and proposes a model for building a data warehouse in the Home Furniture Company. The researcher reached a number of conclusions, chiefly that building a data warehouse in the surveyed company is feasible, because it would help achieve client satisfaction by reducing the time needed to provide information that previously took a long time, owing to the system's ability to store data and information by type and quantity in an orderly fashion, and to avoid redundancy in data collection, entry, and processing. In light of this, the researcher presented proposals, the most important of which are to expand reliance on electronic systems and to address weaknesses in the IT infrastructure, as well as the need to conduct new research in other organizations to draw departments' attention to the importance of the data warehouse and how it can contribute to the success of their organizations.
36

Gorbanev, S. A., A. N. Kulichenko, Vladimir N. Fedorov, V. M. Dubyansky, Yu A. Novikova, A. A. Kovshov, N. A. Tikhonova, and O. H. Shayahmetov. "ORGANIZATION OF AN INTERREGIONAL MONITORING SYSTEM USING GIS TECHNOLOGIES BY THE EXAMPLE OF RUSSIAN FEDERATION ARCTIC ZONE." Hygiene and sanitation 97, no. 12 (December 15, 2018): 1133–40. http://dx.doi.org/10.18821/0016-9900-2018-97-12-1133-1140.

Full text
Abstract:
The development of the social and hygienic monitoring (SHM) system as a means of ensuring the sanitary-epidemiological wellbeing of the population of the Russian Federation is one of the main activities of the Federal Service for Supervision in Protection of the Rights of Consumer and Man Wellbeing. The authors analyzed the current state of the organization of SHM: lists of control points for human-environment factors and laboratory test findings; information collection and systematization techniques; and procedural approaches to automating data collection, processing and visualization. SHM is legally assigned to control measures during which interaction of state and municipal control bodies with legal entities and individual entrepreneurs is not required. Further SHM development is reported to be restrained by a number of organizational-technological and financial-economic problems: analysis of the sanitary-epidemiological wellbeing of the population has shown that, according to SHM, interregional aspects are not taken into account; procedural approaches to the choice of control points and the formation of lists of environmental pollution indices, as well as the procedure for assessing the social and economic efficiency of SHM, have not been properly worked out; and the number of departments and experts specialized in and responsible for SHM is decreasing. A model of interregional social-hygienic monitoring is suggested as a way of improving SHM. Its aims include raising the quality of expert-analytical SHM data processing across the entire Russian Federation Arctic zone, and taking into account factors that affect population health and are interregional in character. Departments and Federal State Healthcare agencies named "Centers of Hygiene and Epidemiology" in various subjects of the Russian Arctic, together with research institutions of the Federal Service for Supervision in Protection of the Rights of Consumer and Man Wellbeing, will participate in the interregional SHM.
A concept for a GIS portal of the Russian Arctic, based on a geo-information system and aimed at improving SHM, is developed. It can serve as a comprehensive electronic database of human-environment factors and population health, as well as an effective instrument with spatial-analysis functions for assessing the sanitary and epidemiological wellbeing of the population.
37

Ayre, Karyn, Andre Bittar, Rina Dutta, Somain Verma, and Joyce Kam. "Identifying perinatal self-harm in electronic healthcare records using natural language processing." BJPsych Open 7, S1 (June 2021): S4—S5. http://dx.doi.org/10.1192/bjo.2021.74.

Full text
Abstract:
Aims: (1) To generate a Natural Language Processing (NLP) application that can identify mentions of perinatal self-harm among electronic healthcare records (EHRs); (2) To use this application to estimate the prevalence of perinatal self-harm within a data-linkage cohort of women accessing secondary mental healthcare during the perinatal period. Method: Data source: the Clinical Record Interactive Search (CRIS) system, a database of de-identified EHRs of secondary mental healthcare service-users at South London and Maudsley NHS Foundation Trust (SLaM). CRIS has pre-existing ethical approval via the Oxfordshire Research Ethics Committee C (ref 18/SC/0372), and this project was approved by the CRIS Oversight Committee (16-069). After developing a list of synonyms for self-harm and piloting coding rules, a gold-standard dataset of EHRs was manually coded using Extensible Human Oracle Suite of Tools (eHOST) software. An NLP application to detect perinatal self-harm was then developed using several layers of linguistic processing based on the spaCy NLP library for Python. Evaluation of mention-level performance was done according to the attributes of mentions the application was designed to identify (span, status, temporality and polarity), by comparing application performance against the gold-standard dataset. Performance was described as precision, recall, F-score and Cohen's kappa. Most service-users had more than one EHR in their period of perinatal service use; performance was therefore also measured at service-user level, with additional performance metrics of likelihood ratios and post-test probabilities. Linkage with the Hospital Episode Statistics database allowed creation of a cohort of women who accessed SLaM during the perinatal period. By deploying the application on the EHRs of the women in the cohort, we were able to estimate the prevalence of perinatal self-harm. Result: Mention-level performance: micro-averaged F-score, precision and recall for span, polarity and temporality were all >0.8. Kappa for status 0.68, temporality 0.62, polarity 0.91. Service-user-level performance: F-score, precision and recall all 0.69; overall F-score 0.81; positive likelihood ratio 9.4 (4.8-19); post-test probability 68.9% (95% CI 53-82). Cohort prevalence of self-harm in pregnancy was 15.3% (95% CI 14.3-16.3); self-harm in the postnatal year was 19.7% (95% CI 18.6-20.8). Only a very small proportion of women self-harmed in both pregnancy and the postnatal year (3.9%, 95% CI 3.3-4.4). Conclusion: NLP can be used to identify perinatal self-harm within EHRs. The hardest attribute to classify was temporality, in line with the wider literature indicating temporality as a notoriously difficult problem in NLP. As a result, the application probably over-estimates prevalence to a degree. However, overall performance, given the difficulty of the task, is good. Bearing in mind the limitations, our findings suggest that self-harm is likely to be relatively common in women accessing secondary mental healthcare during the perinatal period. Funding: KA is funded by a National Institute for Health Research Doctoral Research Fellowship (NIHR-DRF-2016-09-042). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. RD is funded by a Clinician Scientist Fellowship (research project e-HOST-IT) from the Health Foundation in partnership with the Academy of Medical Sciences, which also partly funds AB. AB's work was also part-supported by Health Data Research UK, an initiative funded by UK Research and Innovation, the Department of Health and Social Care (England) and the devolved administrations, and leading medical research charities, as well as the Maudsley Charity. Acknowledgements: Professor Louise M Howard, who originally suggested using NLP to identify perinatal self-harm in EHRs. Professor Howard is the primary supervisor of KA's Fellowship.
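The service-user-level metrics quoted in this abstract (positive likelihood ratio and post-test probability) follow directly from sensitivity, specificity and pretest probability. A minimal sketch, with illustrative numbers chosen to land near the abstract's LR+ of about 9 rather than the study's exact inputs:

```python
def lr_positive(sensitivity, specificity):
    """Positive likelihood ratio: how much a positive result multiplies the odds."""
    return sensitivity / (1 - specificity)

def post_test_probability(pretest_prob, lr):
    """Convert a pretest probability to a post-test probability via odds."""
    post_odds = pretest_prob / (1 - pretest_prob) * lr
    return post_odds / (1 + post_odds)

# illustrative values only (not the study's measured operating point)
lr = lr_positive(0.9, 0.9)          # sensitivity 0.9, specificity 0.9 -> LR+ = 9
p = post_test_probability(0.2, lr)  # a 20% pretest probability rises to ~69%
```

This is why a likelihood ratio near 9 can push a modest pretest probability up to roughly the 69% post-test probability reported.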
38

Yang, Ning. "Financial Big Data Management and Control and Artificial Intelligence Analysis Method Based on Data Mining Technology." Wireless Communications and Mobile Computing 2022 (May 29, 2022): 1–13. http://dx.doi.org/10.1155/2022/7596094.

Full text
Abstract:
Driven by capital and Internet information technology (IT), the operating scale and capital scale of modern industrial and commercial enterprises and various organizations have increased exponentially. The manual financial work model can no longer keep pace with the changing modern business environment and the business rhythm of enterprises. All kinds of enterprises and organizations, especially large ones, urgently need to improve the operational efficiency of their financial systems. Enhancing the integrity, timeliness, and synergy of financial information improves the comprehensiveness of financial analysis and the ability to analyze complex problems, helps enterprises cope with rapid change, strengthens their financial management capabilities, provides more valuable decision-making guidance for business operations, and reduces business risks. In recent years, the vigorous development of artificial intelligence technology has provided a feasible solution to these urgent needs. Combining artificial intelligence technologies such as data mining, deep learning, image recognition, natural language processing, knowledge graphs, human-computer interaction, and intelligent decision-making with IT technology to transform financial processes can significantly reduce the processing time of repetitive basic financial work, reduce dependence on manual accounting, and improve the efficiency of the financial department. Through the autonomous analysis and decision-making of artificial intelligence, financial management is made intelligent, and more accurate and effective financial decision-making support is provided for enterprises. This paper studies the company's intelligent financial reengineering process, so as to provide a reference for other enterprises upgrading similar financial systems.
The analysis showed that, at the α = 0.05 level, there was a significant difference between the means of the two populations, and that the closer the r value is to -1 or 1, the more pronounced the linear relationship between the x and y variables. This paper offers decision-making suggestions and risk-control early warnings to the group's decision-making body, evaluates the financial impact of the group's decisions, and opens the road to financial intelligence.
39

Stewart, Jonathon, Juan Lu, Adrian Goudie, Glenn Arendts, Shiv Akarsh Meka, Sam Freeman, Katie Walker, et al. "Applications of natural language processing at emergency department triage: A narrative review." PLOS ONE 18, no. 12 (December 14, 2023): e0279953. http://dx.doi.org/10.1371/journal.pone.0279953.

Full text
Abstract:
Introduction: Natural language processing (NLP) uses various computational methods to analyse and understand human language, and has been applied to data acquired at Emergency Department (ED) triage to predict various outcomes. The objective of this scoping review is to evaluate how NLP has been applied to data acquired at ED triage, to assess whether NLP-based models outperform humans or current risk stratification techniques when predicting outcomes, and to assess whether incorporating free text improves the predictive performance of models compared with models that use only structured data. Methods: All English-language peer-reviewed research that applied an NLP technique to free text obtained at ED triage was eligible for inclusion. We excluded studies focusing solely on disease surveillance, and studies that used information obtained after triage. We searched the electronic databases MEDLINE, Embase, Cochrane Database of Systematic Reviews, Web of Science, and Scopus for medical subject headings and text keywords related to NLP and triage. Databases were last searched on 01/01/2022. Risk of bias in studies was assessed using the Prediction model Risk of Bias Assessment Tool (PROBAST). Due to the high level of heterogeneity between studies and the high risk of bias, a meta-analysis was not conducted; instead, a narrative synthesis is provided. Results: In total, 3730 studies were screened, and 20 studies were included. The population size varied greatly between studies, ranging from 1.8 million patients to 598 triage notes. The most common outcomes assessed were prediction of triage score, prediction of admission, and prediction of critical illness. NLP models achieved high accuracy in predicting need for admission, triage score, and critical illness, and in mapping free-text chief complaints to structured fields. Incorporating both structured data and free-text data improved results compared with models that used only structured data. However, the majority of studies (80%) were assessed to have a high risk of bias, and only one study reported the deployment of an NLP model into clinical practice. Conclusion: Unstructured free-text triage notes have been used by NLP models to predict clinically relevant outcomes. However, the majority of studies have a high risk of bias, most research is retrospective, and there are few examples of implementation into clinical practice. Future work is needed to prospectively assess whether applying NLP to data acquired at ED triage improves ED outcomes compared with usual clinical practice.
40

Wang, Cheng, Chenlong Yao, Pengfei Chen, Jiamin Shi, Zhe Gu, and Zheying Zhou. "Artificial Intelligence Algorithm with ICD Coding Technology Guided by the Embedded Electronic Medical Record System in Medical Record Information Management." Journal of Healthcare Engineering 2021 (August 30, 2021): 1–9. http://dx.doi.org/10.1155/2021/3293457.

Full text
Abstract:
The study aims to explore the application of International Classification of Diseases (ICD) coding technology and an embedded electronic medical record (EMR) system. The study established an EMR information knowledge system and collected, for statistical analysis, the data of patient medical records and disease diagnostic codes from the front pages of 8 clinical departments, including endocrinology, oncology, obstetrics and gynecology, ophthalmology, orthopedics, neurosurgery, and cardiovascular medicine. A natural language processing-bidirectional recurrent neural network (NLP-BIRNN) algorithm was used to optimize medical records. The results showed that coders were unclear about the basic rules of main-diagnosis selection and the classification of disease coding and did not code according to the main-diagnosis principles: diseases were not coded according to their different conditions or specific classifications, the coding of postoperative complications was inaccurate, disease diagnoses were incomplete, and code selection was too general. The solution adopted was to strengthen communication and knowledge training for coders and medical personnel. BIRNN was compared with the convolutional neural network (CNN) and recurrent neural network (RNN) in accuracy, symptom accuracy, and symptom recall, and the results suggested that the proposed BIRNN has higher value. Pathological language reading under an artificial intelligence algorithm provides convenience for disease diagnosis and treatment.
41

Grachev, Vladimir I. "In Memory of Nikolay N. Zalogin." Radioelectronics. Nanosystems. Information Technologies. 15, no. 2 (June 29, 2023): 201–2. http://dx.doi.org/10.17725/rensit.2023.15.201.

Full text
Abstract:
Information is presented about the late Nikolai Nikolayevich Zalogin, Ph.D., Leading Researcher in the Laboratory of Physical Fundamentals of Nanocomposite Materials for Information Technology, Department of Physical Fundamentals of Nanoelectronics, Kotelnikov Institute of Radioengineering and Electronics of the Russian Academy of Sciences, a well-known specialist in the generation of microwave noise oscillations and their application in electronic warfare, radar, and the processing of broadband signals based on dynamic chaos, Laureate of the USSR State Prize and of two awards of the USSR Council of Ministers: basic biographical data, training at the Moscow Institute of Physics and Technology, work at the Kotelnikov IRE (Moscow), PhD thesis defense, authorship of more than 120 papers in scientific journals and one monograph, participation in Russian and international conferences and seminars, and participation in field work and testing of developed systems.
42

Zhai, Junhao, and Yunqiu Shi. "Design of Labor Education System in Primary and Secondary Schools Based on Big Data." Frontiers in Computing and Intelligent Systems 1, no. 3 (October 25, 2022): 48–53. http://dx.doi.org/10.54097/fcis.v1i3.2069.

Full text
Abstract:
This project discloses a labor education system for primary and secondary schools, together with a method, electronic equipment and a storage medium, based on Internet information technology. The system specifically includes a platform management unit, a base/institution management unit, a school management unit, an education department management unit, a student parents' management unit and a data processing unit; the education method, electronic equipment and storage medium correspond to the education system. The project relies on Internet information technology and big data technology to deliver labor education to primary and secondary school students. Business interactions and data from the main links in the labor education process circulate and advance within the system, so that the school can master complete process data. The system automatically classifies and summarizes the whole-process data with the student as the main subject, finally forming each student's labor education file. At the same time, the construction of the platform can address the problems of unbalanced educational resources and the low degree of sharing of teaching resources such as high-quality lesson plans and labor education bases, thereby realizing the sharing of high-quality educational resources.
43

Altamimi, Mohammed. "Big Data in E-government: Classification and Prediction using Machine Learning Algorithms." Iraqi Journal of Intelligent Computing and Informatics (IJICI) 1, no. 2 (October 14, 2022): 41–55. http://dx.doi.org/10.52940/ijici.v1i2.11.

Full text
Abstract:
Many countries have used big data to develop their institutions: Estonia in policing, India in health care, the United Kingdom in agriculture, and so on. Data is very important; it is no longer oil but data that is the most valuable resource in the world. This research examines ways to develop Iraqi state institutions by mining and analyzing the big data of one of them, the electronic civil registry (ECR). Pre-processing and analysis of this data are carried out according to the needs of each institution, followed by the application of Machine Learning (ML) techniques, whose use has shown remarkable results in many areas, especially data analysis, classification and forecasting. We applied five ML algorithms, namely Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbor (KNN), Random Forest (RF), and Naive Bayes (NB), in the Orange Data Mining tool. According to the simulation results of the proposed system, classification accuracy was around 100%, 99%, and 100% for the military department (SVM classifier), the social welfare department (RF classifier), and the statistics-planning department (SVM classifier), respectively.
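One of the five algorithms named here, k-Nearest Neighbor, together with a generic accuracy harness, can be sketched in plain Python. The toy clusters and class labels below are illustrative assumptions standing in for ECR records, not the paper's data.

```python
from collections import Counter

def knn_predict(train_x, train_y, x, k=3):
    """k-nearest-neighbour majority vote using squared Euclidean distance."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

def accuracy(predict, test_x, test_y):
    """Fraction of test points the classifier labels correctly."""
    return sum(predict(x) == y for x, y in zip(test_x, test_y)) / len(test_x)

# two well-separated toy clusters with hypothetical department labels
train_x = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = ["military", "military", "military", "welfare", "welfare", "welfare"]
test_x, test_y = [(0.5, 0.5), (5.5, 5.5)], ["military", "welfare"]
acc = accuracy(lambda x: knn_predict(train_x, train_y, x), test_x, test_y)
```

The same `accuracy` harness can be reused to compare several classifiers side by side, which is essentially what the Orange Data Mining tool automates.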
44

Sobańska, Alicja. "Oddział Opracowywania Multimediów Biblioteki Raczyńskich w Poznaniu – struktura, zadania, baza programowa." Przegląd Archiwalno-Historyczny 4 (2017): 185–204. http://dx.doi.org/10.4467/2391-890xpah.17.011.14914.

Full text
Abstract:
The Department of Multimedia Processing at the Raczyński Library in Poznań — structure, tasks, and agenda. The article discusses the Department of Multimedia Processing at the Raczyński Library in Poznań, highlighting several elements of its activity: management of the Library's collections on digital media (audiobooks, films, music, other electronic documents), the library system in use (Horizon), the metadata format applied (MARC 21), and the subject-heading language in use (JHP BN) as compared with the KABA language and the National Library descriptors.
45

Peterson, Kelly S., Alec B. Chapman, Wathsala Widanagamaachchi, Jesse Sutton, Brennan Ochoa, Barbara E. Jones, Vanessa Stevens, David C. Classen, and Makoto M. Jones. "Automating detection of diagnostic error of infectious diseases using machine learning." PLOS Digital Health 3, no. 6 (June 7, 2024): e0000528. http://dx.doi.org/10.1371/journal.pdig.0000528.

Full text
Abstract:
Diagnostic error, a cause of substantial morbidity and mortality, is largely discovered and evaluated through self-report and manual review, which is costly and not suitable to real-time intervention. Opportunities exist to leverage electronic health record data for automated detection of potential misdiagnosis, executed at scale and generalized across diseases. We propose a novel automated approach to identifying diagnostic divergence considering both diagnosis and risk of mortality. Our objective was to identify cases of emergency department infectious disease misdiagnoses by measuring the deviation between predicted diagnosis and documented diagnosis, weighted by mortality. Two machine learning models were trained for prediction of infectious disease and mortality using the first 24h of data. Charts were manually reviewed by clinicians to determine whether there could have been a more correct or timely diagnosis. The proposed approach was validated against manual reviews and compared using the Spearman rank correlation. We analyzed 6.5 million ED visits and over 700 million associated clinical features from over one hundred emergency departments. The testing set performances of the infectious disease (Macro F1 = 86.7, AUROC 90.6 to 94.7) and mortality model (Macro F1 = 97.6, AUROC 89.1 to 89.1) were in expected ranges. Human reviews and the proposed automated metric demonstrated positive correlations ranging from 0.231 to 0.358. The proposed approach for diagnostic deviation shows promise as a potential tool for clinicians to find diagnostic errors. Given the vast number of clinical features used in this analysis, further improvements likely need to either take greater account of data structure (what occurs before when) or involve natural language processing. Further work is needed to explain the potential reasons for divergence and to refine and validate the approach for implementation in real-world settings.
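The Spearman rank correlation used here to validate the automated divergence metric against human reviews can be computed directly from ranks. A minimal sketch that assumes no tied values; the example scores are illustrative, not the study's data.

```python
def spearman(x, y):
    """Spearman rank correlation; this minimal version assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# hypothetical automated-metric scores vs. human review ratings, monotonically agreeing
rho = spearman([0.231, 0.250, 0.310, 0.358], [1, 2, 3, 4])
```

Because only ranks matter, the metric and the human ratings need not be on the same scale, which is why Spearman (rather than Pearson) suits this kind of validation.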
46

Dafauti, Balam Singh. "E-Era of Jurisdiction:Empowering Traditional Courts Using Various Artificial Intelligence Tools." Asian Journal of Computer Science and Technology 7, no. 2 (August 5, 2018): 57–61. http://dx.doi.org/10.51983/ajcst-2018.7.2.1872.

Full text
Abstract:
In the Indian scenario, we are still in the transformation phase from manual to electronic data processing, seeking a balanced combination of simple, moral, responsive and transparent governance with IT tools and techniques. However, there is still ample scope to do more and to apply IT in various governmental departments and domains. In the same vein, we can use artificial intelligence along with cloud computing to improve the Indian judicial system; that is, the concept of e-courts can be enhanced by applying AI tools and techniques. The judiciary is in the early stages of a transformation in which AI (Artificial Intelligence) technology will help make the judicial process faster, cheaper, and more predictable without compromising the integrity of judges' discretionary reasoning. In this paper I have proposed a solution in which AI contributes to a judicial process that encompasses a wide range of knowledge, judgment, and experience. It has two practical goals: producing tools to support judicial activities, including programs for intelligent document assembly, case retrieval, and support for discretionary decision-making; and developing new analytical tools for understanding and modeling the judicial process.
47

Alba, Patrick R., Anthony Gao, Kyung Min Lee, Tori Anglin-Foote, Brian Robison, Evangelia Katsoulakis, Brent S. Rose, et al. "Ascertainment of Veterans With Metastatic Prostate Cancer in Electronic Health Records: Demonstrating the Case for Natural Language Processing." JCO Clinical Cancer Informatics, no. 5 (September 2021): 1005–14. http://dx.doi.org/10.1200/cci.21.00030.

Full text
Abstract:
PURPOSE Prostate cancer (PCa) is among the leading causes of cancer deaths. While localized PCa has a 5-year survival rate approaching 100%, this rate drops to 31% for metastatic prostate cancer (mPCa). Thus, timely identification of mPCa is a crucial step toward measuring and improving access to innovations that reduce PCa mortality. Yet, methods to identify patients diagnosed with mPCa remain elusive. Cancer registries provide detailed data at diagnosis but are not updated throughout treatment. This study reports on the development and validation of a natural language processing (NLP) algorithm deployed on oncology, urology, and radiology clinical notes to identify patients with a diagnosis or history of mPCa in the Department of Veterans Affairs. PATIENTS AND METHODS Using a broad set of diagnosis and histology codes, the Veterans Affairs Corporate Data Warehouse was queried to identify all Veterans with PCa. An NLP algorithm was developed to identify patients with any history or progression of mPCa. The NLP algorithm was prototyped and developed iteratively using patient notes, grouped into development, training, and validation subsets. RESULTS A total of 1,144,610 Veterans were diagnosed with PCa between January 2000 and October 2020, among which 76,082 (6.6%) were identified by NLP as having mPCa at some point during their care. The NLP system performed with a specificity of 0.979 and sensitivity of 0.919. CONCLUSION Clinical documentation of mPCa is highly reliable. NLP can be leveraged to improve PCa data. When compared to other methods, NLP identified a significantly greater number of patients. NLP can be used to augment cancer registry data, facilitate research inquiries, and identify patients who may benefit from innovations in mPCa treatment.
48

Solovova, Natalia V., Yulia N. Gorbunova, and Olga Yu Kalmykova. "Digitalized continuous learning system in the company: organizational aspect." Vestnik of Samara University. Economics and Management 12, no. 4 (December 30, 2021): 145–56. http://dx.doi.org/10.18287/2542-0461-2021-12-4-145-156.

Full text
Abstract:
Information and technical support of the personnel management system is becoming ever more widespread; informatization is carried out in virtually all modern companies without exception. As a rule, most companies first automate the functions of personnel records management and administration. These areas are associated with the storage and processing of information about employees: personal files, work books, employment contracts, data on the movement of personnel, and payroll. Information technologies can be used not only for processing large amounts of data, but also for analyzing them. Recently, the use of artificial intelligence in recruiting has become widespread; such technologies significantly reduce the time spent on the selection of candidates. Artificial intelligence is used at the initial stage of recruiting, calling candidates, which allows the HR manager not to miss a promising specialist in the stream of applicants. At the same time, there is a trend towards automating more complex HR functions such as employee assessment, development and training. In this study, the in-demand directions for improving the information and technical support of the personnel management service were identified: artificial intelligence, continuous learning, and big data. The hierarchy analysis method, based on expert judgment, determined the criteria that guided the group of experts in choosing the most promising direction with growth potential. According to the results of the analysis, the largest share of points (56.1 %) was received by the direction of continuous learning in the format of electronic distance learning. A technological scheme for the information and technical support of the personnel management service was then developed.
The process of introducing distance learning in the company is described, including the main expense items, the departments responsible for implementing the system, the preparation of the necessary regulatory documentation, the choice of software, and the costs of administering the system.
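The hierarchy analysis method (AHP) mentioned in this abstract can be sketched as follows. The pairwise comparison matrix below is illustrative, not the study's actual expert data, and the priority weights are computed with the common geometric-mean approximation to the principal eigenvector:

```python
import numpy as np

# Hypothetical pairwise judgments over the three directions named in the
# abstract: artificial intelligence, continuous learning, big data.
A = np.array([
    [1.0, 1/3, 2.0],   # AI vs (AI, continuous learning, big data)
    [3.0, 1.0, 4.0],   # continuous learning
    [1/2, 1/4, 1.0],   # big data
])

# Priority weights via the geometric-mean approximation
gm = A.prod(axis=1) ** (1 / A.shape[0])
weights = gm / gm.sum()

# Saaty consistency ratio: CR = CI / RI, with RI = 0.58 for n = 3;
# CR < 0.1 means the expert judgments are acceptably consistent
lam_max = (A @ weights / weights).mean()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58

print(weights.round(3), round(cr, 3))
```

With these illustrative judgments, "continuous learning" receives the largest weight, mirroring the study's reported outcome.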
APA, Harvard, Vancouver, ISO, and other styles
49

Ou, Dongxiu, Yuqing Ji, Lei Zhang, and Hu Liu. "An Online Classification Method for Fault Diagnosis of Railway Turnouts." Sensors 20, no. 16 (August 17, 2020): 4627. http://dx.doi.org/10.3390/s20164627.

Full text
Abstract:
The railway turnout system is a key piece of infrastructure for railway safety and efficiency. However, it is prone to failure in the field, so many railway departments have adopted monitoring systems to track the operating status of turnouts. With the monitoring data collected, many researchers have proposed fault-diagnosis methods, but many existing methods cannot be updated in real time or deal with new fault types. This paper proposes a Bayes-based online turnout fault-diagnosis method for imbalanced data that realizes incremental learning and scalable fault recognition. First, the basic concepts of the turnout system are introduced. Next, feature extraction and processing of the imbalanced monitoring data are described. Then, an online diagnosis method based on Bayesian incremental learning and scalable fault recognition is proposed, followed by an experiment with field data from Guangzhou Railway. The results show that the scalable fault-recognition method can reach an accuracy of 99.11%, and the training time of the Bayesian incremental learning model is reduced by 29.97% without decreasing accuracy, which demonstrates the high accuracy, adaptability and efficiency of the proposed model, of great significance for labor-saving, timely maintenance and, further, the safety and efficiency of railway transportation.
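The two ideas named in this abstract, incremental (online) updating and recognition of newly appearing fault classes, can be illustrated with a toy hand-rolled incremental Gaussian naive Bayes. This is a sketch under our own assumptions (feature values, class names, variance smoothing), not the authors' model or data:

```python
import math
from collections import defaultdict

class IncrementalGaussianNB:
    """Toy incremental Gaussian naive Bayes: new samples update running
    statistics without retraining, and a previously unseen fault class
    can be added at any time."""

    def __init__(self):
        # Per class: sample count, running per-feature mean and M2 (Welford)
        self.count = defaultdict(int)
        self.mean = {}
        self.m2 = {}

    def partial_fit(self, X, y):
        for x, label in zip(X, y):
            if label not in self.mean:          # a new fault type appears
                self.mean[label] = [0.0] * len(x)
                self.m2[label] = [0.0] * len(x)
            self.count[label] += 1
            n = self.count[label]
            for j, v in enumerate(x):
                d = v - self.mean[label][j]
                self.mean[label][j] += d / n
                self.m2[label][j] += d * (v - self.mean[label][j])

    def _log_posterior(self, x, label):
        n = self.count[label]
        ll = math.log(n / sum(self.count.values()))   # class prior
        for j, v in enumerate(x):
            # Variance smoothing (0.1) keeps tiny classes usable
            var = self.m2[label][j] / n + 0.1
            ll -= 0.5 * (math.log(2 * math.pi * var)
                         + (v - self.mean[label][j]) ** 2 / var)
        return ll

    def predict(self, x):
        return max(self.mean, key=lambda c: self._log_posterior(x, c))

clf = IncrementalGaussianNB()
clf.partial_fit([[1.0, 0.1], [1.1, 0.2]], ["normal", "normal"])
clf.partial_fit([[5.0, 3.0]], ["jam"])   # a new fault class learned online
```

A sample near the new class's statistics is then classified as that fault type, without the model ever having been retrained from scratch.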
APA, Harvard, Vancouver, ISO, and other styles
50

Coulibaly, Moussa, Ahmed Errami, Sofia Belkhala, and Hicham Medromi. "A Live Smart Parking Demonstrator: Architecture, Data Flows, and Deployment." Energies 14, no. 7 (March 25, 2021): 1827. http://dx.doi.org/10.3390/en14071827.

Full text
Abstract:
Smart parking is essential for any future smart city due to the tremendous growth of the car fleet. Such infrastructures require a certain amount of equipment, and because smart parking involves many actors, that equipment must be managed accordingly. Here, a distributed architecture is proposed to manage it by efficiently collecting its data. Two types of parking-related data must be collected: data coming from the equipment deployed in the parking facility, and data coming from the internet from remote users. Thus, a system of two main servers based on the multi-agent concept is proposed to manage the parking platform. The first server is dedicated to collecting data from the parking equipment (Processing Server, PS). The second server (Processing Web Server, PWS) collects users' online data such as reservations, is responsible for pricing policies, and receives post-processed data from the Processing Server. The parking equipment integrates many commercial solutions, and an intelligent multi-platform application based on this two-server philosophy has been developed that can be used for parking operation by users and parking managers. The flowcharts of the agents of the two main servers are presented; these flowcharts are currently used in our demonstrator and are still under improvement. Here, we present the architecture (hardware and software) of our smart parking demonstrator, developed by our department, which is suitable for experimentation in our future work on this hot topic.
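The PS/PWS split described in this abstract can be sketched minimally. The class names mirror the abstract's Processing Server and Processing Web Server; the occupancy tracking, summary format, and pricing rule are illustrative assumptions, not the paper's actual flowcharts:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingServer:
    """Collects raw occupancy events from the deployed parking equipment."""
    occupied: set = field(default_factory=set)
    capacity: int = 100

    def on_sensor_event(self, spot_id, is_occupied):
        (self.occupied.add if is_occupied else self.occupied.discard)(spot_id)

    def post_processed(self):
        # Summary pushed to the web-facing server
        return {"free": self.capacity - len(self.occupied)}

@dataclass
class ProcessingWebServer:
    """Handles remote users: reservations and pricing policies."""
    free: int = 0
    base_rate: float = 2.0  # illustrative base price per hour

    def receive(self, summary):
        self.free = summary["free"]

    def reserve(self):
        if self.free <= 0:
            return None
        self.free -= 1
        # Illustrative demand-based pricing: scarcer spots cost more
        return round(self.base_rate * (1 + (100 - self.free) / 100), 2)

ps = ProcessingServer()
ps.on_sensor_event("A1", True)      # a car occupies spot A1
pws = ProcessingWebServer()
pws.receive(ps.post_processed())    # PS pushes its summary to PWS
price = pws.reserve()               # remote user books a spot
```

The key design point illustrated is that the equipment-facing server never talks to users directly; it only forwards post-processed summaries, keeping the two data flows separate as the architecture prescribes.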
APA, Harvard, Vancouver, ISO, and other styles
