Journal articles on the topic "Informatica (Data processing, Computer science)"

To see other types of publications on this topic, follow the link: Informatica (Data processing, Computer science).

Format your bibliography in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Informatica (Data processing, Computer science)".

Next to every entry in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, where these are available in the source metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Kryshchuk, Mikola, Juris Lavendels, and Vjaceslavs Sitikovs. "Models of data and their processing for introductory courses of computer science." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 134. http://dx.doi.org/10.17770/etr2015vol3.178.

Abstract:
The evolution of the secondary school course on Informatics during the 1990s is discussed. This evolution gave secondary school graduates a rather high level of knowledge of application usage. On the other hand, Informatics became a purely pragmatic matter within the generally academic nature of secondary school curricula. As a result, graduates of secondary schools are not sufficiently prepared for mastering a university course in Computer Science. The authors are not aware of serious studies and methods for preventing such negative habits caused by the evolution of the Informatics course. In this article, one method applied in a university introductory course in Computer Science is considered. The method employs an algorithmic system close to the human mind and, to some extent, compensates for topics removed from the Informatics course in secondary school.
2

Stacewicz, Paweł. "From Computer Science to the Informational Worldview. Philosophical Interpretations of Some Computer Science Concepts." Foundations of Computing and Decision Sciences 44, no. 1 (March 1, 2019): 27–43. http://dx.doi.org/10.2478/fcds-2019-0003.

Abstract:
In this article I defend the thesis that modern computer science has a significant philosophical potential, which is expressed in a form of worldview, called here informational worldview (IWV). It includes such theses as: a) each being contains a certain informational content (which may be revealed by computer science concepts, such as code or algorithm), b) the mind is an information processing system (which should be modeled by means of data processing systems), c) cognition is a type of computation. These (pre)philosophical theses are accepted in many sciences (e.g. in cognitive science), and this is both an expression and strengthening of the IWV. After a general discussion of the relations between philosophy, particular sciences and the worldview, and then the presentation of the basic assumptions and theses of the IWV, I analyze a certain specification of thesis b) expressed in the statement that “the mind is the Turing machine”. I distinguish three concepts of mind (static, variable and minimal) and explain how each of them is connected with the concept of the Turing machine.
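The statement "the mind is the Turing machine" refers to the standard formal model of computation. As a point of reference, here is a minimal single-tape simulator; the particular machine below (a binary increment) is an illustrative example, not taken from the article:

```python
def run_tm(tape, transitions, start_state, halt_state, head=0):
    """Simulate a single-tape Turing machine.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
    move is -1 (left) or +1 (right); '_' is the blank symbol.
    """
    cells = dict(enumerate(tape))
    state = start_state
    while state != halt_state:
        sym = cells.get(head, "_")
        state, write, move = transitions[(state, sym)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Binary increment: start with the head on the least-significant bit
# and propagate the carry to the left.
INC = {
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt", "1", 0),    # 0 + carry -> 1, done
    ("carry", "_"): ("halt", "1", 0),    # ran off the left edge: new digit
}

print(run_tm("1011", INC, "carry", "halt", head=3))  # prints 1100
```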
3

Setiawan, Aditya, Reza Avrizal, and Risma Nurul Auliya. "Implementation of Motorcycle Rental Information System at the Trans Jaya Tangerang." SYSTEMATICS 2, no. 2 (August 1, 2020): 72–78. http://dx.doi.org/10.35706/sys.v2i2.3847.

Abstract:
The growth in science and technology is accelerating, resulting in a competition that is becoming more intense. Advanced science and technology are obviously also in contact with computers. Computer use can minimize potential errors in data processing compared with manual data processing. The development of these sciences and technologies also encourages companies to improve corporate performance, and one of their uses is the creation of the information systems that companies need. The purpose of this study is to design and build a desktop java-based motorbike rental information system for Koperasi Trans Jaya Tangerang. The methodology used by researchers was grounded research. The methods of data collection used in this study are interviews, observations, and library studies.
4

Gough, T. G. "Data Processing Methods." Data Processing 27, no. 5 (June 1985): 51. http://dx.doi.org/10.1016/0011-684x(85)90145-5.

5

Richards, B. "Data processing mathematics." Data Processing 28, no. 3 (April 1986): 162. http://dx.doi.org/10.1016/0011-684x(86)90015-8.

6

Lepper, AM. "Data Processing Budgets." Data Processing 28, no. 2 (March 1986): 103. http://dx.doi.org/10.1016/0011-684x(86)90114-0.

7

Voropaieva, A., G. Stupak, and O. Zhabko. "INFORMATION TRANSMISSION ALGORITHMS FOR INFRASTRUCTURE COMPUTER-INTEGRATED DATA PROCESSING SYSTEMS." Naukovyi visnyk Donetskoho natsionalnoho tekhnichnoho universytetu 1(6), no. 2(7) (2021): 14–23. http://dx.doi.org/10.31474/2415-7902-2021-1(6)-2(7)-14-23.

8

ORLOV, GRIGORY A., ANDREY V. KRASOV, and ARTEM M. GELFAND. "THE USE OF BIG DATA IN THE ANALYSIS OF BIG DATA IN COMPUTER NETWORKS." H&ES Research 12, no. 4 (2020): 76–84. http://dx.doi.org/10.36724/2409-5419-2020-12-4-76-84.

Abstract:
The concept of Big Data covers data sets whose total size is several times larger than the capacity of conventional databases. It also implies the use of non-classical data processing methods, for example in the management, analysis, or simply storage of the information received. Big Data algorithms emerged in parallel with the introduction of the first high-performance servers of their kind, such as mainframes, which have sufficient resources for operational information processing and for the computations required for subsequent analysis. The algorithms are based on performing series-parallel calculations, which significantly increases the speed of various tasks. Big Data interests entrepreneurs and scientists concerned not only with high-quality but also with up-to-date interpretation of data, as well as with creating innovative tools for working with it. A huge amount of data is processed so that the end user obtains the results needed for further effective use. Big Data enables companies to expand their customer base and reach new target audiences, and helps them implement projects that will be in demand not only among current customers but will also attract new ones. Active implementation and subsequent use of Big Data address these problems. In this paper, we compare the main types of databases and analyze intrusion detection using the example of distributed information system technologies for processing Big Data. Timely detection of intrusions into data processing systems is necessary to preserve the confidentiality and integrity of data, as well as to correct errors and improve the protection of the data processing system.
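The series-parallel style of computation the abstract describes can be sketched as a map-reduce word count: independent map tasks run in parallel and a reduce pass merges their partial results. A minimal stdlib-only sketch, where the chunks and worker count are illustrative:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_chunk(chunk):
    """Map step: each worker counts words in its chunk independently."""
    return Counter(chunk.split())

def merge(acc, part):
    """Reduce step: fold a partial count into the accumulated result."""
    acc.update(part)
    return acc

def word_count(chunks, workers=4):
    # Map tasks run in parallel; the reduce pass merges the partials.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    return reduce(merge, partials, Counter())

counts = word_count(["big data big", "data processing", "big processing"])
print(dict(counts))  # {'big': 3, 'data': 2, 'processing': 2}
```

The same map/merge pair scales from threads on one machine to a distributed file system, since each map task only sees its own chunk.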
9

Scherr, A. L. "Distributed data processing." IBM Systems Journal 38, no. 2.3 (1999): 354–74. http://dx.doi.org/10.1147/sj.382.0354.

10

Wang, Xi, Taizheng Chen, Dongwei Li, and Shiqi Yu. "Processing Methods for Digital Image Data Based on the Geographic Information System." Complexity 2021 (June 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/2319314.

Abstract:
Digital image data processing is mainly to input digital image data into a computer to complete the conversion of a continuous spatially distributed image model into a discrete digital model so that the computer can identify, process, and store the processing process of digital image information. Geographic information system (GIS) is a computer system that integrates multiple forms of information expression, and it integrates functions such as collection, processing, transmission, storage, management, analysis, expression, and query retrieval, which can quickly discover the spatial distribution of things and their attributes and can express the results accurately and vividly in various intuitive forms. Therefore, on the basis of summarizing and analyzing previous research works, this paper expounded the research status and significance of processing methods for digital image data, elaborated the development background, current status, and future challenges of the GIS technology, introduced the methods and principles of permutation matrix algorithm and subimage averaging method, constructed the processing model for digital image data based on GIS, analyzed the data structure and its database establishment for digital image, proposed the processing methods for digital image data based on GIS, performed the enhancement processing and calculation classification of digital image data, and finally conducted a case analysis and its result discussion. The study results show that the proposed processing methods for digital image data based on GIS can perform analogue-to-digital conversion of continuous images, complete the steps of sampling, layering, and quantization, and then encode the obtained discrete digital signal into the computer to form an in-plane collection of pixels; this processing method can also organically combine spatial information and image data and identify, process, and store digital image data from both spatial and attribute aspects. 
The study results of this paper provide a reference for further research on the processing methods for digital image data based on GIS.
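The analogue-to-digital steps named in the abstract (sampling, quantization, encoding) can be sketched in a few lines. The signal and bit depth below are illustrative assumptions, not values from the paper:

```python
import math

def sample_and_quantize(signal, n_samples, bits, lo=-1.0, hi=1.0):
    """A/D conversion sketch: sample a continuous signal at n_samples
    uniform points, quantize each sample to one of 2**bits levels,
    and encode the level indices as integers."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    codes = []
    for k in range(n_samples):
        x = signal(k / n_samples)                 # sampling
        q = round((x - lo) / step)                # quantization
        codes.append(min(max(q, 0), levels - 1))  # clamp + integer encoding
    return codes

# One sine period sampled 8 times and quantized to 3 bits (8 grey levels)
print(sample_and_quantize(lambda t: math.sin(2 * math.pi * t), 8, 3))
```

For an image, the same quantization is applied per pixel, and the resulting integer codes form the in-plane collection of pixels the abstract refers to.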
11

Golubev, Alexandr, Peter Bogatencov, and Grigore Secrieru. "DICOM data processing optimization in medical information systems." Scalable Computing: Practice and Experience 19, no. 2 (May 10, 2018): 189–201. http://dx.doi.org/10.12694/scpe.v19i2.1399.

Abstract:
The problem of storing and visualizing medical images collected by various medical equipment has been relevant for every medical institution over the last 10 years. On the other hand, access to medical investigation datasets and solving the problem of personal patient data security is important for the scientific community and the institutions that require this data. The "DICOM Network" project was developed to solve these problems for different actors in the system, based on various customized roles. This article describes the problems and possible solutions for optimizing the storage of medical images and providing stable and secure access, based on a distributed warehouse for huge volumes of data with different levels of access.
12

Malkić, Jasmin, and Nermin Sarajlić. "INTERDISCIPLINARY APPLICATION OF ALGORITHMS FOR DATA MINING." Journal Human Research in Rehabilitation 3, no. 2 (September 2013): 6–9. http://dx.doi.org/10.21554/hrr.091303.

Abstract:
Interdisciplinary application of data mining is linked with the ability to receive and process the large amounts of data. Although even the first computers could help in executing the tasks that required accuracy and reliability atypical to the human way of information processing, only increasing the speed of computer processors and advances in computer science have introduced the possibility that computers can play a more active role in decision making. Applications of these features are found in medicine, where data mining is used in clinical trials to determine the factors that influence health, and examine the effectiveness of medical treatments. With its ability to detect patterns and similarities within the data, data mining can help determine the statistical significance, pointing to the complex combinations of factors that cause certain effect. Such approach opens the opportunities of deeper analysis than it is the case with reliance solely on statistics.
13

Widya, Moh Anshori Aris, and Nur Wakhidah. "PWA (Progressive Web Apps)-Based Mobile Service Application Development." NEWTON: Networking and Information Technology 1, no. 3 (February 16, 2022): 138–44. http://dx.doi.org/10.32764/newton.v1i3.2064.

Abstract:
In the rapid development of science and technology, people are encouraged to use computers. Computers are one of the human tools used for data processing, both in government agencies, education, health, private and other businesses. Currently information is needed, with the need for information, a data processing system using a computer is applied to make it easier for users to perform computerized data processing. The existence of Integrated Service Posts (Posyandu) in Indonesia is now almost evenly distributed to the village level and even RT / RW. This proves that the role and care of the community in health services is very important, and is not only the responsibility of the government.
14

Zhang, Longjun, Kun Liu, Ilyar Ilham, and Jiaxin Fan. "Application of Data Mining Technology Based on Data Center." Journal of Physics: Conference Series 2146, no. 1 (January 1, 2022): 012017. http://dx.doi.org/10.1088/1742-6596/2146/1/012017.

Abstract:
Data mining technology refers to the use of mathematics, statistics, computer science and other methods to process a large amount of information to obtain useful conclusions and provide valuable decisions for people. With the rapid development and popularization of the Internet era and the more and more extensive application of computers in various fields, data mining technology has become a hot research field in today’s society. Based on the data center, this paper studies the data mining technology. Firstly, this paper expounds the definition of data mining, and studies the process of data mining and the steps of processing data. Then, this paper also designs and studies the framework of data mining, and tests the performance of the algorithm. Finally, the test results show that data mining technology can well meet the target requirements.
15

Campagni, Renza, Donatella Merlini, and Maria Cecilia Verri. "Analysing Computer Science Courses over Time." Data 7, no. 2 (January 24, 2022): 14. http://dx.doi.org/10.3390/data7020014.

Abstract:
In this paper we consider courses of a Computer Science degree in an Italian university from the year 2011 up to 2020. For each course, we know the number of exams taken by students during a given calendar year and the corresponding average grade; we also know the average normalized value of the result obtained in the entrance test and the distribution of students according to the gender. By using classification and clustering techniques, we analyze different data sets obtained by pre-processing the original data with information about students and their exams, and highlight which courses show a significant deviation from the typical progression of the courses of the same teaching year, as time changes. Finally, we give heat maps showing the order in which exams were taken by graduated students. The paper shows a reproducible methodology that can be applied to any degree course with a similar organization, to identify courses that present critical issues over time. A strength of the work is to consider courses over time as variables of interest, instead of the more frequently used personal and academic data concerning students.
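Clustering of the kind used in the paper can be illustrated with Lloyd's k-means on one-dimensional data. The grade values below are made up for the example and the 1-D k-means variant is my own simplification, not the study's actual pipeline:

```python
def kmeans_1d(values, k, iters=20):
    """Lloyd's algorithm on scalar data: assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]  # spread seeds
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

grades = [18, 19, 20, 26, 27, 28]   # hypothetical average exam grades
print(kmeans_1d(grades, 2))  # [19.0, 27.0]
```

Courses whose yearly averages fall far from every cluster centroid are the kind of "significant deviation" the analysis looks for.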
16

Morze, Nataliia, and Tetiana Efymenko. "WHY SHOULD FUTURE COMPUTER SCIENCE TEACHERS STUDY COMPUTER DESIGN?" OPEN EDUCATIONAL E-ENVIRONMENT OF MODERN UNIVERSITY, no. 13 (2022): 74–88. http://dx.doi.org/10.28925/2414-0325.2022.136.

Abstract:
Teaching the discipline "Fundamentals of Computer Design" to students of Computer Science specialties of pedagogical universities is connected with modern educational trends arising from the requirements of the labor market. The paper substantiates the need to introduce this course in the educational process of training of pre-service Computer Science teachers, taking into account the professional teacher standard and the state standard in Computer Science. It is noted that the teaching of this course contributes to the formation of important interdisciplinary and subject professional competencies, including information and digital competence, which are necessary for both a modern specialist in the field of ICT and pre-service Computer Science teachers. The connection of this course with other disciplines that should be taught to pre-service Computer Science teachers according to the curriculum is presented. A survey of 1st year students majoring in "Informatics" was conducted to determine their knowledge and skills that they received before entering the university, their attitude to the use of various software used in the modern market of information and communication technologies for processing graphic data. The analysis of the respondents' answers showed what kind of graphic editors they use, which in turn allowed us to conclude that the conditions for pre-service teachers to achieve relevant competencies in computer design have changed.
17

Lv, Feng, and Hongmin Liu. "Wearable-Based Virtual Display Information Processing and Data Fusion Research." Mobile Information Systems 2022 (May 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/3644038.

Abstract:
This paper is combined with the existing acquisition module design to achieve the upper computer online information monitoring system. Real-time mapping technology is adopted to change the trend of data collection potential for real-time tracking and monitoring. The monitoring data can be predicted and analyzed by changing trend, at the same time combined with SQL2008 database technology, user login system, registration system, monitoring system, data query, and data storage system. The integration and other functions are improved, so that the system not only has the advantages of information management platform but also realizes the remote client base matching layer wireless information real-time monitoring function. Data fusion technology refers to the information processing technology that uses computer to automatically analyze and synthesize some observation information obtained in time and sequence under certain criteria, so as to complete the required decision-making and evaluation tasks. The intelligent wearable online information monitoring system designed in this paper realizes wireless sensor network, to some extent feedback and monitoring of underlying real information. Through the corresponding information processing and data fusion, the user can easily and clearly get product information. Based on the existing 80 sets of data, the experiment trains and extracts 320 feature vectors, which verify the effectiveness of the method.
18

Smith, Daniel L. "Managing for Quality Data Processing." Journal of Information Systems Management 3, no. 2 (January 1986): 40–42. http://dx.doi.org/10.1080/07399018608965242.

19

de Almeida, Ulisses Barres, Benno Bodmann, Paolo Giommi, and Carlos H. Brandt. "The Brazilian Science Data Center (BSDC)." International Journal of Modern Physics: Conference Series 45 (January 2017): 1760075. http://dx.doi.org/10.1142/s2010194517600758.

Abstract:
Astrophysics and Space Science are becoming increasingly characterised by what is now known as “big data”, the bottlenecks for progress partly shifting from data acquisition to “data mining”. Truth is that the amount and rate of data accumulation in many fields already surpasses the local capabilities for its processing and exploitation, and the efficient conversion of scientific data into knowledge is everywhere a challenge. The result is that, to a large extent, isolated data archives risk being progressively likened to “data graveyards”, where the information stored is not reused for scientific work. Responsible and efficient use of these large data-sets means democratising access and extracting the most science possible from it, which in turn signifies improving data accessibility and integration. Improving data processing capabilities is another important issue specific to researchers and computer scientists of each field. The project presented here wishes to exploit the enormous potential opened up by information technology at our age to advance a model for a science data center in astronomy which aims to expand data accessibility and integration to the largest possible extent and with the greatest efficiency for scientific and educational use. Greater access to data means more people producing and benefiting from information, whereas larger integration of related data from different origins means a greater research potential and increased scientific impact. The project of the BSDC is preoccupied, primarily, with providing tools and solutions for the Brazilian astronomical community. It nevertheless capitalizes on extensive international experience, and is developed in full cooperation with the ASI Science Data Center (ASDC), from the Italian Space Agency, granting it an essential ingredient of internationalisation. The BSDC is Virtual Observatory-complient and part of the “Open Universe”, a global initiative built under the auspices of the United Nations.
20

Neubert, Sebastian, André Geißler, Thomas Roddelkopf, Regina Stoll, Karl-Heinz Sandmann, Julius Neumann, and Kerstin Thurow. "Multi-Sensor-Fusion Approach for a Data-Science-Oriented Preventive Health Management System: Concept and Development of a Decentralized Data Collection Approach for Heterogeneous Data Sources." International Journal of Telemedicine and Applications 2019 (October 8, 2019): 1–18. http://dx.doi.org/10.1155/2019/9864246.

Abstract:
Investigations in preventive and occupational medicine are often based on the acquisition of data in the customer’s daily routine. This requires convenient measurement solutions including physiological, psychological, physical, and sometimes emotional parameters. In this paper, the introduction of a decentralized multi-sensor-fusion approach for a preventive health-management system is described. The aim is the provision of a flexible mobile data-collection platform, which can be used in many different health-care related applications. Different heterogeneous data sources can be integrated and measured data are prepared and transferred to a superordinated data-science-oriented cloud-solution. The presented novel approach focuses on the integration and fusion of different mobile data sources on a mobile data collection system (mDCS). This includes directly coupled wireless sensor devices, indirectly coupled devices offering the datasets via vendor-specific cloud solutions (as e.g., Fitbit, San Francisco, USA and Nokia, Espoo, Finland) and questionnaires to acquire subjective and objective parameters. The mDCS functions as a user-specific interface adapter and data concentrator decentralized from a data-science-oriented processing cloud. A low-level data fusion in the mDCS includes the synchronization of the data sources, the individual selection of required data sets and the execution of pre-processing procedures. Thus, the mDCS increases the availability of the processing cloud and in consequence also of the higher level data-fusion procedures. The developed system can be easily adapted to changing health-care applications by using different sensor combinations. The complex processing for data analysis can be supported and intervention measures can be provided.
21

Gu, Lin, Deze Zeng, Peng Li, and Song Guo. "Cost Minimization for Big Data Processing in Geo-Distributed Data Centers." IEEE Transactions on Emerging Topics in Computing 2, no. 3 (September 2014): 314–23. http://dx.doi.org/10.1109/tetc.2014.2310456.

22

Grelck, Clemens, and Cédric Blom. "Resource-Aware Data Parallel Array Processing." International Journal of Parallel Programming 48, no. 4 (June 9, 2020): 652–74. http://dx.doi.org/10.1007/s10766-020-00664-0.

23

Yuan, Chunyan. "Computer Information Processing System Based on RFID Internet-of-Things Encryption Technology." Scientific Programming 2022 (September 14, 2022): 1–13. http://dx.doi.org/10.1155/2022/4588493.

Abstract:
With the increasing development of information science and technology and the vigorous use and promotion of new technologies, profound changes have taken place in all aspects of our daily life. With this huge change, the IoT industry was born. And it analyzes and processes the large amount of data generated between them, and finally, it helps the development of the economy. The RFID system studied in this paper is the radio frequency identification system, which is an automatic signal identification system. The relationship between the RFID system and the Internet of Things is that the former will obtain a large amount of Internet of Things information data by identifying the Internet of Things. However, it is difficult to guarantee the analysis and processing of data and the security of data in the RFID system. This paper aims to study the effective processing and security guarantee of a large amount of data obtained by the RFID system after processing the identification of the Internet of Things. It is expected to overcome the problems of conventional related art. This paper proposes the encryption technology for the Internet of Things RFID system, as well as the corresponding algorithm, and establishes a processing system for information data. The experimental results of this paper show that the cryptographic mechanism run by the algorithm PECC has better security performance compared with other cryptographic mechanisms, and its computational complexity can be reduced by 28.35%.
24

Chen, Meixi. "Accounting Data Encryption Processing Based on Data Encryption Standard Algorithm." Complexity 2021 (June 4, 2021): 1–12. http://dx.doi.org/10.1155/2021/7212688.

Abstract:
With the application of computer and network technology in the field of accounting, the development of accounting informationization is an inevitable trend, and the construction of accounting statement data into the data warehouse will be the basis of intelligent decision-making. The complexity of industry accounting statements and the arbitrariness and diversity of users’ needs for obtaining information using statements limit the development, popularization, and application of industry accounting statements. As a block encryption algorithm, the Data Encryption Standard (DES) algorithm uses 64-bit packet data for encryption and decryption. Each eighth bit of the key is used as a parity bit; that is, the actual key length is 56 bits. Encryption and decryption use the same algorithm structure, but the order in which the subkeys are used is reversed. Under the control of the subkey, inputting 64-bit plaintext can produce 64-bit ciphertext output; otherwise, inputting 64-bit ciphertext can produce 64-bit plaintext output. The confidentiality of the DES algorithm depends on the key, and only a very small number of keys are considered weak keys, which can be easily avoided in practical applications. The 3DES algorithm is a cascade of the DES algorithm, and its encryption process is based on the DES algorithm principle. This article explains the encryption process of the DES algorithm and introduces the composition of the 3DES algorithm. The experimental results show that the 3DES encryption algorithm still has a better encryption effect and “avalanche effect” than before the improvement. In addition, for the 3DES algorithm, its encryption efficiency has not been greatly affected. The 3DES encryption algorithm achieves one encryption process at a time to some extent, can effectively resist exhaustive search attacks, and enhance the security of the DES algorithm.
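The 56-bit effective key and per-byte parity the abstract describes can be made concrete: 7 bytes of key material expand into the 8-byte DES key format, with the low bit of each byte set so the byte has odd parity. A stdlib-only sketch of that key layout (the sample key material is arbitrary; this is not a DES implementation):

```python
def to_des_key(material7):
    """Expand 7 bytes (56 bits) of key material into the 8-byte DES
    key format: each key byte carries 7 material bits, and its low
    bit is set so the byte has an odd number of set bits."""
    bits = "".join(f"{b:08b}" for b in material7)  # 56 material bits
    key = bytearray()
    for i in range(0, 56, 7):
        seven = bits[i:i + 7]
        parity = "0" if seven.count("1") % 2 else "1"  # force odd parity
        key.append(int(seven + parity, 2))
    return bytes(key)

key = to_des_key(b"56bitsK")           # arbitrary 7-byte key material
assert all(bin(b).count("1") % 2 == 1 for b in key)
print(len(key), len(key) * 7)  # 8 bytes, 56 usable key bits
```

This is why only the 56 material bits contribute to security: the parity bits are fully determined, and 3DES simply chains three such keyed DES passes (encrypt, decrypt, encrypt).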
25

Vahini Ezhilraman, S., and Sujatha Srinivasan. "State of the art in image processing & big data analytics: issues and challenges." International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 195. http://dx.doi.org/10.14419/ijet.v7i2.33.13885.

Abstract:
Image processing, in the contemporary domain, is now emerging as a novel and an innovative space in computing research and applications. Today, the discipline of “computer science” may be termed as “image science”, why because in every aspect of computer application, either science or humanities or management, image processing plays a vital role in varied ways. It is broadly now used in all the industries, organizations, administrative divisions; various social organizations, economic/business institutions, healthcare, defense and so on. Image processing takes images as input and image processing techniques are used to process the images and the output is modified images, video, or collection of text, or features of the images. The resultant output by most image processing techniques creates a huge amount of data which is categorized as Big-data. In this technique, bulky information is processed and stored as either structured or unstructured data as a result of processing images through computing techniques. In turn, Big Data analytics for mining knowledge from data created through image processing techniques has a huge potential in sectors like education, government organizations, healthcare institutions, manufacturing units, finance and banking, centers of retail business. This paper focuses on highlighting the recent innovations made in the field of image processing and Big Data analytics. The integration and interaction of the two broad fields of image processing and Big Data have great potential in various areas. Research challenges identified in the integration and interaction of these two broad fields are discussed and some possible research directions are suggested.
26

Nguyen Thai, B., and A. Olasz. "RASTER DATA PARTITIONING FOR SUPPORTING DISTRIBUTED GIS PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 20, 2015): 543–51. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-543-2015.

Abstract:
In the geospatial sector big data concept also has already impact. Several studies facing originally computer science techniques applied in GIS processing of huge amount of geospatial data. In other research studies geospatial data is considered as it were always been big data (Lee and Kang, 2015). Nevertheless, we can prove data acquisition methods have been improved substantially not only the amount, but the resolution of raw data in spectral, spatial and temporal aspects as well. A significant portion of big data is geospatial data, and the size of such data is growing rapidly at least by 20% every year (Dasgupta, 2013). The produced increasing volume of raw data, in different format, representation and purpose the wealth of information derived from this data sets represents only valuable results. However, the computing capability and processing speed rather tackle with limitations, even if semi-automatic or automatic procedures are aimed on complex geospatial data (Kristóf et al., 2014). In late times, distributed computing has reached many interdisciplinary areas of computer science inclusive of remote sensing and geographic information processing approaches. Cloud computing even more requires appropriate processing algorithms to be distributed and handle geospatial big data. Map-Reduce programming model and distributed file systems have proven their capabilities to process non GIS big data. But sometimes it’s inconvenient or inefficient to rewrite existing algorithms to Map-Reduce programming model, also GIS data can not be partitioned as text-based data by line or by bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting or with only minor modifications. This paper focuses on technical overview of currently available distributed computing environments, as well as GIS data (raster data) partitioning, distribution and distributed processing of GIS algorithms. 
A proof of concept implementation have been made for raster data partitioning, distribution and processing. The first results on performance have been compared against commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods heavily depend on application areas, therefore we may consider data partitioning as a preprocessing step before applying processing services on data. As a proof of concept we have implemented a simple tile-based partitioning method splitting an image into smaller grids (NxM tiles) and comparing the processing time to existing methods by NDVI calculation. The concept is demonstrated using own development open source processing framework.
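The tile-based partitioning and NDVI step described above can be sketched in pure Python. The per-pixel formula is the standard NDVI definition, but the function names and data layout are illustrative assumptions, not the paper's actual framework:

```python
def split_into_tiles(raster, n, m):
    """Split a 2D raster (list of rows) into an n x m grid of tiles."""
    rows, cols = len(raster), len(raster[0])
    tile_h, tile_w = rows // n, cols // m
    tiles = []
    for i in range(n):
        for j in range(m):
            tile = [row[j * tile_w:(j + 1) * tile_w]
                    for row in raster[i * tile_h:(i + 1) * tile_h]]
            tiles.append(tile)
    return tiles

def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - RED) / (NIR + RED), guarding zero sums."""
    return [[(n_px - r_px) / (n_px + r_px) if (n_px + r_px) else 0.0
             for n_px, r_px in zip(n_row, r_row)]
            for n_row, r_row in zip(nir, red)]

# Each (NIR tile, RED tile) pair can then be processed independently
# on a worker node and the results stitched back together.
```

Because the NDVI of a pixel depends only on that pixel, the tiles are embarrassingly parallel, which is what makes this partitioning attractive for distribution.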
27

Dill-McFarland, Kimberly A., Stephan G. König, Florent Mazel, David C. Oliver, Lisa M. McEwen, Kris Y. Hong, and Steven J. Hallam. "An integrated, modular approach to data science education in microbiology." PLOS Computational Biology 17, no. 2 (February 25, 2021): e1008661. http://dx.doi.org/10.1371/journal.pcbi.1008661.

Full text of the source
Abstract:
We live in an increasingly data-driven world, where high-throughput sequencing and mass spectrometry platforms are transforming biology into an information science. This has shifted major challenges in biological research from data generation and processing to interpretation and knowledge translation. However, postsecondary training in bioinformatics, or more generally data science for life scientists, lags behind current demand. In particular, development of accessible, undergraduate data science curricula has the potential to improve research and learning outcomes as well as better prepare students in the life sciences to thrive in public and private sector careers. Here, we describe the Experiential Data science for Undergraduate Cross-Disciplinary Education (EDUCE) initiative, which aims to progressively build data science competency across several years of integrated practice. Through EDUCE, students complete data science modules integrated into required and elective courses augmented with coordinated cocurricular activities. The EDUCE initiative draws on a community of practice consisting of teaching assistants (TAs), postdocs, instructors, and research faculty from multiple disciplines to overcome several reported barriers to data science for life scientists, including instructor capacity, student prior knowledge, and relevance to discipline-specific problems. Preliminary survey results indicate that even a single module improves student self-reported interest and/or experience in bioinformatics and computer science. Thus, EDUCE provides a flexible and extensible active learning framework for integration of data science curriculum into undergraduate courses and programs across the life sciences.
28

Thessen, Anne E., Hong Cui, and Dmitry Mozzherin. "Applications of Natural Language Processing in Biodiversity Science." Advances in Bioinformatics 2012 (May 22, 2012): 1–17. http://dx.doi.org/10.1155/2012/391574.

Full text of the source
Abstract:
Centuries of biological knowledge are contained in the massive body of scientific literature, written for human readability but too big for any one person to consume. Large-scale mining of information from the literature is necessary if biology is to transform into a data-driven science. A computer can handle the volume but cannot make sense of the language. This paper reviews and discusses the use of natural language processing (NLP) and machine-learning algorithms to extract information from systematic literature. NLP algorithms have been used for decades, but require special development for application in the biological realm due to the special nature of the language. Many tools exist for biological information extraction (cellular processes, taxonomic names, and morphological characters), but none have been applied life-wide and most still require testing and development. Progress has been made in developing algorithms for automated annotation of taxonomic text, identification of taxonomic names in text, and extraction of morphological character information from taxonomic descriptions. This manuscript will briefly discuss the key steps in applying information extraction tools to enhance biodiversity science.
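As a toy illustration of one extraction subtask mentioned above, spotting candidate taxonomic names in text, a naive regular-expression heuristic can be sketched in Python. Real tools combine curated dictionaries and machine learning; this pattern is only a rough approximation and will both miss names and over-match:

```python
import re

# Candidate binomial: capitalized genus (or abbreviated "E.") followed
# by a lowercase species epithet. Heuristic only: ordinary capitalized
# phrases can match, and many valid names will not.
BINOMIAL = re.compile(r"\b([A-Z][a-z]+|[A-Z]\.)\s([a-z]{3,})\b")

def candidate_names(text):
    """Return candidate Latin binomial names found in free text."""
    return [" ".join(m) for m in BINOMIAL.findall(text)]
```

A production pipeline would verify candidates against a taxonomic name resolver rather than trust the regular expression alone.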
29

Qiu, Yongxiao, Guanghui Du, and Song Chai. "A Novel Algorithm for Distributed Data Stream Using Big Data Classification Model." International Journal of Information Technology and Web Engineering 15, no. 4 (October 2020): 1–17. http://dx.doi.org/10.4018/ijitwe.2020100101.

Full text of the source
Abstract:
To solve the problem of real-time detection of power grid equipment anomalies, this paper proposes a data stream classification model based on distributed processing. To realize distributed processing of the power grid data stream, a local node mining method and a global mining mode based on uneven data stream classification are designed. A data stream classification model based on distributed processing is constructed, the corresponding data sequence is selected and formatted abstractly, and the local node mining method and global mining mode under this model are designed. In the local node miner, a block-by-block mining strategy is implemented by acquiring the current data blocks. At the same time, the expression and real-time maintenance of local mining patterns are completed with a clustering algorithm, which improves the transmission rate of information between nodes and ensures the timeliness of the overall classification algorithm.
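The local-node mining idea, maintaining cluster summaries incrementally as blocks of the stream arrive, can be approximated by a sequential k-means update. This sketch is a generic simplification under assumed names, not the paper's algorithm:

```python
class OnlineCentroids:
    """Incrementally maintain cluster centroids over a data stream.

    A simplified stand-in for a local-node mining step: each arriving
    point is assigned to the nearest centroid, which is then nudged
    toward it (sequential k-means update with a shrinking step size).
    """

    def __init__(self, centroids):
        self.centroids = [list(c) for c in centroids]
        self.counts = [1] * len(centroids)

    def update(self, point):
        # Nearest centroid by squared Euclidean distance.
        dists = [sum((p - c) ** 2 for p, c in zip(point, cen))
                 for cen in self.centroids]
        k = dists.index(min(dists))
        self.counts[k] += 1
        eta = 1.0 / self.counts[k]  # shrinking learning rate
        self.centroids[k] = [c + eta * (p - c)
                             for c, p in zip(self.centroids[k], point)]
        return k
```

In a distributed setting, each node would keep its own `OnlineCentroids` and periodically ship the centroids and counts to a global miner for merging.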
30

Mihai, Dana, and Mihai Mocanu. "Processing GIS Data Using Decision Trees and an Inductive Learning Method." International Journal of Machine Learning and Computing 11, no. 6 (November 2021): 393–98. http://dx.doi.org/10.18178/ijmlc.2021.11.6.1067.

Full text of the source
31

Tjurmin, A. V. "Program package for processing of autoradiography data." Computer Methods and Programs in Biomedicine 29, no. 1 (May 1989): 71–72. http://dx.doi.org/10.1016/0169-2607(89)90092-8.

Full text of the source
32

Grall, Y., J. F. Legargasson, F. Rigaudière, and M. Pizzato. "Data processing software for electrophysiological visual exploration." Computer Methods and Programs in Biomedicine 28, no. 2 (February 1989): 101–9. http://dx.doi.org/10.1016/0169-2607(89)90166-1.

Full text of the source
33

Ng, S. L., T. K. Ng, and T. B. Ng. "Cubic regression analysis for radioimmunoassay data processing." Computer Methods and Programs in Biomedicine 34, no. 4 (April 1991): 273–74. http://dx.doi.org/10.1016/0169-2607(91)90111-6.

Full text of the source
34

Geetanjali, R., Charles Galaxyaan, and M. Niranjanamurthy. "How to Overcoming Cyber Security Challenges Using Data Science." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4116–21. http://dx.doi.org/10.1166/jctn.2020.9029.

Full text of the source
Abstract:
Data science, in its most essential form, is wholly about understanding: it encompasses reviewing, processing, and extracting valued insights from a set of information. Although the term and the practice have been around for quite some time, data science was principally a subsection of computer science; it has now established itself as a self-determining field. One of its modern applications is cyber security, an emerging and saturated field that is omnipresent. It may sound strange to study data science with the expectation of improving cyber security, but in reality it makes a lot of sense. This research paper evaluates the progress in using data science for cyber security, the categories of cyber security threats, and the challenges they pose. The authors analyze how data science concepts are being used to solve these challenges and to detect and prevent attacks in real time.
35

Tianxing, Man, Ildar Raisovich Baimuratov, and Natalia Alexandrovna Zhukova. "A Knowledge-Oriented Recommendation System for Machine Learning Algorithm Finding and Data Processing." International Journal of Embedded and Real-Time Communication Systems 10, no. 4 (October 2019): 20–38. http://dx.doi.org/10.4018/ijertcs.2019100102.

Full text of the source
Abstract:
With the development of big data, data analysis technology has advanced actively and is now used in various subject fields. More and more researchers outside computer science use machine learning algorithms in their work. Unfortunately, datasets can be messy and knowledge cannot be extracted from them directly, which is why they need preprocessing. Because of the diversity of algorithms, it is difficult for researchers to find the most suitable one; most choose algorithms by intuition, and the result is often unsatisfactory. Therefore, this article proposes a recommendation system for data processing. The system consists of an ontology subsystem and an estimation subsystem: ontology technology is used to represent a machine learning algorithm taxonomy, and information-theoretic criteria are used to form recommendations. This system helps users apply data processing algorithms without specific knowledge of the data science field.
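A drastically simplified, hypothetical stand-in for such a recommender can be written as a rule table keyed on dataset properties. The real system uses an OWL ontology and information-theoretic estimation; the property and algorithm names below are invented for illustration:

```python
def recommend(dataset_profile, rules):
    """Tiny rule-based recommender: each rule maps required dataset
    properties to an algorithm name, and the first rule whose
    conditions all hold wins. Falls back to a default otherwise."""
    for conditions, algorithm in rules:
        if all(dataset_profile.get(k) == v for k, v in conditions.items()):
            return algorithm
    return "baseline"

# Hypothetical rule table: labeled categorical data -> classification,
# unlabeled data -> clustering.
RULES = [
    ({"labeled": True, "target": "categorical"}, "decision_tree"),
    ({"labeled": False}, "k_means"),
]
```

An ontology-backed system would infer these conditions from class hierarchies instead of a flat list, but the matching step is conceptually the same.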
36

Erraji, Abdelhak, Abderrahim Maizate, Mohamed Ouzzif, and Zouhair Ibn BATOUTA. "Migrating Data Semantic from Relational Database System To NOSQL Systems to Improve Data Quality for Big Data Analytics System." ECS Transactions 107, no. 1 (April 24, 2022): 19495–503. http://dx.doi.org/10.1149/10701.19495ecst.

Full text of the source
Abstract:
Today, in the computer science world, data has become an essential hub for information processing. Data volumes keep growing exponentially, especially in storage and its technologies. The term big data, as used for massive volumes of data, covers important techniques for processing and analyzing data and deriving useful information for decision making. NOSQL databases have become a modern and indispensable technology for providing the scalability needed to support large data. This is why organizations face the ongoing challenge of transforming their existing databases into NOSQL databases while considering the heterogeneity and complexity of relational data. In this paper, we propose an approach for migrating a relational database to a NOSQL one. The method has two phases: the first transforms the relational database into a NOSQL database, and the second enhances and improves data quality with cleanup processes to prepare the data for big data analytics systems.
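The first migration phase, turning joined relational rows into self-contained documents, can be sketched as follows. The schema (orders referencing customers by id) is a hypothetical example, not the paper's case study:

```python
def rows_to_documents(orders, customers_by_id):
    """Denormalize relational rows into NoSQL-style documents: the
    foreign-key reference is replaced by the embedded customer record,
    a common pattern when migrating to a document store."""
    docs = []
    for order in orders:
        doc = dict(order)  # copy, leaving the source row intact
        doc["customer"] = customers_by_id[doc.pop("customer_id")]
        docs.append(doc)
    return docs
```

Embedding trades storage duplication for read locality; a real migration tool would also have to decide when to keep references instead, which is part of what makes the semantic mapping hard.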
37

Papademetriou, Rallis C. "Data processing using information theory functionals." Kybernetes 27, no. 3 (April 1998): 264–72. http://dx.doi.org/10.1108/03684929810209478.

Full text of the source
38

Oba, Nobuyuki, Tadao Nakamura, and Yoshiharu Shigei. "Signal processing on a parallel pipeline-structured data-flow computer system." Systems and Computers in Japan 17, no. 4 (1986): 9–16. http://dx.doi.org/10.1002/scj.4690170402.

Full text of the source
39

Caruccio, Loredana, Domenico Desiato, Giuseppe Polese, and Genoveffa Tortora. "GDPR Compliant Information Confidentiality Preservation in Big Data Processing." IEEE Access 8 (2020): 205034–50. http://dx.doi.org/10.1109/access.2020.3036916.

Full text of the source
40

Li, Fugui, and Ashutosh Sharma. "Missing Data Filling Algorithm for Big Data-Based Map-Reduce Technology." International Journal of e-Collaboration 18, no. 2 (March 1, 2022): 1–11. http://dx.doi.org/10.4018/ijec.304036.

Full text of the source
Abstract:
In big data, the large number of missing values poses a serious problem for computing correct decisions. This problem seriously affects the quality of information queries, distorts data mining and analysis, and misleads decisions. Therefore, to handle the missing values in a real database, we pre-populate the missing data and fill in classification attributes based on probabilistic reasoning. The reasoning process is carried out in a Bayesian network to parallelize big data processing, and the proposed algorithm is presented in the MapReduce framework. The experimental results show that the Bayesian network construction method and probabilistic inference are effective for classification data processing, as is the parallelization of the algorithm in Hadoop.
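A minimal version of the probabilistic fill-in idea, conditioning the missing categorical attribute on a single other attribute, can be sketched in Python. The paper itself learns a full Bayesian network and runs the inference in MapReduce; this one-edge simplification only illustrates the principle:

```python
from collections import Counter, defaultdict

def fill_missing(records, target, given):
    """Fill missing categorical `target` values with the most probable
    value conditioned on attribute `given` (a one-edge Bayesian
    network); fall back to the unconditional mode when the `given`
    value was never observed with a known target."""
    cond = defaultdict(Counter)
    for r in records:
        if r[target] is not None:
            cond[r[given]][r[target]] += 1
    overall = Counter(r[target] for r in records if r[target] is not None)
    for r in records:
        if r[target] is None:
            table = cond.get(r[given]) or overall
            r[target] = table.most_common(1)[0][0]
    return records
```

In a MapReduce setting, the counting pass maps naturally onto mappers (emit `(given, target)` pairs) and reducers (aggregate the counters), with the fill-in done in a second pass.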
41

Wu, Chunqiong, Bingwen Yan, Rongrui Yu, Zhangshu Huang, Baoqin Yu, Yanliang Yu, Na Chen, and Xiukao Zhou. "Improvement of K-Means Algorithm for Accelerated Big Data Clustering." International Journal of Information Technologies and Systems Approach 14, no. 2 (July 2021): 99–119. http://dx.doi.org/10.4018/ijitsa.2021070107.

Full text of the source
Abstract:
With the rapid development of computing, especially in recent years as "Internet +", cloud platforms, and similar technologies have been adopted across industries, data of all types has grown enormously. These large amounts of data often contain very rich information, and traditional data retrieval, analysis, and management methods can no longer meet our needs for data acquisition and management. Data mining technology has therefore become one of the answers to how to quickly obtain useful information in today's society. Effectively clustering large-scale data is one of the important research directions in data mining, and the k-means algorithm is the simplest and most basic method for it. The k-means algorithm has the advantages of simple operation, speed, and good scalability when processing big data, but it also often exposes fatal defects in data processing. In view of some defects of the traditional k-means algorithm, this paper improves and analyzes it from two aspects.
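One classic remedy for the kind of k-means defect discussed above, sensitivity to initialization, is k-means++ seeding. The sketch below is a generic illustration of that technique and may differ from the specific improvements proposed in the paper:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """k-means++ seeding: after the first random seed, each further
    seed is drawn with probability proportional to its squared
    distance from the nearest already-chosen seed, spreading the
    initial centroids apart."""
    rng = random.Random(seed)
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # Squared distance from each point to its nearest chosen seed.
        d2 = [min(sum((p - s) ** 2 for p, s in zip(pt, sd))
                  for sd in seeds)
              for pt in points]
        total = sum(d2)
        r, acc = rng.random() * total, 0.0
        for pt, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(pt)
                break
    return seeds
```

Better seeds typically mean fewer Lloyd iterations to convergence, which matters most at big-data scale.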
42

Taskin, Zehra, and Umut Al. "Natural language processing applications in library and information science." Online Information Review 43, no. 4 (August 12, 2019): 676–90. http://dx.doi.org/10.1108/oir-07-2018-0217.

Full text of the source
Abstract:
Purpose With the recent developments in information technologies, natural language processing (NLP) practices have made tasks in many areas easier and more practical. Nowadays, especially as big data are used in most research, NLP provides fast and easy methods for processing these data. The purpose of this paper is to identify subfields of library and information science (LIS) where NLP can be used and to provide a guide, based on bibliometric and social network analyses, for researchers who intend to study this subject. Design/methodology/approach Within the scope of this study, 6,607 publications in the field of LIS that involve NLP methods are examined and visualized by social network analysis methods. Findings After evaluating the obtained results, the subject categories of the publications, the keywords frequently used in them, and the relationships between these words are revealed. Finally, the core journals and articles are classified thematically for researchers working in the field of LIS and planning to apply NLP in their research. Originality/value The results of this paper draw a general framework for the LIS field and guide researchers on new techniques that may be useful in their work.
43

Ràfols, Pere, Bram Heijs, Esteban del Castillo, Oscar Yanes, Liam A. McDonnell, Jesús Brezmes, Iara Pérez-Taboada, Mario Vallejo, María García-Altares, and Xavier Correig. "rMSIproc: an R package for mass spectrometry imaging data processing." Bioinformatics 36, no. 11 (February 28, 2020): 3618–19. http://dx.doi.org/10.1093/bioinformatics/btaa142.

Full text of the source
Abstract:
Summary: Mass spectrometry imaging (MSI) can reveal biochemical information directly from a tissue section. MSI generates a large quantity of complex spectral data that is still challenging to translate into relevant biochemical information. Here, we present rMSIproc, an open-source R package that implements a full data processing workflow for MSI experiments performed using TOF or FT-based mass spectrometers. The package provides a novel strategy for spectral alignment and recalibration, which allows multiple datasets to be processed simultaneously. This enables confident statistical analysis with multiple datasets from one or several experiments. rMSIproc is designed to work with files larger than the computer's memory capacity, and its algorithms are implemented using a multi-threading strategy. rMSIproc is a powerful tool that takes full advantage of modern computer systems to develop the full potential of MSI. Availability and implementation: rMSIproc is freely available at https://github.com/prafols/rMSIproc. Contact: pere.rafols@urv.cat. Supplementary information: Supplementary data are available at Bioinformatics online.
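The general spectral-processing step can be illustrated, very roughly, by fixed-width m/z binning. Note that rMSIproc's actual alignment and recalibration algorithms are far more sophisticated than this stand-in, which is only meant to show the shape of the problem:

```python
def bin_spectrum(mz, intensity, bin_width=0.1):
    """Sum intensities whose m/z values fall into the same fixed-width
    bin, returning {bin_index: total_intensity}. Multiplying a bin
    index by bin_width recovers the approximate m/z of the bin."""
    bins = {}
    for m, i in zip(mz, intensity):
        key = round(m / bin_width)  # integer bin index
        bins[key] = bins.get(key, 0.0) + i
    return dict(sorted(bins.items()))
```

Binning spectra onto a common axis is what makes pixels (and datasets) comparable; proper alignment additionally corrects per-spectrum mass drift before any such pooling.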
44

Ren, Yizhong. "Data Science Analysis Method Design via Big Data Technology and Attention Neural Network." Mobile Information Systems 2022 (October 11, 2022): 1–8. http://dx.doi.org/10.1155/2022/9915481.

Full text of the source
Abstract:
Because of the rapid expansion of big data technology, time series data is on the rise. These time series data contain a lot of hidden information, and mining and evaluating that hidden information is very important in finance, medical care, and transportation. Time series forecasting is a data science application, yet present forecasting models do not fully account for the peculiarities of time series data. Traditional machine learning algorithms extract data features through artificially designed rules, while deep learning learns abstract representations of data through multiple processing layers; this not only saves the step of manually extracting features but also greatly improves the model's generalization performance. Therefore, this work uses big data technology to collect the corresponding time series data and then applies deep learning to the problem of time series data prediction, proposing a time series data prediction analysis network (TSDPANet). First, the work improves the traditional Inception module and proposes a feature extraction module suitable for 2D time series data, addressing the inefficiency of applying 2D convolution to time series. Second, it proposes a feature attention mechanism for time series features: the feature attention module assigns different weights to features according to their importance, which can effectively strengthen important features and weaken the rest. Third, the work conducts multi-faceted experiments on the proposed method.
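The feature attention idea, softmax weights that emphasize important features and damp the rest, can be sketched as follows. In TSDPANet the scores come from a trained sub-network; here they are supplied directly for illustration:

```python
import math

def feature_attention(features, scores):
    """Turn per-feature importance scores into softmax weights and
    reweight the feature values with them, so high-scoring features
    are emphasized and low-scoring ones damped."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [f * w for f, w in zip(features, weights)], weights
```

Because the weights sum to one, the module rescales rather than inflates the representation, which keeps downstream layers' input statistics stable.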
45

Borodkin, Leonid, Vladimir Vladimirov, and Irina Markovna Garskova. "International Conference "Historical Research in the Context of Data Science: Information Resources, Analytical Methods and Digital Technologies"." Историческая информатика, no. 4 (April 2020): 250–64. http://dx.doi.org/10.7256/2585-7797.2020.4.34747.

Full text of the source
Abstract:
The word "data" has recently become one of the key words in the semantic field of modern science. This happened due to a sharp increase in information flows in the economy and social sphere and the ongoing breakthrough in the development of methods and technologies for processing and analyzing data in the context of large-scale digitalization and the need to work with big data. This has led to the rapid development of data science. Historical science has been affected by these processes as well. The article discusses the course and results of the 17th international conference of the interregional association "History and Computer", held under the title "Historical Research in the Context of Data Science: Information Resources, Analytical Methods and Digital Technologies". Researchers from 7 countries took part in the conference, which was held online. The article describes the structure of the conference in detail and the most interesting talks of its participants. There were two plenary meetings, two round tables, and 9 sections. The conference results show that historical computer science has entered a new stage of its development and has ceased to be perceived as a "niche" area of historical science. New researchers are being drawn into its development, the geography of its scientific centers is expanding, and their studies touch upon the most important issues of Russian and foreign history.
46

Douthit, Brian J., Rachel L. Walden, Kenrick Cato, Cynthia P. Coviak, Christopher Cruz, Fabio D'Agostino, Thompson Forbes, et al. "Data Science Trends Relevant to Nursing Practice: A Rapid Review of the 2020 Literature." Applied Clinical Informatics 13, no. 01 (January 2022): 161–79. http://dx.doi.org/10.1055/s-0041-1742218.

Full text of the source
Abstract:
Abstract Background The term “data science” encompasses several methods, many of which are considered cutting edge and are being used to influence care processes across the world. Nursing is an applied science and a key discipline in health care systems in both clinical and administrative areas, making the profession increasingly influenced by the latest advances in data science. The greater informatics community should be aware of current trends regarding the intersection of nursing and data science, as developments in nursing practice have cross-professional implications. Objectives This study aimed to summarize the latest (calendar year 2020) research and applications of nursing-relevant patient outcomes and clinical processes in the data science literature. Methods We conducted a rapid review of the literature to identify relevant research published during the year 2020. We explored the following 16 topics: (1) artificial intelligence/machine learning credibility and acceptance, (2) burnout, (3) complex care (outpatient), (4) emergency department visits, (5) falls, (6) health care–acquired infections, (7) health care utilization and costs, (8) hospitalization, (9) in-hospital mortality, (10) length of stay, (11) pain, (12) patient safety, (13) pressure injuries, (14) readmissions, (15) staffing, and (16) unit culture. Results Of 16,589 articles, 244 were included in the review. All topics were represented by literature published in 2020, ranging from 1 article to 59 articles. Numerous contemporary data science methods were represented in the literature including the use of machine learning, neural networks, and natural language processing. Conclusion This review provides an overview of the data science trends that were relevant to nursing practice in 2020. Examinations of such literature are important to monitor the status of data science's influence in nursing practice.
47

Sun, Hujun. "Big Data Image Processing Based on Coefficient 3D Reconstruction Model." Mobile Information Systems 2022 (April 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/2301795.

Full text of the source
Abstract:
In the history of human society, information has been an indispensable part of development, and extracting useful data from huge amounts of data can effectively solve some real-life problems. At the same time, with the continuous improvement of modern technology and computer hardware, the demands placed on the information disciplines are getting higher and higher. Human research is gradually moving toward multidisciplinary exploration and the deepening of related work, and the related information disciplines are being studied by more scholars and experts, who put forward more demanding theories and methods. As things constantly change and update in the course of social development, data mining technology attracts more and more attention. This article first describes the basic theory of 3D reconstruction. It then analyzes big data platform technology, covering the big data platform architecture, the Hadoop distributed architecture, and the HBase non-relational database. Finally, it studies feature extraction from video content and uses it to design a big data image processing platform for 3D reconstruction models.
48

Paul, Joseph S., M. R. S. Reddy, and V. Jagadeesh Kumar. "Data processing of stress ECGs using discrete cosine transform." Computers in Biology and Medicine 28, no. 6 (November 1998): 639–58. http://dx.doi.org/10.1016/s0010-4825(98)00042-0.

Full text of the source
49

Zelichenok, I. "VISUALIZATION AND PROCESSING OF INFORMATION SECURITY EVENTS BASED ON CICIDS DATA 17." Telecom IT 9, no. 4 (December 30, 2021): 49–55. http://dx.doi.org/10.31854/2307-1303-2021-9-4-49-55.

Full text of the source
Abstract:
At present, attacks on computer networks continue to develop at a speed that outstrips the ability of information security specialists to create new attack signatures. This article illustrates an approach to preprocessing raw data and visualizing information security events on a real dataset. It shows how preprocessing and primary knowledge extraction can be used when designing machine learning models for intrusion detection systems. A distinctive feature of the work is that the highly relevant CICIDS17 set was taken as the studied dataset, whereas the traditionally popular sets such as DARPA2000 and KDD-99 are more than 20 years old. The article also describes the criteria and characteristics of the set.
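A toy version of the kind of raw-data preprocessing described above, for a CICIDS2017-style flow CSV, might look like this. The column and label names are assumptions about the dataset layout, not guaranteed field names:

```python
import csv
import io

def preprocess(csv_text, label_col="Label"):
    """Clean a flow CSV: strip stray whitespace from header names
    (CICIDS2017 headers are known for it), drop records containing
    missing, NaN, or Infinity values, and encode string labels as
    integers for downstream classifiers."""
    reader = csv.DictReader(io.StringIO(csv_text))
    reader.fieldnames = [f.strip() for f in reader.fieldnames]
    labels, rows = {}, []
    for row in reader:
        if any(v in ("", "NaN", "Infinity") for v in row.values()):
            continue  # unusable flow record
        y = labels.setdefault(row[label_col], len(labels))
        feats = [float(v) for k, v in row.items() if k != label_col]
        rows.append((feats, y))
    return rows, labels
```

The resulting `(features, label)` pairs can be fed directly into a model, and the `labels` map retained to decode predictions back to attack names.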
50

Qi, Xuanye. "Computer Vision-Based Medical Cloud Data System for Back Muscle Image Detection." Computational Intelligence and Neuroscience 2022 (April 30, 2022): 1–8. http://dx.doi.org/10.1155/2022/5951102.

Full text of the source
Abstract:
The fast development of image recognition and information technology has influenced people's lives and industry management, not only in common fields such as information management, but also by greatly improving the working efficiency of various industries. In the healthcare field, the current highly disparate doctor-patient ratio means that doctors must take on more and more patient treatment tasks. Back muscle image detection can be considered a medical image processing task: like other medical image processing, it requires first processing the back image, extracting semantic features with convolutional neural networks, and then training classifiers to identify specific disease symptoms. To reduce the workload of doctors in recognizing CT slices and ultrasound images and to improve the efficiency of remote doctor-patient communication, this paper designs and implements a medical image recognition cloud system based on semantic segmentation of CT and ultrasound images. Accurate detection of back muscles was achieved using the cloud platform and a convolutional neural network algorithm. In final testing, the system's algorithm partially meets the proposed accuracy requirements. The medical image recognition system built on this semantic segmentation algorithm stably handles the general needs of medical workers and patients and performs image segmentation quickly within the required range. The paper then explores the effect of muscle activity on the lumbar region based on this system.