To view the other types of publications on this topic, follow the link: Deep Learning, Database.

Journal articles on the topic "Deep Learning, Database"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 journal articles for your research on the topic "Deep Learning, Database."

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in PDF format and read its online abstract whenever the relevant parameters are provided in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Karthick Chaganty, Siva. "Database Failure Prediction Based on Deep Learning Model". International Journal of Science and Research (IJSR) 10, No. 4 (April 27, 2021): 83–86. https://doi.org/10.21275/sr21329110526.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wang, Wei, Meihui Zhang, Gang Chen, H. V. Jagadish, Beng Chin Ooi, and Kian-Lee Tan. "Database Meets Deep Learning". ACM SIGMOD Record 45, No. 2 (September 28, 2016): 17–22. http://dx.doi.org/10.1145/3003665.3003669.

3

Lukic, Vesna, and Marcus Brüggen. "Galaxy Classifications with Deep Learning". Proceedings of the International Astronomical Union 12, S325 (October 2016): 217–20. http://dx.doi.org/10.1017/s1743921316012771.

Abstract:
Machine learning techniques have proven to be increasingly useful in astronomical applications over the last few years, for example in object classification, estimating redshifts and data mining. One example of object classification is classifying galaxy morphology. This is a tedious task to do manually, especially as the datasets become larger with surveys that have a broader and deeper search-space. The Kaggle Galaxy Zoo competition presented the challenge of writing an algorithm to find the probability that a galaxy belongs in a particular class, based on SDSS optical spectroscopy data. The use of convolutional neural networks (convnets) proved to be a popular solution to the problem, as they have also produced unprecedented classification accuracies in other image databases such as the database of handwritten digits (MNIST) and the large database of images (CIFAR). We experiment with the convnets that comprised the winning solution, but using broad classifications. The effect of changing the number of layers is explored, as well as using a different activation function, to help in developing an intuition of how the networks function and to see how they can be applied to radio galaxy images.
4

Liu, Rukun, Teng Wang, Yuxue Yang, and Bingjie Yu. "Database Development Based on Deep Learning and Cloud Computing". Mobile Information Systems 2022 (April 29, 2022): 1–10. http://dx.doi.org/10.1155/2022/6208678.

Abstract:
In this research, the author develops databases based on deep learning and cloud computing technology. After designing the overall architecture of the database around a distributed C/S model, J2EE (Java 2 Platform, Enterprise Edition) is used as the development tool, an Oracle server database is applied, data features are extracted with deep learning technology, and data processing tasks are allocated with cloud computing technology, so as to finally complete data fusion and compression. Finally, the overall development of the database is completed by designing the database backup scheme and external encryption. The test results show that the database developed by the above method has low performance loss, can quickly complete the processing of subdatabases and subtables, and can effectively support the distributed storage of data.
5

Zhou, Lixi, Jiaqing Chen, Amitabh Das, Hong Min, Lei Yu, Ming Zhao, and Jia Zou. "Serving deep learning models with deduplication from relational databases". Proceedings of the VLDB Endowment 15, No. 10 (June 2022): 2230–43. http://dx.doi.org/10.14778/3547305.3547325.

Abstract:
Serving deep learning models from relational databases brings significant benefits. First, features extracted from databases do not need to be transferred to any decoupled deep learning systems for inferences, and thus the system management overhead can be significantly reduced. Second, in a relational database, data management along the storage hierarchy is fully integrated with query processing, and thus it can continue model serving even if the working set size exceeds the available memory. Applying model deduplication can greatly reduce the storage space, memory footprint, cache misses, and inference latency. However, existing data deduplication techniques are not applicable to deep learning model serving applications in relational databases. They do not consider the impacts on model inference accuracy, nor the inconsistency between tensor blocks and database pages. This work proposes synergistic storage optimization techniques for duplication detection, page packing, and caching to enhance database systems for model serving. Evaluation results show that the proposed techniques significantly improved storage efficiency and model inference latency, and outperformed existing deep learning frameworks in the targeted scenarios.
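The block-level deduplication idea described in the abstract can be sketched in a much simplified form: split each model's flat weight list into fixed-size blocks, hash the blocks, and store each distinct block only once, with models keeping references into a shared pool. This is an illustrative sketch under assumed names (`BlockStore`, a block size of 4), not the paper's actual implementation, which also accounts for inference accuracy and page packing.

```python
import hashlib

BLOCK = 4  # illustrative block size (weights per block); an assumption

class BlockStore:
    """Shared pool that stores each distinct weight block only once."""
    def __init__(self):
        self.pool = {}  # digest -> tuple of weights

    def add_model(self, weights):
        """Split a flat weight list into blocks; return digests referencing the pool."""
        refs = []
        for i in range(0, len(weights), BLOCK):
            block = tuple(weights[i:i + BLOCK])
            digest = hashlib.sha1(repr(block).encode()).hexdigest()
            self.pool.setdefault(digest, block)  # identical blocks stored once
            refs.append(digest)
        return refs

    def materialize(self, refs):
        """Rebuild the flat weight list for inference."""
        out = []
        for digest in refs:
            out.extend(self.pool[digest])
        return out

store = BlockStore()
m1 = store.add_model([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
m2 = store.add_model([0.1, 0.2, 0.3, 0.4, 9.9, 9.9, 9.9, 9.9])  # first block shared with m1
```

Two models with four blocks in total occupy only three pool entries here, which is the storage saving the paper's duplication detection aims at.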
6

Baimakhanova, A. S., K. M. Berkimbayev, A. K. Zhumadillayeva, and E. T. Abdrashova. "Technology of using deep learning algorithms". Bulletin of the National Engineering Academy of the Republic of Kazakhstan 89, No. 3 (September 15, 2023): 35–45. http://dx.doi.org/10.47533/2023.1606-146x.30.

Abstract:
Deep learning is a branch of machine learning (ML). Deep learning methods utilize high-level model abstraction of nonlinear transformations in large databases. In other areas, the implementation of deep learning architectures has contributed significantly to the development of artificial intelligence. This paper presents recent research on newly applied deep learning algorithms. Convolutional neural networks are used for deep learning, and the PostgreSQL object-relational database serves as the database management system. The implementation resulted in achieving the set goals and objectives. The method of analyzing the input data is described, the differences between machine learning and deep learning are explained, and an example of classifying a sign language image using logistic regression, one of the deep learning algorithms, is presented. Deep neural networks can work with the full set of available data better than alternative approaches. During the learning process, the neural network itself determines which features in the data are important and which are not. Artificial neural networks can predict symptoms that humans cannot. Thus, with the help of deep neural networks, we can solve problems that traditional machine learning algorithms cannot perform.
7

Oh, Jaeho, Mincheol Kim, and Sang-Woo Ban. "Deep Learning Model with Transfer Learning to Infer Personal Preferences in Images". Applied Sciences 10, No. 21 (October 29, 2020): 7641. http://dx.doi.org/10.3390/app10217641.

Abstract:
In this paper, we propose a deep convolutional neural network model with transfer learning that reflects personal preferences from inter-domain databases of images having atypical visual characteristics. The proposed model utilized three public image databases (Fashion-MNIST, Labeled Faces in the Wild [LFW], and Indoor Scene Recognition) that include images with atypical visual characteristics in order to train and infer personal visual preferences. The effectiveness of transfer learning for incremental preference learning was verified by experiments using inter-domain visual datasets with different visual characteristics. Moreover, a gradient class activation mapping (Grad-CAM) approach was applied to the proposed model, providing explanations about personal visual preference possibilities. Experiments showed that the proposed preference-learning model using transfer learning outperformed a preference model not using transfer learning. In terms of the accuracy of preference recognition, the proposed model showed a maximum of about 7.6% improvement for the LFW database and a maximum of about 9.4% improvement for the Indoor Scene Recognition database, compared to the model that did not reflect transfer learning.
8

Maji, Subhadip, and Smarajit Bose. "CBIR Using Features Derived by Deep Learning". ACM/IMS Transactions on Data Science 2, No. 3 (August 31, 2021): 1–24. http://dx.doi.org/10.1145/3470568.

Abstract:
In a Content-based Image Retrieval (CBIR) System, the task is to retrieve similar images from a large database given a query image. The usual procedure is to extract some useful features from the query image and retrieve images that have a similar set of features. For this purpose, a suitable similarity measure is chosen, and images with high similarity scores are retrieved. Naturally, the choice of these features plays a very important role in the success of this system, and high-level features are required to reduce the “semantic gap.” In this article, we propose to use features derived from pre-trained network models from a deep-learning convolution network trained for a large image classification problem. This approach appears to produce vastly superior results for a variety of databases, and it outperforms many contemporary CBIR systems. We analyse the retrieval time of the method and also propose a pre-clustering of the database based on the above-mentioned features, which yields comparable results in a much shorter time in most of the cases.
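The retrieval step the abstract describes — scoring database images against a query by a similarity measure over feature vectors — can be sketched as follows. The 4-dimensional vectors and image ids are made-up stand-ins; real deep features from a pre-trained CNN have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, database, k=2):
    """Return the k database image ids with the highest similarity to the query."""
    scored = sorted(database.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [img_id for img_id, _ in scored[:k]]

# Hypothetical precomputed features for three database images.
db = {
    "cat_01": [0.9, 0.1, 0.0, 0.2],
    "dog_07": [0.1, 0.8, 0.3, 0.0],
    "cat_12": [0.8, 0.2, 0.1, 0.1],
}
print(retrieve([1.0, 0.1, 0.0, 0.1], db, k=2))  # → ['cat_01', 'cat_12']
```

The pre-clustering the authors propose would partition `db` by these same features first, so that only one cluster needs to be scored per query.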
9

Zhou, Xiaoshu, Qide Xiao, and Han Wang. "Metamaterials Design Method based on Deep learning Database". Journal of Physics: Conference Series 2185, No. 1 (January 1, 2022): 012023. http://dx.doi.org/10.1088/1742-6596/2185/1/012023.

Abstract:
In recent years, deep learning has risen to the forefront of many fields, overcoming challenges previously considered difficult to solve by traditional methods. In the field of metamaterials, there are significant challenges in the design and optimization of metamaterials, including the need for a large number of labeled data sets and one-to-many mapping when solving inverse problems. Here, we will use deep learning methods to build a metamaterial database to achieve rapid design and analysis methods of metamaterials. These technologies have significantly improved the feasibility of more complex metamaterial designs and provided new metamaterial design and analysis ideas.
10

Liu, Yue, Rashmi Sharan Sinha, Shu-Zhi Liu, and Seung-Hoon Hwang. "Side-Information-Aided Preprocessing Scheme for Deep-Learning Classifier in Fingerprint-Based Indoor Positioning". Electronics 9, No. 6 (June 12, 2020): 982. http://dx.doi.org/10.3390/electronics9060982.

Abstract:
Deep-learning classifiers can effectively improve the accuracy of fingerprint-based indoor positioning. During fingerprint database construction, all received signal strength indicators from each access point are combined without any distinction, and the resulting database is used for deep-learning models. Meanwhile, side information regarding specific conditions may help characterise the data features for the deep-learning classifier and improve the accuracy of indoor positioning. Herein, a side-information-aided preprocessing scheme for deep-learning classifiers is proposed in a dynamic environment, where several groups of different databases are constructed for training multiple classifiers, so that appropriate databases can be employed to effectively improve positioning accuracy. Specifically, two kinds of side information, namely time (morning/afternoon) and direction (forward/backward), are considered when collecting the received signal strength indicator. Simulations and experiments are performed with the deep-learning classifier trained on four different databases and compared with conventional results from the combined database. The results show that the side-information-aided preprocessing scheme achieves a better success probability than the conventional method. With two margins, the proposed scheme shows performance improvements of 6.55% and 5.8% in simulations and experiments, respectively, compared to the conventional scheme. Additionally, the proposed scheme with time as the side information obtains a higher success probability when the positioning accuracy requirement is loose with a larger margin, while with direction as the side information it shows better performance for high positioning precision requirements. Thus, side information such as time or direction is advantageous for preprocessing data in deep-learning classifiers for fingerprint-based indoor positioning.
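The preprocessing step the abstract describes — splitting one combined fingerprint database into separate databases keyed by side information, one per classifier — can be sketched like this. The sample fields (`rssi`, `location`, `time`, `direction`) and values are illustrative assumptions, not the paper's data format.

```python
from collections import defaultdict

def split_by_side_info(samples):
    """Group fingerprint samples into separate databases keyed by (time, direction)."""
    databases = defaultdict(list)
    for s in samples:
        key = (s["time"], s["direction"])  # the side information
        databases[key].append((s["rssi"], s["location"]))
    return dict(databases)

# Hypothetical samples: RSSI vectors tagged with collection time and walking direction.
samples = [
    {"rssi": [-40, -70], "location": "A", "time": "morning",   "direction": "forward"},
    {"rssi": [-42, -68], "location": "A", "time": "morning",   "direction": "backward"},
    {"rssi": [-55, -60], "location": "B", "time": "afternoon", "direction": "forward"},
    {"rssi": [-41, -69], "location": "A", "time": "morning",   "direction": "forward"},
]
dbs = split_by_side_info(samples)
# Each (time, direction) database would then train its own deep-learning classifier.
```

At positioning time, the matching side information selects which classifier to query, which is what lets the scheme beat a single classifier trained on the combined database.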
11

Blank, Sebastian, Florian Wilhelm, Hans-Peter Zorn, and Achim Rettinger. "Querying NoSQL with Deep Learning to Answer Natural Language Questions". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9416–21. http://dx.doi.org/10.1609/aaai.v33i01.33019416.

Abstract:
Almost all of today’s knowledge is stored in databases and thus can only be accessed with the help of domain-specific query languages, strongly limiting the number of people who can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database using natural language. A major challenge of such a system is the non-differentiability of database operations, which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a competitive score of 84.2% compared to several benchmark models. We conclude that our approach excels with regard to real-world scenarios where knowledge resides in external databases and intermediate labels are too costly to gather for non-end-to-end trainable QA systems.
12

Shen, Haijie, Yangyuan Li, Xinzhi Tian, Xiaofan Chen, Caihong Li, Qian Bian, Zhenduo Wang, and Weihua Wang. "Mass data processing and multidimensional database management based on deep learning". Open Computer Science 12, No. 1 (January 1, 2022): 300–313. http://dx.doi.org/10.1515/comp-2022-0251.

Abstract:
With the rapid development of the Internet of Things, the requirements for massive data processing technology are getting higher and higher. Traditional computer data processing capabilities can no longer deliver fast, simple, and efficient data analysis and processing for today’s massive data, due to the real-time, massive, polymorphic, and heterogeneous characteristics of Internet of Things data. Massive heterogeneous data from different types of subsystems in the Internet of Things need to be processed and stored uniformly, so the mass data processing method is required to be able to integrate multiple different networks, multiple data sources, and heterogeneous mass data and to process these data. Therefore, this article proposes massive data processing and multidimensional database management based on deep learning to meet the needs of contemporary society for massive data processing. This article studies the basic technical methods of massive data processing, including MapReduce technology, parallel data technology, database technology based on distributed memory databases, and distributed real-time database technology based on cloud computing, and constructs a massive data fusion algorithm based on deep learning. The model and the multidimensional online analytical processing model of the deep-learning-based multidimensional database are analyzed in terms of performance, scalability, load balancing, data query, and other aspects. It is concluded that the accuracy of multidimensional database queries is as high as 100%, and the average data query time is only 0.0053 s, much lower than the query time of a general database.
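Of the techniques the abstract lists, MapReduce is the most self-contained, and its map/shuffle/reduce structure can be sketched with a word count, the standard toy example. This is a single-process illustration of the programming model only; real MapReduce distributes the three phases across many machines.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, 1) pairs for every word in every input record."""
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

records = ["deep learning", "deep database", "database learning deep"]
counts = reduce_phase(shuffle(map_phase(records)))  # e.g. counts["deep"] == 3
```

The same three-phase shape carries over to the heterogeneous-data fusion setting the paper targets, with the map and reduce functions swapped for domain-specific ones.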
13

Kadhim, Ola Najah, and Mohammed Hasan Abdulameer. "A multimodal biometric database and case study for face recognition based deep learning". Bulletin of Electrical Engineering and Informatics 13, No. 1 (February 1, 2024): 677–85. http://dx.doi.org/10.11591/eei.v13i1.6605.

Abstract:
Recently, multimodal biometric systems have garnered a lot of interest for the identification of human identity. The accessibility of the database is one of the contributing elements that impact biometric recognition systems. In their studies, the majority of researchers concentrate on unimodal databases. Nonetheless, because so few comparable multimodal biometric databases are publicly accessible, there was a need to compile a fresh, realistic multimodal biometric database. This study introduces the MULBv1 multimodal biometric database, which contains homologous biometric traits. The MULBv1 database includes 20 images of each person's face in various poses, facial emotions, and accessories, 20 images of their right hand from various angles, and 20 images of their right iris from various lighting positions. The database contains real multimodal data from 174 people, and all biometrics were accurately collected using the micro camera of the iPhone 14 Pro Max. A face recognition technique is also suggested as a case study using the gathered facial features. In the case study, a deep convolutional neural network (CNN) was used, and the findings were positive: through several trials, an accuracy of 97.41% was achieved.
14

Li, Dana, Bolette Mikela Vilmun, Jonathan Frederik Carlsen, Elisabeth Albrecht-Beste, Carsten Ammitzbøl Lauridsen, Michael Bachmann Nielsen, and Kristoffer Lindskov Hansen. "The Performance of Deep Learning Algorithms on Automatic Pulmonary Nodule Detection and Classification Tested on Different Datasets That Are Not Derived from LIDC-IDRI: A Systematic Review". Diagnostics 9, No. 4 (November 29, 2019): 207. http://dx.doi.org/10.3390/diagnostics9040207.

Abstract:
The aim of this study was to systematically review the performance of deep learning technology in detecting and classifying pulmonary nodules on computed tomography (CT) scans that were not from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. Furthermore, we explored the difference in performance when the deep learning technology was applied to test datasets different from the training datasets. Only peer-reviewed, original research articles utilizing deep learning technology were included in this study, and only results from testing on datasets other than the LIDC-IDRI were included. We searched a total of six databases: EMBASE, PubMed, Cochrane Library, the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Scopus, and Web of Science. This resulted in 1782 studies after duplicates were removed, and a total of 26 studies were included in this systematic review. Three studies explored the performance of pulmonary nodule detection only, 16 studies explored the performance of pulmonary nodule classification only, and 7 studies had reports of both pulmonary nodule detection and classification. Three different deep learning architectures were mentioned amongst the included studies: convolutional neural network (CNN), massive training artificial neural network (MTANN), and deep stacked denoising autoencoder extreme learning machine (SDAE-ELM). The studies reached a classification accuracy between 68–99.6% and a detection accuracy between 80.6–94%. Performance of deep learning technology in studies using different test and training datasets was comparable to studies using same type of test and training datasets. In conclusion, deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.
15

Abbass, Ghida Yousif, and Ali Fadhil Marhoon. "Car license plate segmentation and recognition system based on deep learning". Bulletin of Electrical Engineering and Informatics 11, No. 4 (August 1, 2022): 1983–89. http://dx.doi.org/10.11591/eei.v11i4.3434.

Abstract:
Artificial intelligence and computer vision techniques address automatic license plate recognition (ALPR), an important research field with many applications. In this paper, a method for recognizing the license plates of Iraqi cars was applied based on a deep learning technique, the convolutional neural network (CNN). Two databases were built for identifying Iraqi car plates. The first database includes 2000 images of Arabic numbers and Arabic letters; the second database contains 1200 images of the Arabic names of Iraqi governorates. Image-processing techniques were used to segment the numbers, letters and words from the car license plate images and convert them into the two databases used to train the two CNNs. The trained CNNs were then used to recognize the vocabulary of the car license plate. The success rate of number, letter and word recognition was 98%, and the overall success rate of the proposed system across all stages was 97%.
16

Giriprasad Gaddam, P., A. Sanjeeva reddy, and R. V. Sreehari. "Automatic Classification of Cardiac Arrhythmias based on ECG Signals Using Transferred Deep Learning Convolution Neural Network". Journal of Physics: Conference Series 2089, No. 1 (November 1, 2021): 012058. http://dx.doi.org/10.1088/1742-6596/2089/1/012058.

Abstract:
In the current article, an automatic classification of cardiac arrhythmias is presented using a transfer deep learning approach with the help of electrocardiography (ECG) signal analysis. Nowadays, the ECG waveform serves as a powerful tool for the analysis of cardiac arrhythmias (irregularities). The goal of the present work is to implement a deep learning-based algorithm for the classification of different cardiac arrhythmias. Initially, the one-dimensional (1-D) ECG signals are transformed into two-dimensional (2-D) scalogram images with the help of the Continuous Wavelet Transform (CWT). Four different categories of ECG waveform were selected from four PhysioNet MIT-BIH databases, namely the Arrhythmia database, the Normal Sinus Rhythm database, the Malignant Ventricular Ectopy database and the BIDMC Congestive Heart Failure database, to examine the proposed technique. The major interest of the present study is to develop a transferred deep learning algorithm for automatic categorization of the four heart diseases mentioned. Final results proved that the 2-D scalogram images trained with a deep convolutional neural network (CNN) using a transfer learning technique (AlexNet) achieved a prominent accuracy of 95.67%. Hence, the above-stated algorithm demonstrates an effective automated heart disease detection tool.
17

Shang, Xiaoran. "Database Oriented Big Data Analysis Engine Based on Deep Learning". Computational Intelligence and Neuroscience 2022 (August 31, 2022): 1–9. http://dx.doi.org/10.1155/2022/4500684.

Abstract:
In recent years, with the migration of enterprises to the Internet, the demand for cloud databases has also grown, especially for capturing data quickly and efficiently through the database. In order to improve the data structure at all levels of the database analysis engine, this paper realizes the accurate construction and rapid analysis of a cloud database based on big data analysis engine technology and a deep learning wolf pack greedy algorithm. Through the deep learning strategy, a big data analysis engine system based on the deep learning model is constructed. The functions of the deep learning technology, the wolf pack greedy algorithm, and the data analysis strategy in the cloud database analysis engine system are analyzed, as well as the functions of the whole analysis engine system. Finally, the accuracy and response speed of the cloud database analysis engine system are tested on known clustering data. The results show that, compared with the traditional data analysis engine system with character search at its core, the database-oriented big data analysis engine system based on a deep learning model and the wolf pack greedy algorithm has faster response speed and greater intelligence. In application, the proposed engine system can significantly improve the effect of the analysis engine and greatly improve the retrieval accuracy and analysis efficiency of fixed-point data in the database.
18

Manoj krishna, M., M. Neelima, M. Harshali, and M. Venu Gopala Rao. "Image classification using Deep learning". International Journal of Engineering & Technology 7, No. 2.7 (March 18, 2018): 614. http://dx.doi.org/10.14419/ijet.v7i2.7.10892.

Abstract:
The image classification is a classical problem of image processing, computer vision and machine learning fields. In this paper we study the image classification using deep learning. We use AlexNet architecture with convolutional neural networks for this purpose. Four test images are selected from the ImageNet database for the classification purpose. We cropped the images for various portion areas and conducted experiments. The results show the effectiveness of deep learning based image classification using AlexNet.
19

Yan, Yu, Shun Yao, Hongzhi Wang, and Meng Gao. "Index selection for NoSQL database with deep reinforcement learning". Information Sciences 561 (June 2021): 20–30. http://dx.doi.org/10.1016/j.ins.2021.01.003.

20

Cohen, William, Fan Yang, and Kathryn Rivard Mazaitis. "TensorLog: A Probabilistic Database Implemented Using Deep-Learning Infrastructure". Journal of Artificial Intelligence Research 67 (February 23, 2020): 285–325. http://dx.doi.org/10.1613/jair.1.11944.

Abstract:
We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as Tensorflow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. The integration with these frameworks enables use of GPU-based parallel processors for inference and learning, making TensorLog the first highly parallelizable probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples.
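The core compilation idea behind TensorLog — logical queries over a knowledge base become linear-algebra operations — can be illustrated in miniature: a binary relation is encoded as a matrix over the entity set, following the relation from an entity is a vector-matrix product, and composing relations composes the products. The toy knowledge base and names below are illustrative assumptions, in the spirit of TensorLog rather than its actual API.

```python
# Toy illustration of compiling a relational query into linear algebra.
entities = ["alice", "bob", "carol"]
idx = {e: i for i, e in enumerate(entities)}

def relation_matrix(pairs):
    """Encode a binary relation as a 0/1 matrix over the entity set."""
    m = [[0.0] * len(entities) for _ in entities]
    for head, tail in pairs:
        m[idx[head]][idx[tail]] = 1.0
    return m

def follow(vec, matrix):
    """Vector-matrix product: one step of following a relation."""
    n = len(vec)
    return [sum(vec[i] * matrix[i][j] for i in range(n)) for j in range(n)]

parent = relation_matrix([("alice", "bob"), ("bob", "carol")])
one_hot_alice = [1.0, 0.0, 0.0]
child = follow(one_hot_alice, parent)       # who is alice's child?
grandchild = follow(child, parent)          # composing the relation twice
```

Because every step is a differentiable product, replacing the 0/1 entries with learnable weights is what lets a deep learning framework tune the parameters of the logic, as the abstract describes.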
21

Gattan, Atif M. "Deep Learning Technique of Sentiment Analysis for Twitter Database". International Journal of Interactive Mobile Technologies (iJIM) 16, No. 01 (January 18, 2022): 184–93. http://dx.doi.org/10.3991/ijim.v16i01.27575.

Abstract:
Sentiment analysis is the process of computing, recognizing, and classifying the attitudes that people express in text, through which an individual's outlook on a particular subject matter can be investigated. The information is accumulated with the assistance of an API, and the collected statements are classified as negative, neutral, or positive with the support of a polarity score assigned to every statement. These data are useful for discovering and meeting consumer requirements and for providing better service. The major advantage of using sentiment analysis is that user needs can be identified directly from information collected from a large set of consumers.
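The polarity-scoring step the abstract refers to — assigning each statement a score and mapping it to Negative, Neutral, or Positive — can be sketched with a tiny hand-made lexicon. The lexicon and threshold scheme are assumptions for illustration; the paper's deep learning approach learns such weights rather than listing them.

```python
# Tiny hand-made polarity lexicon; purely illustrative.
POLARITY = {"great": 1, "love": 1, "good": 1, "bad": -1, "terrible": -1, "hate": -1}

def polarity_score(text):
    """Sum the polarity of the known words in the text."""
    return sum(POLARITY.get(word, 0) for word in text.lower().split())

def classify(text):
    """Map the polarity score to Positive / Neutral / Negative."""
    score = polarity_score(text)
    if score > 0:
        return "Positive"
    if score < 0:
        return "Negative"
    return "Neutral"

labels = [classify(t) for t in
          ["I love this great phone", "terrible battery", "it is a phone"]]
```

Statements with no lexicon hits fall through to Neutral, matching the three-way split described above.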
22

Xu, Boyan, Ruichu Cai, Zhenjie Zhang, Xiaoyan Yang, Zhifeng Hao, Zijian Li, and Zhihao Liang. "NADAQ: Natural Language Database Querying Based on Deep Learning". IEEE Access 7 (2019): 35012–17. http://dx.doi.org/10.1109/access.2019.2904720.

23

Jamshed, Aatif, Bhawna Mallick, and Pramod Kumar. "Deep learning-based sequential pattern mining for progressive database". Soft Computing 24, No. 22 (May 13, 2020): 17233–46. http://dx.doi.org/10.1007/s00500-020-05015-2.

24

Khan, Khalil, Byeong-hee Roh, Jehad Ali, Rehan Ullah Khan, Irfan Uddin, Saqlain Hassan, Rabia Riaz, and Nasir Ahmad. "PHND: Pashtu Handwritten Numerals Database and deep learning benchmark". PLOS ONE 15, No. 9 (September 2, 2020): e0238423. http://dx.doi.org/10.1371/journal.pone.0238423.

25

Kiran, Aqsa, Shahzad Ahmad Qureshi, Asifullah Khan, Sajid Mahmood, Muhammad Idrees, Aqsa Saeed, Muhammad Assam, Mohamad Reda A. Refaai, and Abdullah Mohamed. "Reverse Image Search Using Deep Unsupervised Generative Learning and Deep Convolutional Neural Network". Applied Sciences 12, No. 10 (May 13, 2022): 4943. http://dx.doi.org/10.3390/app12104943.

Abstract:
Reverse image search has been a vital and emerging research area of information retrieval. One of the primary research foci of information retrieval is to increase the space and computational efficiency by converting a large image database into an efficiently computed feature database. This paper proposes a novel deep learning-based methodology, which captures channel-wise, low-level details of each image. In the first phase, sparse auto-encoder (SAE), a deep generative model, is applied to RGB channels of each image for unsupervised representational learning. In the second phase, transfer learning is utilized by using VGG-16, a variant of deep convolutional neural network (CNN). The output of SAE combined with the original RGB channel is forwarded to VGG-16, thereby producing a more effective feature database by the ensemble/collaboration of two effective models. The proposed method provides an information rich feature space that is a reduced dimensionality representation of the image database. Experiments are performed on a hybrid dataset that is developed by combining three standard publicly available datasets. The proposed approach has a retrieval accuracy (precision) of 98.46%, without using the metadata of images, by using a cosine similarity measure between the query image and the image database. Additionally, to further validate the proposed methodology’s effectiveness, image quality has been degraded by adding 5% noise (Speckle, Gaussian, and Salt pepper noise types) in the hybrid dataset. Retrieval accuracy has generally been found to be 97% for the different variants of noise.
APA, Harvard, Vancouver, ISO and other citation styles
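The retrieval step described above — ranking a feature database by cosine similarity to the query's feature vector — can be sketched as follows. This is a minimal illustration, not the authors' implementation; in the paper the vectors would come from the SAE/VGG-16 pipeline, whereas here they are arbitrary toy vectors:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query, feature_db, top_k=2):
    # rank every database image by similarity to the query vector
    ranked = sorted(feature_db,
                    key=lambda name: cosine_similarity(query, feature_db[name]),
                    reverse=True)
    return ranked[:top_k]
```

The same ranking works regardless of how the feature vectors were produced, which is why the feature-extraction and retrieval stages can be designed independently.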
26

Bianchi, Alexander, Andrew Chai, Vincent Corvinelli, Parke Godfrey, Jarek Szlichta and Calisto Zuzarte. "Db2une: Tuning Under Pressure via Deep Learning". Proceedings of the VLDB Endowment 17, No. 12 (August 2024): 3855–68. http://dx.doi.org/10.14778/3685800.3685811.

Abstract:
Modern database systems including IBM Db2 have numerous parameters, "knobs," that require precise configuration to achieve optimal workload performance. Even for experts, manually "tuning" these knobs is a challenging process. We present Db2une, an automatic query-aware tuning system that leverages deep learning to maximize performance while minimizing resource usage. Via a specialized transformer-based query-embedding pipeline we name QBERT, Db2une generates context-aware representations of query workloads to feed as input to a stability-oriented, on-policy deep reinforcement learning model. In Db2une, we introduce a multi-phased, database meta-data driven training approach---which incorporates cost estimates, interpolation of these costs, and database statistics---to efficiently discover optimal tuning configurations without the need to execute queries. Thus, our model can scale to very large workloads, for which executing queries would be prohibitively expensive. Through experimental evaluation, we demonstrate Db2une's efficiency and effectiveness over a variety of workloads. We compare it against the state-of-the-art query-aware tuning systems and show that the system provides recommendations that surpass those of IBM experts.
27

Kshirod, Kshirod Sarmah. "Speaker Diarization with Deep Learning Techniques". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 11, No. 3 (15.12.2020): 2570–82. http://dx.doi.org/10.61841/turcomat.v11i3.14309.

Abstract:
Speaker diarization is the task of identifying the speaker when different speakers talk in an audio or video recording. Deep Learning (DL), a rapidly emerging and very innovative field of Machine Learning (ML), has been used effectively in artificial intelligence (AI) to solve a variety of real-world application challenges, with successful applications across subdomains such as natural language processing, image processing, computer vision, speech and speaker recognition, emotion recognition, and cyber security. DL techniques have recently outperformed conventional approaches in speaker diarization as well as speaker recognition. Speaker diarization assigns classes to segments of a speech recording according to the speaker's identity, making it possible to determine who spoke when; it is a crucial step in speech processing that divides an audio recording into different speaker regions. This paper presents an in-depth analysis of speaker diarization using a variety of deep learning algorithms. Two voice corpora were used for the experiments: NIST-2000 CALLHOME and our in-house database ALSD-DB. The baseline systems use TDNN-based embeddings with x-vectors, LSTM-based embeddings with d-vectors, and finally an embedding fusion of both x-vectors and d-vectors. For the NIST-2000 CALLHOME database, the LSTM-based embeddings with d-vectors and the fused x-vector/d-vector embeddings show improved performance, with DERs of 8.25% and 7.65%, respectively, and of 10.45% and 9.65% for the local ALSD-DB database.
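The diarization error rate (DER) reported above is conventionally computed as the sum of missed speech, false-alarm speech, and speaker-confusion time over total speech time. A minimal sketch of that metric (illustrative only, not the paper's evaluation code):

```python
def diarization_error_rate(missed, false_alarm, confusion, total_speech):
    # all arguments are durations in seconds; returns a fraction
    # (multiply by 100 to express as a percentage, e.g. 0.0825 -> 8.25% DER)
    if total_speech <= 0:
        raise ValueError("total_speech must be positive")
    return (missed + false_alarm + confusion) / total_speech
```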
28

Meng, Yang, Guoxin Liang and Mei Yue. "Deep Learning-Based Arrhythmia Detection in Electrocardiograph". Scientific Programming 2021 (13.05.2021): 1–7. http://dx.doi.org/10.1155/2021/9926769.

Abstract:
This study aimed to explore the application of the electrocardiograph (ECG) in the diagnosis of arrhythmia based on a deep convolutional neural network (DCNN). ECGs were classified and recognized with the DCNN. The specificity (Spe), sensitivity (Sen), accuracy (Acc), and area under curve (AUC) of the DCNN were evaluated on the Chinese Cardiovascular Disease Database (CCDD) and the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, respectively. The results showed that in the CCDD, the original model tested on the small sample set had an Acc of 82.78% and an AUC of 0.882, while the Acc and AUC of the translated model were 85.69% and 0.893, respectively, a notable difference (P < 0.05); on the large sample set, the Acc of the original model and the translated model was 80.12% and 82.63%, respectively, again a clear difference (P < 0.05). In the MIT-BIH database, the Acc for normal (N) heartbeats (99.38%) was higher than that for atrial premature beats (APB) (87.45%) (P < 0.05). In summary, applying the DCNN could improve the Acc of ECG classification and recognition, so it is well suited to ECG signal classification.
29

Joshi, Vaishali M., Deepthi D. Kulkarni and Nilesh J. Uke. "Stress and anxiety detection: deep learning and higher order statistic approach". Indonesian Journal of Electrical Engineering and Computer Science 33, No. 3 (01.03.2024): 1567. http://dx.doi.org/10.11591/ijeecs.v33.i3.pp1567-1575.

Abstract:
Today's teenagers are dealing with anxiety and stress. Anxiety, depression, and suicide rates have increased in recent years because of increased social rivalry. The research focuses on detecting anxiety in students due to exam pressure, to reduce the potential harm to a person's wellness. Research is performed on the database for anxious states based on psychological stimulation (DASPS) and our own database. The measured signal is divided into sub-bands that correspond to the electroencephalogram (EEG) rhythms using a sixth-order Butterworth filter. In higher-dimensional space, the nonlinearities of each sub-band signal are analyzed using higher-order statistics, namely third-order cumulants (TOC). We classified stress and anxiety using the support vector machine (SVM), K-nearest neighbor (K-NN), and a deep learning bidirectional long short-term memory (BiLSTM) network. In comparison to previous techniques, the proposed system's performance using BiLSTM is quite good. The best accuracy in this analysis was 87% on the DASPS database and 98% on our own database. Finally, subjects with high stress levels had more gamma activity than subjects with little stress. This could be an important attribute in the classification of stress.
30

Sayed Farag, Mohamed, Mostafa Mohamed Mohie El Din and Hassan Ahmed Elshenbary. "Deep learning versus traditional methods for parking lots occupancy classification". Indonesian Journal of Electrical Engineering and Computer Science 19, No. 2 (01.08.2020): 964. http://dx.doi.org/10.11591/ijeecs.v19.i2.pp964-973.

Abstract:
Due to the increase in the number of cars and slow city development, there is a need for smart parking systems. One of the main issues in smart parking systems is classifying parking lot occupancy status, so this paper introduces two methods for parking lot classification. The first method uses the mean, after converting the colored image to grayscale and then to black/white: if the mean is greater than a given threshold, the lot is classified as occupied, otherwise it is empty. This method gave a 90% correct classification rate on the cnrall database. It outperformed the AlexNet deep learning method trained and tested on the same database (the mean method has no training time). The second method, which depends on deep learning, is a deep neural network consisting of 11 layers, trained and tested on the same database. It gave a 93% correct classification rate when trained on cnrall and tested on the same database. As shown, this method outperforms both the AlexNet and the mean methods on the same database. On the PKLot database, AlexNet and our deep learning network achieve close results, outperforming the mean method (greater than 95%).
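The first (non-deep) method above is simple enough to state directly: binarize the grayscale patch and compare the mean of the binary image to a threshold. A minimal sketch, with both threshold values chosen here purely for illustration (the paper does not specify them):

```python
def classify_parking_lot(gray_pixels, binarize_at=128, occupied_above=0.5):
    # gray_pixels: iterable of 0-255 grayscale values for one parking-space patch
    # step 1: threshold grayscale to black/white (0 or 1)
    bw = [1 if p >= binarize_at else 0 for p in gray_pixels]
    # step 2: occupied if the mean of the binary image exceeds the threshold
    mean = sum(bw) / len(bw)
    return "occupied" if mean > occupied_above else "empty"
```

Because it requires no training, this baseline trades accuracy for zero setup cost, which is the comparison the abstract draws against the trained networks.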
31

Najeeb, Shaima Miqdad Mohamed, Raid Rafi Omar Al-Nima and Mohand Lokman Ahmad Al-Dabag. "Reinforced Deep Learning for Verifying Finger Veins". International Journal of Online and Biomedical Engineering (iJOE) 17, No. 07 (02.07.2021): 19. http://dx.doi.org/10.3991/ijoe.v17i07.24655.

Abstract:
Recently, personal verification has become a crucial demand for providing security in personal accounts and financial activities. This paper suggests a new Deep Learning (DL) model called the Re-enforced Deep Learning (RDL) model. This approach provides another way of personal verification by using Finger Veins (FVs). The RDL consists of multiple layers with feedback. Two FV fingers are employed for each person: the FV of the index finger for first personal verification and the FV of the middle finger for re-enforced verification. The database used is the Hong Kong Polytechnic University Finger Image (PolyUFI) database (Version 1.0). The results show that the proposed RDL achieved a promising performance of 91.19%. Other DL approaches, including state-of-the-art models, are also exploited for comparison in this study.
32

Ullah, Wusat, Imran Siddique, Rana Muhammad Zulqarnain, Mohammad Mahtab Alam, Irfan Ahmad and Usman Ahmad Raza. "Classification of Arrhythmia in Heartbeat Detection Using Deep Learning". Computational Intelligence and Neuroscience 2021 (19.10.2021): 1–13. http://dx.doi.org/10.1155/2021/2195922.

Abstract:
The electrocardiogram (ECG) is one of the most widely used diagnostic instruments in medicine and healthcare. Deep learning methods have shown promise in healthcare prediction challenges involving ECG data. This paper aims to apply deep learning techniques to publicly available datasets to classify arrhythmia. We have used two datasets in our research. One is the MIT-BIH arrhythmia database, with a sampling frequency of 125 Hz and 109,446 ECG beats; the classes included in this first dataset are N, S, V, F, and Q. The second is the PTB Diagnostic ECG Database, which has two classes. The techniques applied to these two datasets are a CNN model, CNN + LSTM, and CNN + LSTM + Attention. 80% of the data is used for training, and the remaining 20% is used for testing. The results achieved with these three techniques show accuracies of 99.12% for the CNN model, 99.3% for CNN + LSTM, and 99.29% for CNN + LSTM + Attention.
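The CNN front end shared by all three models above slides learned kernels over the 1-D ECG beat. The basic convolution, ReLU, and pooling step can be sketched in plain Python (illustrative only; the paper's models are full trained networks, and the kernel here is arbitrary):

```python
def conv1d(signal, kernel):
    # "valid" 1-D convolution (cross-correlation, as in most DL frameworks)
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def relu(xs):
    # zero out negative activations
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    # non-overlapping max pooling to downsample the feature map
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]
```

Stacking these operations with learned kernels is what lets the CNN turn a raw beat into features that the LSTM or attention layers then model over time.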
33

Xiao, Qiao, Khuan Lee, Siti Aisah Mokhtar, Iskasymar Ismail, Ahmad Luqman bin Md Pauzi, Qiuxia Zhang and Poh Ying Lim. "Deep Learning-Based ECG Arrhythmia Classification: A Systematic Review". Applied Sciences 13, No. 8 (14.04.2023): 4964. http://dx.doi.org/10.3390/app13084964.

Abstract:
Deep learning (DL) has been introduced in automatic heart-abnormality classification using ECG signals, while its application in practical medical procedures is limited. A systematic review is performed from perspectives of the ECG database, preprocessing, DL methodology, evaluation paradigm, performance metric, and code availability to identify research trends, challenges, and opportunities for DL-based ECG arrhythmia classification. Specifically, 368 studies meeting the eligibility criteria are included. A total of 223 (61%) studies use MIT-BIH Arrhythmia Database to design DL models. A total of 138 (38%) studies considered removing noise or artifacts in ECG signals, and 102 (28%) studies performed data augmentation to extend the minority arrhythmia categories. Convolutional neural networks are the dominant models (58.7%, 216) used in the reviewed studies while growing studies have integrated multiple DL structures in recent years. A total of 319 (86.7%) and 38 (10.3%) studies explicitly mention their evaluation paradigms, i.e., intra- and inter-patient paradigms, respectively, where notable performance degradation is observed in the inter-patient paradigm. Compared to the overall accuracy, the average F1 score, sensitivity, and precision are significantly lower in the selected studies. To implement the DL-based ECG classification in real clinical scenarios, leveraging diverse ECG databases, designing advanced denoising and data augmentation techniques, integrating novel DL models, and deeper investigation in the inter-patient paradigm could be future research opportunities.
34

Yao, Ge. "Application of Higher Education Management in Colleges and Universities by Deep Learning". Computational Intelligence and Neuroscience 2022 (10.08.2022): 1–9. http://dx.doi.org/10.1155/2022/7295198.

Abstract:
The development of artificial intelligence (AI) has brought great convenience to people and has been widely used in the field of education. To monitor the classroom status of college students in real time and achieve a balanced distribution of educational resources, a facial expression recognition (FER) algorithm is applied to the management of higher education in universities. Firstly, the convolutional neural network (CNN) is studied in depth; secondly, the process and methods of FER are explored in detail, and an adaptive FER algorithm based on a differential convolutional neural network (DCNN) is constructed. Finally, the algorithm is applied to the CK+ database and the BU-4DFE database. The results show that the designed algorithm has an accuracy of 99.02% for keyframe detection in the CK+ database and 98.35% for the BU-4DFE database. The algorithm has high keyframe-detection accuracy for both expression databases, performs well in the automatic detection of keyframes of expression sequences, and can reach a level similar to that of manual frame selection. Compared with existing algorithms, the proposed method still has advantages: it can effectively eliminate the interference of individual differences and environmental noise on FER. Experiments reveal that the proposed DCNN-based FER algorithm has a good recognition effect and is suitable for monitoring students' classroom status. This research has certain reference significance for the application of AI in higher education management in colleges and universities.
35

Tirado-Martin, Paloma, and Raul Sanchez-Reillo. "BioECG: Improving ECG Biometrics with Deep Learning and Enhanced Datasets". Applied Sciences 11, No. 13 (24.06.2021): 5880. http://dx.doi.org/10.3390/app11135880.

Abstract:
Nowadays, Deep Learning tools are widely applied in biometrics, and electrocardiogram (ECG) biometrics is no exception. However, algorithm performance relies heavily on a representative training dataset. ECGs suffer constant temporal variations, so it is all the more important to collect databases that represent these conditions; nonetheless, restrictions on database publication obstruct further research on this topic. This work was developed with the help of a database that represents potential scenarios in biometric recognition, as data was acquired on different days and in different physical activities and positions. Classification was implemented with a Deep Learning network, BioECG, avoiding complex and time-consuming signal transformations. Exhaustive tuning was completed, including variations in enrollment length, improving ECG verification under more complex and realistic biometric conditions. Finally, this work studied one-day and two-day enrollments and their effects. Two-day enrollments resulted in large overall improvements, even when verification was performed on more unstable signals. EER was improved by 63% when including a change of position, by up to almost 99% when visits were on a different day, and by up to 91% if the user experienced a heartbeat increase after exercise.
36

Rahul and Deepika Bansal. "Object Detection Using Machine Learning and Deep Learning". International Journal for Research in Applied Science and Engineering Technology 11, No. 2 (28.02.2023): 265–68. http://dx.doi.org/10.22214/ijraset.2023.48958.

Abstract:
Abstract: An object detection system locates real-world objects present in a digital image or a video. These objects can be of any type, such as people, automobiles, or other items. A model database, a feature detector, a hypothesis generator, and a hypothesis verifier are the four components the system must have in order to successfully detect an object in an image or video. This paper provides an overview of the many methods for object detection, localization, classification, feature extraction, appearance information extraction, and many other tasks in photos and videos. The remarks are derived from the literature that has been analyzed, and important concerns are also noted.
37

Sudars, K. "Face recognition Face2vec based on deep learning: Small database case". Automatic Control and Computer Sciences 51, No. 1 (January 2017): 50–54. http://dx.doi.org/10.3103/s0146411617010072.

38

Zhang, Mingming, Zhigang Chen, Huiyu Wang, Zeng zeng and Xinwen Shan. "Research on Database Failure Prediction Based on Deep Learning Model". IOP Conference Series: Materials Science and Engineering 452 (13.12.2018): 032056. http://dx.doi.org/10.1088/1757-899x/452/3/032056.

39

Ruescas Nicolau, A. V., E. Parrilla Bernabé, E. Medina Ripoll, D. Garrido Jaén and S. Alemany Mut. "Database generation for markerless tracking based on Deep Learning networks". Gait & Posture 81 (September 2020): 308–9. http://dx.doi.org/10.1016/j.gaitpost.2020.08.050.

40

Wichmann, Andreas, Amgad Agoub, Valentina Schmidt and Martin Kada. "RoofN3D: A Database for 3D Building Reconstruction with Deep Learning". Photogrammetric Engineering & Remote Sensing 85, No. 6 (01.06.2019): 435–43. http://dx.doi.org/10.14358/pers.85.6.435.

41

Li, Zhongliang, Yaofeng Tu and Zongmin Ma. "A Sample-Aware Database Tuning System With Deep Reinforcement Learning". Journal of Database Management 35, No. 1 (09.11.2023): 1–25. http://dx.doi.org/10.4018/jdm.333519.

Abstract:
Based on the relationship between client load and overall system performance, the authors propose a sample-aware deep deterministic policy gradient model. Specifically, they improve sample quality by filtering out sample noise caused by the fluctuations of client load, which accelerates the model convergence speed of the intelligent tuning system and improves the tuning effect. Also, the hardware resources and client load consumed by the database in the working process are added to the model for training. This can enhance the performance characterization ability of the model and improve the recommended parameters of the algorithm. Meanwhile, they propose an improved closed-loop distributed comprehensive training architecture of online and offline training to quickly obtain high-quality samples and improve the efficiency of parameter tuning. Experimental results show that the configuration parameters can make the performance of the database system better and shorten the tuning time.
42

Wu, Zhaohui, Lu Jiang, Qinghua Zheng and Jun Liu. "Learning to Surface Deep Web Content". Proceedings of the AAAI Conference on Artificial Intelligence 24, No. 1 (05.07.2010): 1967–68. http://dx.doi.org/10.1609/aaai.v24i1.7779.

Abstract:
We propose a novel deep web crawling framework based on reinforcement learning. The crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and submits a selected action (query) to the environment according to Q-value. Based on the framework we develop an adaptive crawling method. Experimental results show that it outperforms the state of art methods in crawling capability and breaks through the assumption of full-text search implied by existing methods.
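The framework above treats query selection as a reinforcement-learning problem: the crawler submits the query with the highest Q-value and updates that value from the observed reward. A minimal tabular sketch under assumed reward semantics (e.g., the fraction of newly retrieved records); this is illustrative, not the paper's actual model:

```python
def select_query(q_table, candidate_queries):
    # greedy action: submit the candidate query with the highest current Q-value
    return max(candidate_queries, key=lambda q: q_table.get(q, 0.0))

def update_q(q_table, query, reward, alpha=0.5):
    # move the query's Q-value toward the observed reward
    old = q_table.get(query, 0.0)
    q_table[query] = old + alpha * (reward - old)
```

As rewards for a query diminish (few new records returned), its Q-value drops and the crawler automatically shifts to more productive queries.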
43

Rosado, Eduardo, Miguel Garcia-Remesal, Sergio Paraiso-Medina, Alejandro Pazos and Victor Maojo. "Using Machine Learning to Collect and Facilitate Remote Access to Biomedical Databases: Development of the Biomedical Database Inventory". JMIR Medical Informatics 9, No. 2 (25.02.2021): e22976. http://dx.doi.org/10.2196/22976.

Abstract:
Background Currently, existing biomedical literature repositories do not commonly provide users with specific means to locate and remotely access biomedical databases. Objective To address this issue, we developed the Biomedical Database Inventory (BiDI), a repository linking to biomedical databases automatically extracted from the scientific literature. BiDI provides an index of data resources and a path to access them seamlessly. Methods We designed an ensemble of deep learning methods to extract database mentions. To train the system, we annotated a set of 1242 articles that included mentions of database publications. Such a data set was used along with transfer learning techniques to train an ensemble of deep learning natural language processing models targeted at database publication detection. Results The system obtained an F1 score of 0.929 on database detection, showing high precision and recall values. When applying this model to the PubMed and PubMed Central databases, we identified over 10,000 unique databases. The ensemble model also extracted the weblinks to the reported databases and discarded irrelevant links. For the extraction of weblinks, the model achieved a cross-validated F1 score of 0.908. We show two use cases: one related to “omics” and the other related to the COVID-19 pandemic. Conclusions BiDI enables access to biomedical resources over the internet and facilitates data-driven research and other scientific initiatives. The repository is openly available online and will be regularly updated with an automatic text processing pipeline. The approach can be reused to create repositories of different types (ie, biomedical and others).
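The F1 scores quoted above combine precision and recall. For reference, computing F1 from raw true-positive, false-positive, and false-negative counts looks like this (a generic sketch, not BiDI code):

```python
def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```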
44

Jirakrit, Leelarungrayub, Yankai Araya and Thipcharoen Supattanawaree. "Knowledge Discovery on Artificial Intelligence and Physical Therapy: Document Mining Analysis". IgMin Research 2, No. 11 (21.11.2024): 929–37. http://dx.doi.org/10.61927/igmin270.

Abstract:
Artificial intelligence (AI) is the simulation of human intelligence and a benchmark in Physical Therapy (PT), so updated knowledge derived from large databases is highly engaging. The aim was Data Mining (DM) analysis of a large database related to "AI" and "PT" for the co-occurrence of words, network clusters, and trends under the Knowledge Discovery in Databases (KDD) framework. Documents citing the terms "AI" and "PT" were retrieved from the SCOPUS database, and co-occurrence, network clustering, and trends were computer-analyzed with a bibliometric tool. Between 1993 and 2024, 174 documents were published, revealing the most frequently used terms related to AI, human, PT, physical modalities, machine learning, physical treatment, deep learning, patient rehabilitation, robotics, virtual reality, algorithms, telerehabilitation, ergonomics, exercise, quality of life, and other related topics. Five network clusters were discovered: (1) AI, decision support systems, health care, human-computer interaction, intelligent robots, learning algorithms, neuromuscular, stroke, patient rehabilitation, PT, robotics, etc.; (2) aged, algorithms, biomechanics, exercise, exercise therapy, female, humans, machine learning, middle-aged, PT modalities, rehabilitation, treatment outcome, deep learning, etc.; (3) deep learning, diagnosis, and quality of life; (4) review and systematic review; and (5) clinical practice. From 2008 to 2024, a trend emerged in the fields of algorithms, computer-assisted diagnosis, treatment planning, classification, equipment design, signal processing, AI, exercise, physical and patient rehabilitation, robotics, virtual reality, machine learning, deep learning, clinical practice, etc. The discovered knowledge links AI with PT through different machine learning approaches for use in clinical practice.
45

Ibrahim, Haneen Siraj, Narjis Mezaal Shati and AbdulRahman A. Alsewari. "A Transfer Learning Approach for Arabic Image Captions". Al-Mustansiriyah Journal of Science 35, No. 3 (30.09.2024): 81–90. http://dx.doi.org/10.23851/mjs.v35i3.1485.

Abstract:
Background: Arabic image captioning (AIC) is the automatic generation of text descriptions in the Arabic language for images. Applies a transfer learning approach in deep learning to enhance computer vision and natural language processing. There are many datasets in English reverse other languages. Instead of, the Arabs researchers unanimously agreed that there is a lack of Arabic databases available in this field. Objective: This paper presents the improvement and processing of the available Arabic textual database using Google spreadsheets for translation and creation of AR. Flicker8k2023 dataset is an extension of the Arabic Flicker8k dataset available, it was uploaded to GitHub and made public for researches. Methods: An efficient model proposed using deep learning techniques by including two pre-training models (VGG16 and VGG19), to extract features from the images and build (LSTM and GRU) models to process textual prediction sequence. In addition to the effect of pre-processing the text in Arabic. Results: The adopted model outperforms better compared to the previous study in BLEU-1 from 33 to 40. Conclusions: This paper concluded that the biggest problem is the database available in the Arabic language. This paper has worked to increase the size of the text database from 24,276 to 32,364 thousand captions, where each image contains 4 captions.
46

Zhang, Lan, Yu Feng Nie and Zhen Hai Wang. "Image De-Noising Using Deep Learning". Applied Mechanics and Materials 641-642 (September 2014): 1287–90. http://dx.doi.org/10.4028/www.scientific.net/amm.641-642.1287.

Abstract:
The deep neural network, as part of the deep learning family of algorithms, is a state-of-the-art approach for finding higher-level representations of input data and has been introduced successfully to many practical and challenging learning problems. The primary goal of deep learning is to use large data to help solve a given machine learning task. We propose a methodology for an image de-noising task based on this model and train it on a large image database to obtain the experimental output. The results show the robustness and efficiency of our algorithm.
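The core idea above — train a network to reconstruct its input so that it learns a useful internal representation — can be reduced to a toy scalar autoencoder trained by gradient descent. This is purely illustrative (the paper uses deep networks on large image data); the learning rate and step count are arbitrary choices for the sketch:

```python
def train_toy_autoencoder(data, steps=500, lr=0.01):
    # one scalar encoder weight w and one decoder weight v;
    # gradient descent minimizes the reconstruction error (v*w*x - x)^2
    w, v = 0.5, 0.5
    for _ in range(steps):
        for x in data:
            hidden = w * x          # encode
            recon = v * hidden      # decode
            err = recon - x
            v -= lr * err * hidden  # gradient of squared error w.r.t. v (up to a factor of 2)
            w -= lr * err * v * x   # gradient of squared error w.r.t. w
    return w, v
```

After training, the product of the two weights approaches 1, i.e., the model has learned to reconstruct its inputs; real autoencoders do the same through bottleneck layers that force a compressed representation.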
47

Syed Qamrul Kazmi, et al. "Image Retrieval Using Auto Encoding Features In Deep Learning". International Journal on Recent and Innovation Trends in Computing and Communication 11, No. 10 (02.11.2023): 155–71. http://dx.doi.org/10.17762/ijritcc.v11i10.8478.

Abstract:
The latest technologies and the growth in availability of image storage in day-to-day life have created vast stores of images in databases, and the many devices that capture images contribute to this huge repository. Keeping in mind the daily input to the database, one must think of retrieving those images according to specified criteria. Several techniques, such as object shape, the Discrete Wavelet Transform (DWT), and texture features, have been used to determine the type of an image and classify it. Segmentation also plays a vital role in image retrieval, but robustness is lacking in most cases. The retrieval process mainly depends on the special characteristics possessed by an image rather than the whole image. Two types of image retrieval can be seen: one for general objects and another specific to some type of application. Modern deep neural networks for unsupervised feature learning, like the Deep Autoencoder (AE), learn embedded representations by stacking layers on top of each other. These learnt embedded representations, however, may degrade as the AE network deepens due to vanishing gradients, resulting in decreased performance. We introduce here the ResNet Autoencoder (RAE) and its convolutional version (C-RAE) for unsupervised feature-based learning. The proposed model is tested on three distinct databases, Corel1K, Cifar-10, and Cifar-100, which differ in size. The presented algorithm significantly reduces computation time and provides very high levels of image retrieval accuracy.
48

Zhu, Ge and Liu. "Deep Learning-Based Classification of Weld Surface Defects". Applied Sciences 9, No. 16 (12.08.2019): 3312. http://dx.doi.org/10.3390/app9163312.

Abstract:
In order to realize non-destructive intelligent identification of weld surface defects, an intelligent recognition method based on deep learning is proposed, formed mainly by a convolutional neural network (CNN) and a random forest. First, high-level features are automatically learned through the CNN, and a random forest is trained with the extracted high-level features to predict the classification results. Secondly, images of weld surface defects are collected and preprocessed by image enhancement and threshold segmentation, and a database of weld surface defects is established from the pre-processed images. Finally, comparative experiments are performed on the weld surface defect database. The results show that the accuracy of the method combining a CNN and a random forest can reach 0.9875, demonstrating that the method is effective and practical.
49

Barreto, Fabian, Jignesh Sarvaiya and Suprava Patnaik. "Learning Representations for Face Recognition: A Review from Holistic to Deep Learning". Advances in Technology Innovation 7, No. 4 (05.08.2022): 279–94. http://dx.doi.org/10.46604/aiti.2022.8308.

Abstract:
For decades, researchers have investigated how to recognize facial images. This study reviews the development of different face recognition (FR) methods, namely holistic learning, handcrafted local feature learning, shallow learning, and deep learning (DL). As these methods developed, the accuracy of recognizing faces in the Labeled Faces in the Wild (LFW) database increased: holistic learning reaches 60%, handcrafted local feature learning raises this to 70%, and shallow learning reaches 86%. Finally, DL achieves human-level performance (97% accuracy). This enhanced accuracy is driven by large datasets and graphics processing units (GPUs) with massively parallel processing capabilities. Furthermore, FR challenges and current research studies are discussed to understand future research directions. The results of this study show that accuracy on the LFW database has now reached 99.85%.
50

Zhou, Xiao Qing, and Xiao Ping Tang. "A Kind of Web Database Classification Based on Machine Learning". Applied Mechanics and Materials 50-51 (February 2011): 644–48. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.644.

Abstract:
Traditional search engines are unable to correctly search the massive information hidden in the Deep Web. Classification of Web databases is a key step in Deep Web integration and retrieval. This article proposes a machine-learning-based approach to Web database classification. Experiments indicate that after training on only a few samples, this approach achieves very good classification results; as the number of training samples increases, the classifier's performance remains stable, with accuracy and recall fluctuating only within a very small range.