Journal articles on the topic "VGG16 MODEL"

To see other types of publications on this topic, follow the link: VGG16 MODEL.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "VGG16 MODEL".

Next to every entry in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, provided the relevant data are present in the publication's metadata.

Browse journal articles across a wide variety of disciplines and compile your bibliography correctly.

1

Sukegawa, Shintaro, Kazumasa Yoshii, Takeshi Hara, Katsusuke Yamashita, Keisuke Nakano, Norio Yamamoto, Hitoshi Nagatsuka, and Yoshihiko Furuki. "Deep Neural Networks for Dental Implant System Classification." Biomolecules 10, no. 7 (July 1, 2020): 984. http://dx.doi.org/10.3390/biom10070984.

Abstract:
In this study, we used panoramic X-ray images to classify and clarify the accuracy of different dental implant brands via deep convolutional neural networks (CNNs) with transfer-learning strategies. For objective labeling, 8859 implant images of 11 implant systems were used from digital panoramic radiographs obtained from patients who underwent dental implant treatment at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2019. Five deep CNN models (specifically, a basic CNN with three convolutional layers, VGG16 and VGG19 transfer-learning models, and finely tuned VGG16 and VGG19) were evaluated for implant classification. Among the five models, the finely tuned VGG16 model exhibited the highest implant classification performance. The finely tuned VGG19 was second best, followed by the normal transfer-learning VGG16. We confirmed that the finely tuned VGG16 and VGG19 CNNs could accurately classify dental implant systems from 11 types of panoramic X-ray images.
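The distinction drawn here between plain transfer learning and fine-tuning can be illustrated with a minimal Keras sketch; this is not the authors' code, and the 11-class head, input size, unfrozen block, and learning rates are illustrative assumptions:

```python
# Minimal Keras sketch contrasting plain transfer learning with
# fine-tuning on VGG16. The 11-class head mirrors the 11 implant systems
# above; input size, unfrozen block, and learning rates are assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

def build_vgg16_classifier(num_classes=11, fine_tune=False):
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    if fine_tune:
        # Fine-tuning: only the last convolutional block stays trainable.
        base.trainable = True
        for layer in base.layers:
            layer.trainable = layer.name.startswith("block5")
    else:
        # Plain transfer learning: freeze the whole convolutional base.
        base.trainable = False
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # A smaller learning rate is customary when pre-trained weights move.
    lr = 1e-5 if fine_tune else 1e-4
    model.compile(optimizer=optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```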
2

Kumar, Vijay, Anis Zarrad, Rahul Gupta, and Omar Cheikhrouhou. "COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques." Journal of Healthcare Engineering 2022 (April 11, 2022): 1–13. http://dx.doi.org/10.1155/2022/6216273.

Abstract:
In this paper, modifications of neoteric architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 using chest X-rays. The proposed architectures, termed "COV-DLS", consist of two phases: heading model construction and classification. The heading model construction phase utilizes four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. These neoteric architectures are modified by incorporating average pooling and dense layers; a dropout layer is also added to prevent overfitting, along with two dense layers with different activation functions. Thereafter, the outputs of these modified models are used in the classification phase, in which COV-DLS is applied to a COVID-19 chest X-ray image dataset. Classification accuracy of 98.61% is achieved by Modified-VGG16, 97.22% by Modified-VGG19, 95.13% by Modified-ResNet50, and 99.31% by Modified-InceptionV3. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
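The kind of head modification described (average pooling, dropout against overfitting, two dense layers with different activations) might look roughly like the sketch below on a VGG16 backbone; the concrete sizes, rates, and class count are assumptions, not the paper's values:

```python
# Hypothetical sketch of the modified head described above: average
# pooling, dropout against overfitting, and two dense layers with
# different activations on a frozen VGG16 base. Sizes and the dropout
# rate are assumptions, not the paper's values.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse the ImageNet features unchanged

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # average pooling layer
    layers.Dense(128, activation="relu"),   # first dense layer
    layers.Dropout(0.5),                    # dropout to prevent overfitting
    layers.Dense(2, activation="softmax"),  # second dense layer (COVID vs. normal assumed)
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```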
3

Lai, Ren Yu, Kim Gaik Tay, Audrey Huong, Chang Choon Chew, and Shuhaida Ismail. "Dorsal hand Vein Authentication System Using Convolution Neural Network." International Journal of Emerging Technology and Advanced Engineering 12, no. 8 (August 2, 2022): 83–90. http://dx.doi.org/10.46338/ijetae0822_11.

Abstract:
This study proposes a dorsal hand vein authentication system using transfer learning from the VGG16 and VGG19 convolutional neural network models. The required images were obtained from the Bosphorus Hand Vein Database. Among the 100 users, the first 80 were treated as registered users and the remaining 20 as unregistered users. A total of 960 left-hand images of the registered users were used for training. Meanwhile, 100 images, consisting of 80 registered and 20 unregistered users, were randomly selected for testing the authentication application. Our results showed that VGG19 produced a superior validation accuracy compared to VGG16: 96.9% and 94.3%, respectively. The testing accuracies of VGG16 and VGG19 are 99% and 100%, respectively. Since VGG19 is shown to outperform its shallower counterpart, we implemented a User Interface (UI) based on the VGG19 model for dorsal hand vein identification. These findings indicate that our system may be deployed for biometric authentication in the future for a more efficient and secure implementation of person identification and imposter detection.
4

Bodavarapu, Pavan Nageswar Reddy, and P. V. V. S. Srinivas. "Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques." Indian Journal of Science and Technology 14, no. 12 (March 27, 2021): 971–83. http://dx.doi.org/10.17485/ijst/v14i12.14.

Abstract:
Background/Objectives: Only limited research work is being done in the field of facial expression recognition on low-resolution images. Most real-world images are low resolution and may also contain noise, so this study designs a novel convolutional neural network model (FERConvNet) that can perform better on low-resolution images. Methods: We proposed a model and compared it with state-of-the-art models on the FER2013 dataset. There is no publicly available dataset containing low-resolution images for facial expression recognition (anger, sad, disgust, happy, surprise, neutral, fear), so we created a Low Resolution Facial Expression (LRFE) dataset containing more than 6000 images of seven types of facial expressions. The existing FER2013 dataset and the LRFE dataset were used. These datasets were divided in an 80:20 ratio for training and for testing and validation. A hybrid denoising method (HDM) is proposed, combining a Gaussian filter, a bilateral filter, and a non-local means denoising filter; this hybrid denoising method helps increase the performance of the convolutional neural network. The proposed model was then compared with the VGG16 and VGG19 models. Findings: The experimental results show that the proposed FERConvNet_HDM approach is more effective than VGG16 and VGG19 in facial expression recognition on both the FER2013 and LRFE datasets. The proposed FERConvNet_HDM approach achieved 85% accuracy on the FER2013 dataset, outperforming the VGG16 and VGG19 models, whose accuracies are 60% and 53%, respectively. The same FERConvNet_HDM approach, when applied to the LRFE dataset, achieved 95% accuracy. After analyzing the results, our FERConvNet_HDM approach performs better than VGG16 and VGG19 on both the FER2013 and LRFE datasets. Novelty/Applications: HDM with convolutional neural networks helps increase the performance of convolutional neural networks in facial expression recognition. Keywords: Facial expression recognition; facial emotion; convolutional neural network; deep learning; computer vision
5

Shinde, Krishna K., and C. N. Kayte. "Fingerprint Recognition Based on Deep Learning Pre-Train with Our Best CNN Model for Person Identification." ECS Transactions 107, no. 1 (April 24, 2022): 2209–20. http://dx.doi.org/10.1149/10701.2209ecst.

Abstract:
In this article, we use VGG16, VGG19, and ResNet50 pre-trained with ImageNet weights, along with our best CNN model, to identify human fingerprint patterns. The system includes a pre-processing phase in which the input fingerprint images are first cropped and normalized, to remove unwanted parts and standardize their dimensions; second, image enhancement is applied to remove noise from the ridgelines; and last, the Canny edge detection technique with Gaussian smoothing is applied to the image to remove noise. Each model is then applied in turn to the KVKR fingerprint dataset. Our best CNN model automatically extracts features, and the RMSprop optimizer is used for classifying these features. This study performs experimental work on each pre-processed dataset, testing the three models with different sizes of training, test, and validation data. The VGG16 model achieved better recognition accuracy than the VGG19 and ResNet50 models.
6

Athavale, Vijay Anant, Suresh Chand Gupta, Deepak Kumar, and Savita. "Human Action Recognition Using CNN-SVM Model." Advances in Science and Technology 105 (April 2021): 282–90. http://dx.doi.org/10.4028/www.scientific.net/ast.105.282.

Abstract:
In this paper, the pre-trained CNN model VGG16 with an SVM classifier is presented for the HAR task. The deep features are learned via the VGG16 pre-trained CNN model; the VGG16 network was previously used for image classification tasks. We used VGG16 for signal classification of human activity, recorded by the accelerometer sensor of a mobile phone. The UniMiB dataset contains 11,771 samples of daily-life human activities, recorded by a smartphone through its accelerometer sensor. The features are learned via the fifth max-pooling layer of the VGG16 CNN model and fed to the SVM classifier, which replaces the fully connected layer of the VGG16 model. The proposed VGG16-SVM model achieves effective and efficient results and is compared with previously used schemes. Classification accuracy and F-score are the evaluation parameters, and the proposed method provided 79.55% accuracy and a 71.63% F-score.
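The described pipeline, deep features taken at VGG16's fifth max-pooling layer with an SVM replacing the fully connected head, can be sketched as follows; the input shape, preprocessing, and SVM settings are assumptions, not the paper's configuration:

```python
# Sketch of the VGG16 + SVM pipeline: deep features are read off the
# fifth max-pooling layer ("block5_pool") and an SVM stands in for the
# fully connected head. Input shape and SVM settings are assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from sklearn.svm import SVC

vgg = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
extractor = Model(inputs=vgg.input,
                  outputs=vgg.get_layer("block5_pool").output)

def deep_features(images):
    """images: preprocessed float array (n, 224, 224, 3) -> (n, 7*7*512)."""
    feats = extractor.predict(images, verbose=0)
    return feats.reshape(len(images), -1)

# X_train/X_test would hold the accelerometer samples rendered as images:
# clf = SVC(kernel="rbf").fit(deep_features(X_train), y_train)
# print(clf.score(deep_features(X_test), y_test))
```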
7

Ko, Kyung-Kyu, and Eun-Sung Jung. "Improving Air Pollution Prediction System through Multimodal Deep Learning Model Optimization." Applied Sciences 12, no. 20 (October 15, 2022): 10405. http://dx.doi.org/10.3390/app122010405.

Abstract:
Many forms of air pollution increase as science and technology rapidly advance. In particular, fine dust harms the human body, causing or worsening heart- and lung-related diseases. In this study, the level of fine dust in Seoul 8 h ahead is predicted to prevent health damage in advance. We construct a dataset by combining two modalities (numerical and image data) for accurate prediction. In addition, we propose a multimodal deep learning model combining a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN). An LSTM AutoEncoder is chosen as the model for numerical time-series data processing, together with a basic CNN. A Visual Geometry Group network (VGGNet: VGG16, VGG19), a standard deep CNN architecture with multiple layers, is also chosen as the CNN model for image processing, to compare performance differences according to network depth. Our multimodal deep learning model using two modalities (numerical and image data) showed better performance than a single deep learning model using only one modality (numerical data). Specifically, performance improved by up to 14.16% when the deeper VGG19 model was used rather than the VGG16 model.
8

Hasan, Moh Arie, Yan Riyanto, and Dwiza Riana. "Grape leaf image disease classification using CNN-VGG16 model." Jurnal Teknologi dan Sistem Komputer 9, no. 4 (July 5, 2021): 218–23. http://dx.doi.org/10.14710/jtsiskom.2021.14013.

Abstract:
This study aims to classify disease images on grape leaves using image processing. The segmentation uses the k-means clustering algorithm, the feature extraction process uses the VGG16 transfer learning technique, and the classification uses a CNN. The dataset is from Kaggle and contains 4000 grape leaf images in four classes: leaves with black measles, leaf spot, healthy leaves, and blight. A further 100 images from Google were also used as test data from outside the dataset. The training accuracy of the CNN model is 99.50%. The classification yields an accuracy of 97.25% using the test data, while using test image data from outside the dataset obtains an accuracy of 95%. The designed image processing method can be applied to identify and classify disease images on grape leaves.
9

Singh, Tajinder Pal, Sheifali Gupta, Meenu Garg, Amit Verma, V. V. Hung, H. H. Thien, and Md Khairul Islam. "Transfer and Deep Learning-Based Gurmukhi Handwritten Word Classification Model." Mathematical Problems in Engineering 2023 (May 3, 2023): 1–20. http://dx.doi.org/10.1155/2023/4768630.

Abstract:
The world has a vast collection of text with an abundance of knowledge. However, manually reading and recognizing text written in numerous regional scripts is a difficult and time-consuming process. The task becomes more critical with Gurmukhi script due to the complex structure of its characters, which motivates the challenges in designing an error-free and accurate classification model of Gurmukhi characters. In this paper, the authors customize a convolutional neural network model to classify handwritten Gurmukhi words. Furthermore, a dataset has been prepared with 24,000 handwritten Gurmukhi word images with 12 classes representing the months' names. The dataset has been collected from 500 users of heterogeneous professions and age groups. The dataset has been simulated using the proposed CNN model as well as various pretrained models, namely ResNet50, VGG19, and VGG16, at 100 epochs and a batch size of 40. The proposed CNN model obtained the best accuracy value of 0.9973, whereas the ResNet50 model obtained an accuracy of 0.4015, VGG19 an accuracy of 0.7758, and VGG16 an accuracy of 0.8056. With the current accuracy rate, a noncomplex architectural pattern, and the prowess gained through learning from different writing styles, the proposed CNN model will be of great benefit to researchers working in this area for use in other ImageNet-based classification problems.
10

Shakhovska, Nataliya, and Pavlo Pukach. "Comparative Analysis of Backbone Networks for Deep Knee MRI Classification Models." Big Data and Cognitive Computing 6, no. 3 (June 21, 2022): 69. http://dx.doi.org/10.3390/bdcc6030069.

Abstract:
This paper focuses on different types of backbone networks for machine learning architectures which perform classification of knee Magnetic Resonance Imaging (MRI) images. It aims to compare different types of feature extraction networks for the same classification task in terms of accuracy and performance. Multiple variations of machine learning models were trained based on the MRNet architecture, choosing AlexNet, ResNet, VGG-11, VGG-16, and EfficientNet as the backbone. The models were evaluated on the MRNet validation dataset, computing the Area Under the Receiver Operating Characteristic Curve (ROC-AUC), accuracy, F1 score, and Cohen's kappa as evaluation metrics. The MRNet-VGG16 model variant shows the best results for Anterior Cruciate Ligament (ACL) tear detection. For general abnormality detection, MRNet-VGG16 is dominated by MRNet-ResNet at confidence between 0.5 and 0.75 and by MRNet-VGG11 at confidence above 0.8. Due to the non-uniform nature of backbone network performance on different MRI planes, it is advisable to use an LR ensemble of: VGG16 on the coronal plane for all classification tasks; VGG16 on the axial plane for abnormality and ACL tear detection; AlexNet on the sagittal plane for abnormality detection and on the axial plane for meniscal tear detection; and VGG11 on the sagittal plane for ACL tear detection. The results also indicate that Cohen's kappa is a valuable metric in model evaluation for the MRNet dataset, as it provides deeper insight into classification decisions.
11

Priyatama, Adhigana, Zamah Sari, and Yufis Azhar. "Deep Learning Implementation using Convolutional Neural Network for Alzheimer’s Classification." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 7, no. 2 (March 26, 2023): 310–217. http://dx.doi.org/10.29207/resti.v7i2.4707.

Abstract:
Alzheimer's disease is the most common cause of dementia. Dementia refers to brain symptoms such as memory loss and difficulty with thinking, problem-solving, and even speaking. The development of these neuropsychiatric symptoms is usually examined using magnetic resonance imaging (MRI) of the brain. Detecting Alzheimer's disease from data such as MRI using machine learning has been a subject of research in recent years; this technology has facilitated the work of medical experts and accelerated the medical process. In this study, we target the classification of Alzheimer's disease images using a convolutional neural network (CNN) and transfer learning (VGG16 and VGG19). The objective is to classify Alzheimer's disease images into four classes recognized by medical experts and to report several evaluation metrics. Through experiments conducted on the dataset, this research has shown that the algorithms used are able to classify Alzheimer's disease MRIs into the four classes known to medical experts. The accuracy of the first model (CNN) is 75.01%, of the second (VGG16) 80.10%, and of the third (VGG19) 80.28%.
12

Aytekin, Alper, and Vasfiye Mençik. "Detection of Driver Dynamics with VGG16 Model." Applied Computer Systems 27, no. 1 (June 1, 2022): 83–88. http://dx.doi.org/10.2478/acss-2022-0009.

Abstract:
One of the most important factors triggering traffic accidents is drivers continuing to drive in a tired and drowsy state. Transfer learning methods offer a great opportunity to regularly monitor the driver's dynamics while driving, to warn the driver in case of possible drowsiness, and to refocus their attention in order to prevent drowsiness-related traffic accidents. A classification study was carried out with the aim of detecting driver drowsiness from the position of the eyelids and the presence of yawning, using a Convolutional Neural Network (CNN) architecture. The dataset used in the study includes the faces of drivers of different genders and ages while driving. Accuracy and F1-score parameters were used for the experimental studies. The results achieved are 91% accuracy for the VGG16 model and an F1-score of over 90% for each class.
13

Desai, Chitra. "Image Classification Using Transfer Learning and Deep Learning." International Journal of Engineering and Computer Science 10, no. 9 (September 23, 2021): 25394–98. http://dx.doi.org/10.18535/ijecs/v10i9.4622.

Abstract:
Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge started in 2010. Image classification has further advanced in the field of computer vision with the dawn of transfer learning. Training a model on a huge dataset demands huge computational resources and adds a lot of cost to learning. Transfer learning reduces the cost of learning and also helps avoid reinventing the wheel. There are several widely used pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification, based on a fully connected network. This classifier uses features extracted from the convolutional base model.
14

Singh, Dilbag, Yavuz Selim Taspinar, Ramazan Kursun, Ilkay Cinar, Murat Koklu, Ilker Ali Ozkan, and Heung-No Lee. "Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models." Electronics 11, no. 7 (March 22, 2022): 981. http://dx.doi.org/10.3390/electronics11070981.

Abstract:
Pistachio is a shelled fruit from the Anacardiaceae family, native to the Middle East. Kirmizi and Siirt pistachios are the major types grown and exported in Turkey. Since the prices, tastes, and nutritional values of these types differ, the type of pistachio becomes important when it comes to trade. This study aims to identify these two types of pistachios, which are frequently grown in Turkey, by classifying them via convolutional neural networks. Within the scope of the study, images of Kirmizi and Siirt pistachio types were obtained through a computer vision system. The dataset includes a total of 2148 images, 1232 of the Kirmizi type and 916 of the Siirt type. Three different convolutional neural network models were used to classify these images. The models were trained using the transfer learning method, with AlexNet and the pre-trained models VGG16 and VGG19. The dataset was divided into 80% training and 20% test sets. As a result of the performed classifications, the success rates obtained from the AlexNet, VGG16, and VGG19 models are 94.42%, 98.84%, and 98.14%, respectively. Model performance was evaluated through sensitivity, specificity, precision, and F-1 score metrics; in addition, ROC curves and AUC values were used in the performance evaluation. The highest classification success was achieved with the VGG16 model. The obtained results reveal that these methods can be used successfully to determine pistachio types.
15

Noprisson, Handrie. "Fine-Tuning Model Transfer Learning VGG16 Untuk Klasifikasi Citra Penyakit Tanaman Padi." JSAI (Journal Scientific and Applied Informatics) 5, no. 3 (November 26, 2022): 244–49. http://dx.doi.org/10.36085/jsai.v5i3.3609.

Abstract:
The high level of demand for this commodity encourages improving agricultural yields by tackling diseases of rice plants. Detecting rice plant diseases from the start of planting significantly reduces their impact on plant growth, and proper treatment based on early identification of disease cases increases agricultural productivity. This study aims to analyze the classification performance on rice plant diseases of a convolutional neural network (CNN) with the VGG16 architecture using fine-tuning. To process the dataset and group the data into four classes (BrownSpot, Healthy, Hispa, and LeafBlast), this study uses several methodological stages: data preparation, feature extraction, training, and model comparison and evaluation. As a result, VGG16 without fine-tuning obtained 50.88% accuracy, while VGG16 with fine-tuning obtained 63.50% accuracy in the training process. In the validation process, VGG16 without fine-tuning obtained 52.50% accuracy, while VGG16 with fine-tuning obtained 62.08%. In the testing process, VGG16 without fine-tuning obtained 54.19% accuracy, while VGG16 with fine-tuning obtained 62.21%.
16

Hwang, Jung, Jae Seo, Jeong Kim, Suyoung Park, Young Kim, and Kwang Kim. "Comparison between Deep Learning and Conventional Machine Learning in Classifying Iliofemoral Deep Venous Thrombosis upon CT Venography." Diagnostics 12, no. 2 (January 21, 2022): 274. http://dx.doi.org/10.3390/diagnostics12020274.

Abstract:
In this study, we aimed to investigate quantitative differences in performance when comparing the automated classification of deep vein thrombosis (DVT) using two categories of artificial intelligence algorithms: deep learning based on convolutional neural networks (CNNs) and conventional machine learning. We retrospectively enrolled 659 participants (282 DVT patients; 377 normal controls) who were evaluated using contrast-enhanced lower extremity computed tomography (CT) venography. The conventional machine learning methods were logistic regression (LR), support vector machines (SVM), random forests (RF), and extreme gradient boosting (XGB). The CNN-based deep learning models were VGG16, VGG19, ResNet50, and ResNet152. According to the mean generated AUC values, we found that the CNN-based VGG16 model showed a 0.007 higher performance (0.982 ± 0.014) than the XGB model (0.975 ± 0.010), which had the highest performance among the conventional machine learning models. In the conventional machine learning-based classifications, the radiomic features presenting a statistically significant effect were the median values and skewness. We found that the VGG16 model distinguished deep vein thrombosis on CT images most accurately, with slightly higher AUC values than the other AI algorithms used in this study. Our results guide research directions and medical practice.
17

Wu, Shengbin. "Expression Recognition Method Using Improved VGG16 Network Model in Robot Interaction." Journal of Robotics 2021 (December 20, 2021): 1–9. http://dx.doi.org/10.1155/2021/9326695.

Abstract:
Aiming at the problems of poor representation ability and limited feature data when traditional expression recognition methods are applied to intelligent applications, an expression recognition method based on an improved VGG16 network is proposed. First, the VGG16 network is improved by using large convolution kernels instead of small ones and by reducing some fully connected layers to lower the complexity and parameter count of the model. Then, the high-dimensional abstract feature data output by the improved VGG16 is input into a convolutional neural network (CNN) for training, so as to output the expression types with high accuracy. Finally, the expression recognition method combining the improved VGG16 and the CNN model is applied to human-computer interaction on the NAO robot, which makes different interactive actions according to different expressions. The experimental results based on the CK+ dataset show that the improved VGG16 network has strong supervised learning ability. It extracts features well for different expression types, and its overall recognition accuracy is close to 90%. Across multiple tests, the interactive results show that the robot can stably recognize emotions and make the corresponding action interactions.
18

Khan, Inam Ullah, Mohaimenul Azam Khan Raiaan, Kaniz Fatema, Sami Azam, Rafi ur Rashid, Saddam Hossain Mukta, Mirjam Jonkman, and Friso De Boer. "A Computer-Aided Diagnostic System to Identify Diabetic Retinopathy, Utilizing a Modified Compact Convolutional Transformer and Low-Resolution Images to Reduce Computation Time." Biomedicines 11, no. 6 (May 28, 2023): 1566. http://dx.doi.org/10.3390/biomedicines11061566.

Abstract:
Diabetic retinopathy (DR) is the foremost cause of blindness in people with diabetes worldwide, and early diagnosis is essential for effective treatment. Unfortunately, the present DR screening method requires the skill of ophthalmologists and is time-consuming. In this study, we present an automated system for DR severity classification employing the fine-tuned Compact Convolutional Transformer (CCT) model to overcome these issues. We assembled five datasets to generate a more extensive dataset containing 53,185 raw images. Various image pre-processing techniques and 12 types of augmentation procedures were applied to improve image quality and create a massive dataset. A new DR-CCTNet model is proposed. It is a modification of the original CCT model to address training time concerns and work with a large amount of data. Our proposed model delivers excellent accuracy even with low-pixel images and still has strong performance with fewer images, indicating that the model is robust. We compare our model’s performance with transfer learning models such as VGG19, VGG16, MobileNetV2, and ResNet50. The test accuracy of the VGG19, ResNet50, VGG16, and MobileNetV2 were, respectively, 72.88%, 76.67%, 73.22%, and 71.98%. Our proposed DR-CCTNet model to classify DR outperformed all of these with a 90.17% test accuracy. This approach provides a novel and efficient method for the detection of DR, which may lower the burden on ophthalmologists and expedite treatment for patients.
19

Jiang, Zhi-Peng, Yi-Yang Liu, Zhen-En Shao, and Ko-Wei Huang. "An Improved VGG16 Model for Pneumonia Image Classification." Applied Sciences 11, no. 23 (November 25, 2021): 11185. http://dx.doi.org/10.3390/app112311185.

Abstract:
Image recognition has been applied to many fields, but relatively rarely to medical images. Recent significant progress in deep learning for image recognition has raised strong research interest in medical image recognition. First of all, we examined the failed predictions of the VGG16 model on pneumonia X-ray images. This paper therefore proposes IVGG13 (Improved Visual Geometry Group-13), a modified VGG16 model for classifying pneumonia X-ray images. Open-source thoracic X-ray images acquired from the Kaggle platform were employed for pneumonia recognition, but only a few data were obtained, and the datasets were unbalanced after classification, either of which can result in extremely poor recognition from trained neural network models. Therefore, we applied augmentation pre-processing to compensate for the low data volume and poorly balanced datasets. The original datasets without data augmentation were trained using the proposed and some well-known convolutional neural networks, such as LeNet, AlexNet, GoogLeNet, and VGG16. In the experimental results, the recognition rates and other evaluation criteria, such as precision, recall, and F-measure, were evaluated for each model. This process was repeated for the augmented and balanced datasets, with greatly improved metrics such as precision, recall, and F1-measure. The proposed IVGG13 model produced superior outcomes on the F1-measure compared with the current best-practice convolutional neural networks for medical image recognition, confirming that data augmentation effectively improves model accuracy.
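Augmentation pre-processing of the kind described, used to compensate for low data volume and class imbalance, might be set up as in this hedged sketch; the particular transforms and their ranges are assumptions, not the paper's exact settings:

```python
# Sketch of augmentation pre-processing of the kind described above,
# compensating for a small, unbalanced X-ray dataset. The transforms and
# their ranges are assumptions, not the paper's exact settings.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,       # slight rotations
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
    zoom_range=0.1,          # mild zoom
    horizontal_flip=True,    # mirrored chest X-rays
)

# flow_from_directory expects one sub-folder per class (e.g. NORMAL,
# PNEUMONIA) and yields freshly augmented batches for every epoch:
# batches = train_gen.flow_from_directory("chest_xray/train",
#                                         target_size=(224, 224),
#                                         batch_size=32,
#                                         class_mode="categorical")
```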
20

Buvana, M., K. Muthumayil, S. Senthil kumar, Jamel Nebhen, Sultan S. Alshamrani, and Ihsan Ali. "Deep Optimal VGG16 Based COVID-19 Diagnosis Model." Computers, Materials & Continua 70, no. 1 (2022): 43–58. http://dx.doi.org/10.32604/cmc.2022.019331.

21

Salamah, Umniy, Anita Ratnasari, and Sarwati Rahayu. "Automated Fruit Classification Menggunakan Model VGG16 dan MobileNetV2." JSAI (Journal Scientific and Applied Informatics) 5, no. 3 (November 26, 2022): 176–81. http://dx.doi.org/10.36085/jsai.v5i3.3615.

Abstract:
Developing robots or machines to assist agricultural activities requires long-running research. The technology must be capable of performing a wide range of activities and able to detect the objects its work targets. To this end, research on detecting agricultural objects, such as fruit, is a research agenda that needs to be pursued and developed. The purpose of this study is to compare the performance of the deep learning models VGG16 and MobileNetV2 for fruit classification. This study uses a dataset with a total of 90,483 images of 100x100 pixels, with 131 classes of fruit plants to be classified. In testing on this dataset, MobileNetV2 obtained an accuracy of 98.4% and ResNet50 obtained an accuracy of 99.2%.
22

Choi, Wansuk, and Seoyoon Heo. "Deep Learning Approaches to Automated Video Classification of Upper Limb Tension Test." Healthcare 9, no. 11 (November 18, 2021): 1579. http://dx.doi.org/10.3390/healthcare9111579.

Abstract:
The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by incorporating pre-trained convolutional neural network (CNN) models into a Python-based deep learning process. Videos were processed on YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling, model creation, and model fitting were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained from the Xception, InceptionV3, and DenseNet201 models, which are evaluated as excellent compared with the other models. On the other hand, VGG16, VGG19, and ResNet101 gave relatively high validation loss and low validation accuracy compared with the other models. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos and that there is a difference in performance between models.
23

Karac, Abdulkadir. "Predicting COVID-19 Cases on a Large Chest X-Ray Dataset Using Modified Pre-trained CNN Architectures." Applied Computer Systems 28, no. 1 (June 1, 2023): 44–57. http://dx.doi.org/10.2478/acss-2023-0005.

Abstract:
The Coronavirus is a virus that spreads very quickly and has therefore had very destructive effects in many areas worldwide. Because X-ray imaging is an easily accessible, fast, and inexpensive method, it is widely used worldwide to diagnose COVID-19. This study attempts to detect COVID-19 from X-ray images using pre-trained VGG16, VGG19, InceptionV3, and ResNet50 CNN architectures and modified versions of these architectures. The fully connected layers of the pre-trained architectures have been reorganized in the modified CNN architectures. These architectures were trained on binary and three-class datasets, revealing their classification performance. The dataset was collected from four different sources and consists of 594 COVID-19, 1345 viral pneumonia, and 1341 normal X-ray images. Models were built using the TensorFlow and Keras libraries with the Python programming language. Preprocessing was performed on the dataset by applying resizing, normalization, and one-hot encoding operations. Model performance was evaluated according to many metrics, such as recall, specificity, accuracy, precision, F1-score, confusion matrix, and ROC analysis, using 5-fold cross-validation. The highest classification performance was obtained by the modified VGG19 model, with 99.84% accuracy for binary classification (COVID-19 vs. Normal), and by the modified VGG16 model, with 98.26% accuracy for triple classification (COVID-19 vs. Pneumonia vs. Normal). These models have a higher accuracy rate than other studies in the literature. In addition, the number of COVID-19 X-ray images in the dataset used in this study is approximately two times higher than in other studies; since it is obtained from different sources, it is irregular and does not have a standard. Despite this, it is noteworthy that higher classification performance was achieved than in previous studies. The modified VGG16 and VGG19 models (available at github.com/akaraci/LargeDatasetCovid19) can be used as an auxiliary tool in healthcare organizations with a shortage of specialists to detect COVID-19.
24

Yoshimoto, Yuma, and Hakaru Tamukoh. "FPGA Implementation of a Binarized Dual Stream Convolutional Neural Network for Service Robots." Journal of Robotics and Mechatronics 33, no. 2 (April 20, 2021): 386–99. http://dx.doi.org/10.20965/jrm.2021.p0386.

Abstract:
In this study, with the aim of installing an object recognition algorithm on the hardware device of a service robot, we propose a Binarized Dual Stream VGG-16 (BDS-VGG16) network model to realize high-speed computations and low power consumption. The BDS-VGG16 model has improved in terms of the object recognition accuracy by using not only RGB images but also depth images. It achieved a 99.3% accuracy in tests using an RGB-D Object Dataset. We have also confirmed that the proposed model can be installed in a field-programmable gate array (FPGA). We have further installed BDS-VGG16 Tiny, a small BDS-VGG16 model in XCZU9EG, a system on a chip with a CPU and a middle-scale FPGA on a single chip that can be installed in robots. We have also integrated the BDS-VGG16 Tiny with a robot operating system. As a result, the BDS-VGG16 Tiny installed in the XCZU9EG FPGA realizes approximately 1.9-times more computations than the one installed in the graphics processing unit (GPU) with a power efficiency approximately 8-times higher than that installed in the GPU.
25

Jahan, Israt, K. M. Aslam Uddin, Saydul Akbar Murad, M. Saef Ullah Miah, Tanvir Zaman Khan, Mehedi Masud, Sultan Aljahdali, and Anupam Kumar Bairagi. "4D: A Real-Time Driver Drowsiness Detector Using Deep Learning." Electronics 12, no. 1 (January 3, 2023): 235. http://dx.doi.org/10.3390/electronics12010235.

Abstract:
There are a variety of potential uses for the classification of eye conditions, including tiredness detection, psychological condition evaluation, etc. Because of its significance, many studies utilizing typical neural network algorithms have already been published in the literature, with good results. Convolutional neural networks (CNNs) are employed in real-time applications to achieve two goals: high accuracy and speed. However, identifying drowsiness at an early stage significantly improves the chances of being saved from accidents. Drowsiness detection can be automated by using the potential of artificial intelligence (AI), which allows us to assess more cases in less time and with a lower cost. With the help of modern deep learning (DL) and digital image processing (DIP) techniques, in this paper, we suggest a CNN model for eye state categorization, and we tested it on three CNN models (VGG16, VGG19, and 4D). A novel CNN model named the 4D model was designed to detect drowsiness based on eye state. The MRL Eye dataset was used to train the model. When trained with training samples from the same dataset, the 4D model performed very well (around 97.53% accuracy for predicting the eye state in the test dataset). The 4D model outperformed the performance of two other pretrained models (VGG16, VGG19). This paper explains how to create a complete drowsiness detection system that predicts the state of a driver’s eyes to further determine the driver’s drowsy state and alerts the driver before any severe threats to road safety.
26

Teimoor, Ramyar Abdulrahman, and Mihran Abdulrahim Mohammed. "COVID-19 Disease Detection Based on Machine Learning and Chest X-Ray Images." UHD Journal of Science and Technology 6, no. 2 (November 21, 2022): 126–34. http://dx.doi.org/10.21928/uhdjst.v6n2y2022.pp126-134.

Abstract:
Due to the increasing population, automated illness identification has become a critical problem in medical research. An automated illness detection framework aids physicians in disease diagnosis by providing precise, consistent, and quick findings, as well as lowering the mortality rate. Coronavirus (COVID-19) has expanded worldwide and is now one of the most severe and acute disorders. To keep COVID-19 from spreading, an automatic detection system based on chest X-ray images ought to be the quickest diagnostic alternative. The goal of this research is to come up with the best model for detecting COVID-19 with the greatest accuracy. Therefore, four models, Convolutional Neural Networks, Residual Network 50, Visual Geometry Group 16 (VGG16), and VGG19, have been evaluated using the same image preprocessing method. In this study, performance metrics including accuracy, precision, recall, and F1 scores are used to evaluate the proposed method. According to our findings, the VGG16 model is a viable candidate for detecting COVID-19 instances because it has the highest accuracy: an overall accuracy of 98.44% in the training phase, 98.05% in the validation phase, and 96.05% in the testing phase. The results of the other performance measurements are shown in the results section, demonstrating that the majority of the approaches are more than 90% accurate. Based on these results, radiologists may find the proposed VGG16 model to be an intriguing and helpful tool for detecting and diagnosing COVID-19 patients quickly.
27

Noprisson, Handrie, Ida Nurhaida, and Mariana Purba. "Perbandingan Algoritma Xception dan VGG16 Untuk Pengenalan Lebah Pollen-Bearing." JSAI (Journal Scientific and Applied Informatics) 5, no. 3 (November 26, 2022): 223–27. http://dx.doi.org/10.36085/jsai.v5i3.3611.

Abstract:
Scheduled observation helps beekeepers track bee diseases, hive health, and toxins that may be carried by the bees. If this can be done with the help of a computer, it will reduce the time and cost of beekeeping; in addition, honey and hive production will increase in both quality and quantity. This study aims to analyze the performance of the Xception and VGG16 algorithms for recognizing pollen-bearing bees. In the experiments, the VGG16 model with fine-tuning obtained the best testing accuracy, 83.33%, as well as the best Cohen's kappa, F1 score, ROC AUC, precision, and recall. The best Xception result was obtained without fine-tuning, at 72.22%. From these experimental results, it is concluded that the pre-trained VGG16 model with fine-tuning is better suited to the bee_pollen dataset than the Xception model, whether with or without fine-tuning.
28

Montaha, Sidratul, Sami Azam, Abul Kalam Muhammad Rakibul Haque Rafid, Pronab Ghosh, Md Zahid Hasan, Mirjam Jonkman, and Friso De Boer. "BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images." Biology 10, no. 12 (December 17, 2021): 1347. http://dx.doi.org/10.3390/biology10121347.

Abstract:
Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, erroneous mammogram-based interpretation may result in false diagnoses, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as its foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, k-fold cross-validation was carried out. The model was then tested on noisy mammograms to evaluate its robustness, and results were compared with previous studies. Results: The proposed BreastNet18 model performed best, with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach based on image processing, transfer learning, fine-tuning, and an ablation study has demonstrated highly accurate breast cancer classification while dealing with a limited number of complex medical images.
29

Yin, Helin, Yeong Hyeon Gu, Chang-Jin Park, Jong-Han Park, and Seong Joon Yoo. "Transfer Learning-Based Search Model for Hot Pepper Diseases and Pests." Agriculture 10, no. 10 (September 28, 2020): 439. http://dx.doi.org/10.3390/agriculture10100439.

Abstract:
The use of conventional classification techniques to recognize diseases and pests can lead to an incorrect judgment on whether crops are diseased or not. Additionally, hot pepper diseases such as "anthracnose" and "bacterial spot" can be erroneously judged, leading to incorrect disease recognition. To address these issues, multi-recognition methods, such as Google Cloud Vision, suggest multiple disease candidates and allow the user to make the final decision. Similarity-based image search techniques, along with multi-recognition, can also be used for this purpose. Content-based image retrieval techniques have been used in several conventional similarity-based image searches, using descriptors to extract features such as image color and edges. In this study, we use eight pre-trained deep learning models (VGG16, VGG19, ResNet50, etc.) to extract deep features from images. We conducted experiments using 28,011 images of 34 types of hot pepper diseases and pests. Disease and pest images similar to the query image were retrieved using deep features and the k-nearest neighbor method. In the top-1 to top-5 results, using the deep features based on the ResNet50 model, we achieved recognition accuracies of approximately 88.38–93.88% for diseases and approximately 95.38–98.42% for pests. Using the deep features extracted from the VGG16 and VGG19 models, we recorded the second and third highest performances, respectively. In the top-10 results, using the deep features extracted from the ResNet50 model, we achieved accuracies of 85.6% and 93.62% for diseases and pests, respectively. In a performance comparison between the proposed method and a simple convolutional neural network (CNN) model, the proposed method recorded 8.62% higher accuracy for diseases and 14.86% higher for pests than the CNN classification model.
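A similarity search of this kind, deep features indexed with the k-nearest neighbor method, can be sketched as below; VGG16 stands in for the backbones compared in the paper, and the pooling choice and k are assumptions:

```python
# Sketch of similarity search with deep features and k-nearest neighbors.
# VGG16 stands in for the backbones compared above; the average-pooled
# 512-d descriptor and k=5 are assumptions.
from tensorflow.keras.applications import VGG16
from sklearn.neighbors import NearestNeighbors

extractor = VGG16(weights="imagenet", include_top=False,
                  pooling="avg", input_shape=(224, 224, 3))

def embed(images):
    """images: preprocessed float array (n, 224, 224, 3) -> (n, 512)."""
    return extractor.predict(images, verbose=0)

# Index the gallery of disease/pest images once, then query by image:
# index = NearestNeighbors(n_neighbors=5).fit(embed(gallery_images))
# distances, top5_ids = index.kneighbors(embed(query_image[None, ...]))
```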
30

Prasetyo, Simeon Yuda, Ghinaa Zain Nabiilah, Zahra Nabila Izdihar, and Sani Muhamad Isa. "Pneumonia Detection on X-Ray Imaging using Softmax Output in Multilevel Meta Ensemble Algorithm of Deep Convolutional Neural Network Transfer Learning Models." International Journal of Advances in Intelligent Informatics 9, no. 2 (July 8, 2023): 319. http://dx.doi.org/10.26555/ijain.v9i2.884.

Abstract:
Pneumonia is the leading worldwide cause of death of children from a single infection. A proven clinical method for diagnosing pneumonia is through a chest X-ray. However, the resulting X-ray images are often unclear, leading to subjective judgments, and the diagnosis process requires a long time. One applicable technique is advanced deep learning, namely transfer learning with deep convolutional neural networks (Deep CNN) and modified multilevel meta ensemble learning using softmax. The purpose of this research was to improve the accuracy of the pneumonia classification model. This study proposes a classification model with a meta-ensemble approach using five classification algorithms: Xception, ResNet15V2, InceptionV3, VGG16, and VGG19. The ensemble stage used two different concepts: the first ensemble level combined the outputs of the Xception, ResNet15V2, and InceptionV3 algorithms, and its output was then reused in the following learning process, combined with the outputs of the remaining algorithms, VGG16 and VGG19. This process is called ensemble level two. The classification algorithm used at this stage is the same as at the previous stage, using KNN as the meta-classifier. Based on the experiments, the model proposed in this study has better accuracy than the others, with a test accuracy of 98.272%. This research could help doctors as a recommendation tool to make more accurate and timely diagnoses, speeding up the treatment process and reducing the risk of complications.
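The two-level softmax meta-ensemble described above might be assembled roughly as follows; the use of concatenated predicted probabilities as meta-features and the value of k are illustrative assumptions, not the authors' exact procedure:

```python
# Illustrative sketch of the two-level softmax meta-ensemble described
# above. Base models emit class probabilities; level one stacks three of
# them under a KNN meta-learner, level two stacks that output with the
# two remaining models. Shapes, k, and the exact stacking are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def stack_probs(prob_list):
    """Concatenate per-model softmax outputs into one feature matrix."""
    return np.concatenate(prob_list, axis=1)

def fit_meta(prob_list, y, k=5):
    return KNeighborsClassifier(n_neighbors=k).fit(stack_probs(prob_list), y)

# p_xcep, p_resnet, p_incep, p_vgg16, p_vgg19 would be (n, n_classes)
# softmax outputs of the five backbones on held-out training folds:
# level1 = fit_meta([p_xcep, p_resnet, p_incep], y_train)
# p_level1 = level1.predict_proba(stack_probs([p_xcep, p_resnet, p_incep]))
# level2 = fit_meta([p_level1, p_vgg16, p_vgg19], y_train)
```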
31

Lim, Wee Sheng, Ahmad Fakhri Ab. Nasir, Mohd Azraai Mohd Razman, Anwar P. P. Abdul Majeed, Nur Shazwani Kamarudin, and M. Zulfahmi Toh. "Traffic Sign Classification Using Transfer Learning: An Investigation of Feature-Combining Model." MEKATRONIKA 3, no. 2 (July 30, 2021): 37–41. http://dx.doi.org/10.15282/mekatronika.v3i2.7346.

Abstract:
A traffic sign classification system is a technology that helps drivers recognise traffic signs, hence reducing accidents. Many types of learning models have been applied to this technology recently. However, the deployment of such models towards image classification and object detection is known to be non-trivial. Transfer Learning (TL) has been demonstrated to be a powerful tool in the extraction of essential features and can save a lot of training time. Besides, feature-combining models have exhibited great performance with the TL method in many applications. Nonetheless, the utilisation of such methods towards traffic sign classification applications has not yet been evaluated. The present study aims to exploit and investigate the effectiveness of transfer learning feature-combining models, particularly to classify traffic signs. The images were gathered from the GTSRB dataset, which consists of 10 different types of traffic signs, i.e., warning, stop, repair, no entry, traffic light, turn right, speed limit (80 km/h), speed limit (50 km/h), speed limit (60 km/h), and turn left sign boards. A total of 7000 images were then split with a 70:30 train-test ratio using a stratified method. The VGG16 and VGG19 TL-feature models were combined with two classifiers, Random Forest (RF) and Neural Network. In summary, six different pipelines were trained and tested. From the results obtained, the best pipeline was VGG16+VGG19 with the RF classifier, which was able to yield an average classification accuracy of 0.9838. The findings showed that the feature-combining model classifies the traffic signs much better than the single TL-feature model. The investigation would be useful for traffic sign classification applications, i.e., for ADAS systems.
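A feature-combining pipeline of the kind evaluated here, VGG16 and VGG19 descriptors concatenated and fed to a random forest, can be sketched as follows; the pooling choice and forest size are assumptions:

```python
# Sketch of the feature-combining pipeline: VGG16 and VGG19 descriptors
# are extracted separately, concatenated, and classified with a random
# forest. Pooling choice and forest size are assumptions.
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19
from sklearn.ensemble import RandomForestClassifier

vgg16 = VGG16(weights="imagenet", include_top=False, pooling="avg",
              input_shape=(224, 224, 3))
vgg19 = VGG19(weights="imagenet", include_top=False, pooling="avg",
              input_shape=(224, 224, 3))

def combined_features(images):
    """Concatenate the two 512-d descriptors into (n, 1024) features."""
    return np.concatenate([vgg16.predict(images, verbose=0),
                           vgg19.predict(images, verbose=0)], axis=1)

# With a stratified 70:30 split of the 7000 sign images:
# rf = RandomForestClassifier(n_estimators=200)
# rf.fit(combined_features(X_train), y_train)
# print(rf.score(combined_features(X_test), y_test))
```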
32

Shukla, Ratnesh Kumar, Arvind Kumar Tiwari, and Ashish Kumar Jha. "An Efficient Approach of Face Detection and Prediction of Drowsiness Using SVM." Mathematical Problems in Engineering 2023 (April 17, 2023): 1–12. http://dx.doi.org/10.1155/2023/2168361.

Abstract:
This article investigates an issue of road safety and a method for detecting drowsiness in images. More fatal accidents may be averted if this technology accurately detects fatigued drivers, and the proposed models provide a quick response by recognising the driver's state of falling asleep. The following drowsiness models are used for the possible eye-state classifications: VGG16, VGG19, ResNet50, ResNet101, and MobileNetV2. The absence of a readily available and trustworthy eye dataset is perceived acutely in the realm of eye closure detection. On extracting the deep features of faces, VGG16 achieved 98.68% accuracy, VGG19 98.74%, ResNet50 65.69%, ResNet101 95.77%, and MobileNetV2 96.00% on the proposed dataset. The put-forth model using a support vector machine (SVM) has been used to evaluate the several models, and results in terms of loss function and accuracy have been obtained. On the proposed dataset, 99.85% accuracy in detecting facial expressions has been achieved. These experimental results show that the eye closure estimation has higher accuracy and a cheap processing cost, demonstrating the capability of the proposed framework for drowsiness detection.
33

Sammulal, P., D. Manju, and M. Seetha. "Early Action Prediction Using VGG16 Model and Bidirectional LSTM." Information Technology in Industry 9, no. 1 (March 10, 2021): 666–72. http://dx.doi.org/10.17762/itii.v9i1.185.

Abstract:
Action prediction plays a key role where an expected action needs to be identified before the action is completely performed. Prediction means inferring a potential action at its early stage, before it fully occurs. This paper emphasizes early action prediction, to predict an action before it happens. In real-time scenarios, early prediction can be crucial and has many applications, such as automated driving systems, healthcare, video surveillance, and other scenarios where a proactive action is needed before the situation goes out of control. The VGG16 model, a convolutional neural network 16 layers deep, is used for the early action prediction. Besides its capability of classifying objects in frames, the free availability of its model weights enhances its usefulness; the weights are preferred for use in different applications and models. The VGG16 model, along with the bidirectional structure of the LSTM, enables the network to use both backward and forward information at every time step. The proposed approach improved accuracy at observation ratios ranging from 0.1 to 1.0 compared with the GAN model.
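The VGG16-plus-bidirectional-LSTM arrangement described above might be wired up as in this sketch; sequence length, LSTM width, and class count are illustrative assumptions, not the paper's settings:

```python
# Sketch of the VGG16 + bidirectional LSTM arrangement: VGG16 encodes
# each frame, and a Bidirectional LSTM reads the partial frame sequence.
# Sequence length, LSTM width, and class count are assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_CLASSES = 16, 10  # assumed clip length and action count

cnn = VGG16(weights="imagenet", include_top=False, pooling="avg",
            input_shape=(224, 224, 3))
cnn.trainable = False  # reuse the freely available pre-trained weights

model = models.Sequential([
    # Apply the frame encoder to every time step of the clip.
    layers.TimeDistributed(cnn, input_shape=(NUM_FRAMES, 224, 224, 3)),
    # Forward and backward passes supply context from both directions.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```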
34

Dubey, Arun Kumar, and Vanita Jain. "Automatic facial recognition using VGG16 based transfer learning model." Journal of Information and Optimization Sciences 41, no. 7 (September 8, 2020): 1589–96. http://dx.doi.org/10.1080/02522667.2020.1809126.

35

Velasco, Jessica S., Jomer V. Catipon, Edmund G. Monilar, Villamor M. Amon, Glenn C. Virrey, and Lean Karlo S. Tolentino. "Classification of Skin Disease Using Transfer Learning in Convolutional Neural Networks." International Journal of Emerging Technology and Advanced Engineering 13, no. 4 (April 5, 2023): 1–7. http://dx.doi.org/10.46338/ijetae0423_01.

Abstract:
Automatic classification of skin disease plays an important role in healthcare, especially in dermatology. Dermatologists can determine different skin diseases with the help of an Android device and the use of artificial intelligence. Deep learning requires a lot of time to train due to the number of sequential layers and the input data involved. A powerful computer with a graphics processing unit is an ideal approach to the training process due to its parallel processing capability. This study gathered images of 7 types of skin disease prevalent in the Philippines for a skin disease classification system. There are 3400 images composed of different skin diseases, like chicken pox, acne, eczema, pityriasis rosea, psoriasis, tinea corporis, and vitiligo, that were used for training and testing of different convolutional network models. This study applied transfer learning to skin disease classification using pre-trained weights from different convolutional neural network models such as VGG16, VGG19, MobileNet, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, DenseNet201, and NASNetMobile. The MobileNet model achieved the highest accuracy, 94.1%, and the VGG16 model achieved the lowest accuracy, 44.1%.
36

Verma, Yash, Madhulika Bhatia, Poonam Tanwar, Shaveta Bhatia, and Mridula Batra. "Automatic video censoring system using deep learning." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6744. http://dx.doi.org/10.11591/ijece.v12i6.pp6744-6755.

Abstract:
Due to the extensive use of video-sharing platforms and services, the amount of video content on the web has become massive. This abundance makes it difficult to control what kind of content a video may contain. Beyond telling whether content is suitable for children and sensitive viewers, it is also important to figure out which parts contain such content, so that parts that would be discarded in a simple broad analysis can be preserved. To tackle this problem, popular deep learning image models were compared: MobileNetV2, Xception, InceptionV3, VGG16, VGG19, ResNet101, and ResNet50, to find the one most suitable for the required application. A system was also developed that automatically censors inappropriate content, such as violent scenes, with the help of deep learning. The system uses transfer learning with the VGG16 model. The experiments suggested that the model shows excellent performance for the automatic censoring application and could also be used in other similar applications.
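The frame-level censoring loop such a system implies could look like the following hedged sketch using OpenCV and Keras; the model file, single-sigmoid output, threshold, and blurring choice are hypothetical, not the authors' implementation.

```python
# Per-frame censoring: a fine-tuned VGG16 binary classifier flags violent
# frames, which are then blurred before being written back out.
import cv2
import tensorflow as tf

model = tf.keras.models.load_model("vgg16_violence.h5")  # assumed artifact

cap = cv2.VideoCapture("input.mp4")
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("censored.mp4", fourcc, cap.get(cv2.CAP_PROP_FPS),
                      (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                       int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224))[None].astype("float32")
    x = tf.keras.applications.vgg16.preprocess_input(x)
    p_violent = float(model.predict(x, verbose=0)[0][0])  # assumes sigmoid head
    if p_violent > 0.5:                                   # censor flagged frames
        frame = cv2.GaussianBlur(frame, (51, 51), 0)
    out.write(frame)

cap.release()
out.release()
```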
37

Norelyaqine, Abderrahim, Rida Azmi, and Abderrahim Saadane. "Architecture of Deep Convolutional Encoder-Decoder Networks for Building Footprint Semantic Segmentation." Scientific Programming 2023 (April 25, 2023): 1–15. http://dx.doi.org/10.1155/2023/8552624.

Abstract:
Building extraction from high-resolution aerial images is critical in geospatial applications such as telecommunications, dynamic urban monitoring, updating geographic databases, urban planning, disaster monitoring, and navigation. Automatic building extraction is a difficult task because buildings in different places have varied spectral and geometric qualities, so traditional image processing approaches are insufficient for autonomous building extraction from high-resolution aerial imagery. Automatic object extraction from high-resolution images has instead been achieved using semantic segmentation and deep learning models, which have become increasingly important in recent years. In this study, the U-Net model, originally designed for biomedical image analysis, was used for building extraction, with its encoder part replaced by ResNet50, VGG19, VGG16, DenseNet169, or Xception. Three further models were implemented to benchmark the studied model: PSPNet, FPN, and LinkNet. Performance analysis using intersection over union (IoU) showed that U-Net with the VGG16 encoder gives the best results, with a high IoU score of 83.06%. This research examines the effectiveness of these four approaches for extracting buildings from high-resolution aerial data.
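One plausible way to reproduce the U-Net-with-VGG16-encoder configuration is with the segmentation_models package, as sketched below; the package choice, loss, and IoU-metric wiring are assumptions rather than the paper's actual code.

```python
# U-Net with a pre-trained VGG16 encoder for binary building-footprint
# segmentation, evaluated with intersection over union (IoU).
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"   # select the tf.keras backend
import segmentation_models as sm

model = sm.Unet(backbone_name="vgg16",
                encoder_weights="imagenet",      # pre-trained encoder
                classes=1, activation="sigmoid")

model.compile(optimizer="adam",
              loss=sm.losses.bce_jaccard_loss,   # BCE + IoU surrogate
              metrics=[sm.metrics.iou_score])    # intersection over union
# model.fit(train_images, train_masks, epochs=50, validation_data=(...))
```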
38

Othman, Kamal, and Ahmad Rad. "An Indoor Room Classification System for Social Robots via Integration of CNN and ECOC." Applied Sciences 9, no. 3 (January 30, 2019): 470. http://dx.doi.org/10.3390/app9030470.

Abstract:
The ability to classify rooms in a home is one of many attributes that are desired for social robots. In this paper, we address the problem of indoor room classification via several convolutional neural network (CNN) architectures, i.e., VGG16, VGG19, & Inception V3. The main objective is to recognize five indoor classes (bathroom, bedroom, dining room, kitchen, and living room) from a Places dataset. We considered 11600 images per class and subsequently fine-tuned the networks. The simulation studies suggest that cleaning the disparate data produced much better results in all the examined CNN architectures. We report that VGG16 & VGG19 fine-tuned models with training on all layers produced the best validation accuracy, with 93.29% and 93.61% on clean data, respectively. We also propose and examine a combination model of CNN and a multi-binary classifier referred to as error correcting output code (ECOC) with the clean data. The highest validation accuracy of 15 binary classifiers reached up to 98.5%, where the average of all classifiers was 95.37%. CNN and CNN-ECOC, and an alternative form called CNN-ECOC Regression, were evaluated in real-time implementation on a NAO humanoid robot. The results show the superiority of the combination model of CNN and ECOC over the conventional CNN. The implications and the challenges of real-time experiments are also discussed in the paper.
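The CNN-ECOC combination can be prototyped with scikit-learn's error-correcting output code wrapper, as in this hedged sketch; the random features stand in for CNN activations, and with five classes a code_size of 3 yields the 15 binary classifiers mentioned in the abstract.

```python
# Sketch of the CNN + ECOC idea: deep features (elided here) are fed to an
# error-correcting-output-code ensemble of binary classifiers.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 512))        # stand-in for CNN feature vectors
y = rng.integers(0, 5, size=500)       # 5 rooms: bath/bed/dining/kitchen/living

# code_size is a multiplier on the class count: 3 * 5 = 15 binary classifiers.
ecoc = OutputCodeClassifier(LinearSVC(), code_size=3, random_state=0)
ecoc.fit(X, y)
print("train accuracy:", ecoc.score(X, y))
```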
39

Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models." International Journal of Environmental Research and Public Health 19, no. 22 (November 21, 2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.

Abstract:
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, the patient’s panoramic radiographic or cone beam computed tomography (CBCT) images are used for implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep learning-based model that detects the position of missing teeth on a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without applying the segmentation technique. For the normal teeth class, the precision of the proposed pretrained DL models was above 0.90. Moreover, the experimental results showed the superiority of DenseNet169, with a precision of 0.98. The other models, MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet, obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and classification of missing tooth regions with an accuracy of 89%. The use of this model may therefore represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
40

Pal, Saikat Sundar, Prithwish Raymahapatra, Soumyadeep Paul, Sajal Dolui, Avijit Kumar Chaudhuri, and Sulekha Das. "A Novel Brain Tumor Classification Model Using Machine Learning Techniques." International Journal of Engineering Technology and Management Sciences 7, no. 2 (2023): 87–98. http://dx.doi.org/10.46647/ijetms.2023.v07i02.011.

Abstract:
The objective of this research work is to classify brain tumor images into 4 different classes using a Convolutional Neural Network (CNN), a deep learning method, with the VGG16 architecture. The four classes are pituitary, glioma, meningioma, and no tumor. The dataset used for this research is a publicly available MRI brain tumor dataset with 7023 images. The methodology includes data pre-processing, model building, and evaluation. The dataset is pre-processed by resizing the images to 64x64 and normalizing the pixel values. The VGG16 architecture is used to build the CNN model, which is trained on the pre-processed data for 10 epochs with a batch size of 64. The model is evaluated using the area under the receiver operating characteristic curve (AUC). The results show that the CNN model with the VGG16 architecture achieves an overall AUC of 0.92 for classifying brain tumor images into the four classes, with per-class AUCs of 0.90 for meningioma, 0.91 for pituitary, 0.93 for glioma, and 0.89 for no tumor. In conclusion, the CNN model with the VGG16 architecture is an effective approach for classifying brain tumor images into multiple classes. The model achieves high accuracy in identifying different types of brain tumors, which could potentially aid in early diagnosis and treatment of brain tumors.
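A minimal Keras sketch of the stated setup (64x64 inputs, VGG16 backbone, four classes, 10 epochs, batch size 64, AUC evaluation); the head layers and the random stand-in data are assumptions.

```python
# VGG16-based 4-class classifier with the training settings from the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(64, 64, 3))
base.trainable = False                       # transfer learning: frozen backbone

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(4, activation="softmax"),   # pituitary/glioma/meningioma/none
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

X = np.random.rand(128, 64, 64, 3)           # pixel values normalized to [0, 1]
y = tf.keras.utils.to_categorical(np.random.randint(0, 4, 128), 4)
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2)
```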
41

Adnan, Muhammad, Muhammad Sardaraz, Muhammad Tahir, Muhammad Najam Dar, Mona Alduailij, and Mai Alduailij. "A Robust Framework for Real-Time Iris Landmarks Detection Using Deep Learning." Applied Sciences 12, no. 11 (June 3, 2022): 5700. http://dx.doi.org/10.3390/app12115700.

Abstract:
Iris detection and tracking plays a vital role in human–computer interaction and has become an emerging field for researchers in the last two decades. Typical applications such as virtual reality, augmented reality, gaze detection for customer behavior, controlling computers, and handheld embedded devices need accurate and precise detection of iris landmarks. Significant improvement has been made so far in iris detection and tracking. However, detecting iris landmarks in real time with high accuracy is still a challenging and computationally expensive task, compounded by the lack of a publicly available dataset of annotated iris landmarks. This article presents a benchmark dataset and a robust framework for localizing key landmark points to extract the iris with better accuracy. A number of training sessions were conducted for MobileNetV2, ResNet50, VGG16, and VGG19 over an iris landmarks dataset, with ImageNet weights used for model initialization. The Mean Absolute Error (MAE), model loss, and model size are measured to evaluate and validate the proposed model. Analysis of the results shows that the proposed model outperforms the other methods on the selected parameters: the MAEs of MobileNetV2, ResNet50, VGG16, and VGG19 are 0.60, 0.33, 0.35, and 0.34; the average decrease in size is 60%; and the average reduction in response time is 75% compared to the other models. We collected images of eyes and annotated them with the help of the proposed algorithm, and the generated dataset has been made publicly available for research purposes. The contribution of this research is a model of smaller size with real-time, accurate prediction of iris landmarks, along with the provided dataset of iris landmark annotations.
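A hedged sketch of one of the described training setups: MobileNetV2 initialized with ImageNet weights and topped with a regression head trained under mean absolute error; the number of landmark coordinates is an assumption.

```python
# Iris-landmark regressor: pre-trained MobileNetV2 backbone + MAE-trained head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COORDS = 16                               # hypothetical: 8 (x, y) points

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         pooling="avg", input_shape=(224, 224, 3))

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(NUM_COORDS, activation="sigmoid")(x)  # normalized coords

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae",   # MAE, as reported in the paper
              metrics=["mae"])
```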
42

Nazhirin, Ahmad Fudolizaenun, Muhammad Rafi Muttaqin, and Teguh Iman Hermanto. "KLASIFIKASI KONDISI BAN KENDARAAN MENGGUNAKAN ARSITEKTUR VGG16." INTI Nusa Mandiri 18, no. 1 (August 1, 2023): 1–12. http://dx.doi.org/10.33480/inti.v18i1.4270.

Abstract:
Tyres are a key vehicle component: they damp vibration from uneven road surfaces, protect the wheels from wear, provide stability between the vehicle and the ground, and help improve acceleration for easy movement while driving. Because they are in constant use, tyres can suffer damage such as cracking. Cracks can be triggered by factors such as age or the condition of the roads travelled. Tyre cracks are currently detected conventionally: users inspect the tyre directly to judge whether it is in good condition or cracked. Such inspection matters because it safeguards tyre quality and rider safety, but the conventional method has weaknesses: the user must have good vision and the ability to distinguish normal from cracked tyres, and because it relies on human labour it carries a risk of human error that can hinder the identification of tyre cracks. Addressing this problem, this study develops a deep learning model that classifies cracked tyres using the VGG16 architecture. The model was trained under 8 scenarios with different numbers of epochs to find the best parameters. Of the 8 scenarios, scenarios 1, 3, and 4 performed best, reaching 98% accuracy in model testing.
43

Upendra, V., and R. Puviarasi. "Pancreatic Cancer Prediction Using Hierarchical Convolutional Neural Network and Visual Geometry Group16 CNN Approach on Accuracy and Specificity Performance Measures." ECS Transactions 107, no. 1 (April 24, 2022): 11927–36. http://dx.doi.org/10.1149/10701.11927ecst.

Abstract:
The main aim of the work is to measure the accuracy, sensitivity, and specificity of pancreatic cancer detection. Materials and methods: A dataset of 7390 images obtained from Kaggle was used for training (80%) and testing (20%) of the predictive model developed in Matlab; statistical analysis was done using SPSS software. A hierarchical convolutional neural network (HCNN) was used to develop the model, which was compared with a visual geometry group (VGG16) based model. Result: The predictive model developed using the HCNN algorithm shows an improvement, with accuracy 94.822 ± 1.4705, specificity 88.4980 ± 1.78406, and sensitivity 90.0460 ± 1.79226, over the VGG16 model with accuracy 94.3760, specificity 88.3920 ± 0.1663, and sensitivity 88.9260 ± 0.5808, at significance levels of 0.437, 0.497, and 0.165. Conclusion: The outcome of the study confirms that the HCNN-based model appears to be of higher accuracy than the VGG16-based model.
44

Ayumi, Vina. "Perbandingan Model Transfer Learning Untuk Klasifikasi Citra Agricultural Crop." JSAI (Journal Scientific and Applied Informatics) 5, no. 3 (November 26, 2022): 214–22. http://dx.doi.org/10.36085/jsai.v5i3.3612.

Abstract:
Crop classification has been applied for many years as one of the main components of agricultural monitoring, and classifying crop types is an important technique for providing that information. Against this background, this study performs agricultural crop type classification from digital images. The crop image dataset consists of 40+ images for each of the classes maize, wheat, jute, rice, and sugarcane. The dataset was then augmented to produce 159+ images per class, using horizontal flips, rotations, and horizontal and vertical shifts. The test data comprised 51 images, with 10 images per class. Based on the experiments, the highest validation accuracy, 96.52%, was obtained with VGG16, while VGG19 achieved 94.03%, ResNet50 41.79%, InceptionV3 94.53%, and EfficientNetB0 20.40%.
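The augmentation recipe mentioned in the abstract (horizontal flip, rotation, horizontal and vertical shift) maps directly onto Keras' ImageDataGenerator, as in this sketch; the directory layout and parameter magnitudes are assumptions.

```python
# Augmentation pipeline matching the described transforms, feeding a
# folder-per-class crop-image dataset.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    horizontal_flip=True,      # horizontal flip
    rotation_range=30,         # rotation
    width_shift_range=0.1,     # horizontal shift
    height_shift_range=0.1,    # vertical shift
)

train_gen = augmenter.flow_from_directory(
    "crops/train",             # assumed layout: one subfolder per class
    target_size=(224, 224), batch_size=32, class_mode="categorical")
```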
45

Ramayanti, Desi, Sri Dianing Asri, and Lionie Lionie. "Implementasi Model Arsitektur VGG16 dan MobileNetV2 Untuk Klasifikasi Citra Kupu-Kupu." JSAI (Journal Scientific and Applied Informatics) 5, no. 3 (November 26, 2022): 182–87. http://dx.doi.org/10.36085/jsai.v5i3.2864.

Abstract:
To counter the decline of butterfly populations, strategies are needed to monitor the number of individuals and species in each ecosystem. Technology can assist in this process, for example through the development of e-Butterfly systems that collect images of butterfly species from various regions so that the surviving species and their habitats can be identified. Butterfly species identification can be performed with computer assistance through a series of processes from the fields of machine learning and image processing. The aim of this study is to measure the performance of the VGG16 and MobileNetV2 architectures for classifying butterfly images based on features extracted from those images. The research dataset contains 4955 images labelled with 50 butterfly species, sized 224 x 224 x 3. The best accuracy, 96%, was obtained by MobileNetV2 without fine-tuning, followed by VGG16 with fine-tuning and MobileNetV2 with fine-tuning, while the lowest test accuracy was obtained by VGG16 without fine-tuning. Higher precision, recall, F1-score, and Cohen's kappa were also obtained by MobileNetV2 without fine-tuning, indicating that it was more balanced in per-class accuracy.
46

Nillmani, Pankaj K. Jain, Neeraj Sharma, Mannudeep K. Kalra, Klaudija Viskovic, Luca Saba, and Jasjit S. Suri. "Four Types of Multiclass Frameworks for Pneumonia Classification and Its Validation in X-ray Scans Using Seven Types of Deep Learning Artificial Intelligence Models." Diagnostics 12, no. 3 (March 7, 2022): 652. http://dx.doi.org/10.3390/diagnostics12030652.

Abstract:
Background and Motivation: The novel coronavirus causing COVID-19 is exceptionally contagious and highly mutative, decimating human health and life, as well as the global economy, through the consistent evolution of new pernicious variants and outbreaks. The reverse transcriptase polymerase chain reaction currently used for diagnosis has major limitations. Furthermore, multiclass lung X-ray classification systems with viral, bacterial, and tubercular classes, including COVID-19, are not reliable. Thus, there is a need for a robust, fast, cost-effective, and easily available diagnostic method. Method: Artificial intelligence (AI) has been shown to revolutionize all walks of life, particularly medical imaging. This study proposes deep learning AI-based automatic multiclass detection and classification of pneumonia from chest X-ray images that are readily available and highly cost-effective. The study designed and applied seven highly efficient pre-trained convolutional neural networks, namely VGG16, VGG19, DenseNet201, Xception, InceptionV3, NasnetMobile, and ResNet152, for classification of up to five classes of pneumonia. Results: The database consisted of 18,603 scans with two, three, and five classes. The best results were obtained using DenseNet201, VGG16, and VGG16, respectively, with accuracies of 99.84%, 96.7%, and 92.67%; sensitivities of 99.84%, 96.63%, and 92.70%; specificities of 99.84%, 96.63%, and 92.41%; and AUCs of 1.0, 0.97, and 0.92 (p < 0.0001 for all). Our system outperformed existing methods by 1.2% for the five-class model. The online system takes <1 s while demonstrating reliability and stability. Conclusions: Deep learning AI is a powerful paradigm for multiclass pneumonia classification.
47

Ali, Abd-elmegeid Amin, Iman jebur Ali, and Hassan Shaban Hassan. "Efficient Net: A Deep Learning Framework for Active Fire and Smoke Detection." Journal of Image Processing and Intelligent Remote Sensing, no. 32 (February 1, 2023): 1–10. http://dx.doi.org/10.55529/jipirs.32.1.10.

Abstract:
In this paper, we propose a video-based model for fire and smoke detection: video is processed frame by frame, and fire and smoke are detected in each frame. The model was then developed further by increasing the single-image fire detection rate and using a pre-trained network, and real-time detection was verified to complete within 0.1 second. An AI technique based on deep learning (EfficientNet) was created to detect smoke and fire. It is more stable and faster than technologies currently in use, such as VGG16, VGG19, and ResNet; the comparison was made against ResNet, as the strongest of these, and the results indicated that the proposed technique outperformed it.
48

Rajasenbagam, T., and S. Jeyanthi. "Semantic Content-Based Image Retrieval System Using Deep Learning Model for Lung Cancer CT Images." Journal of Medical Imaging and Health Informatics 11, no. 10 (October 1, 2021): 2675–82. http://dx.doi.org/10.1166/jmihi.2021.3859.

Abstract:
When compared to other kinds of cancer, lung cancer is more prevalent worldwide. While computerised tomography (CT) images capture the information required for lung cancer diagnosis, clinicians must spend time analysing various sections of the CT images. With the exponential rise of CT scans and the increasing prevalence of lung cancer, an effective method for quickly analysing CT images can help radiologists and doctors plan treatment early. Compared to simply categorising a patient's type of lung cancer, retrieving the most similar CT image already stored in the database gives additional information about the patient. This is accomplished through content-based image retrieval. Deep learning models extract more meaningful features than typical machine learning techniques, so this study analyses well-known pre-trained deep learning models: the VGG16 deep neural network and the ResNet residual neural network. Among these, the VGG16 model performs best for the chosen hyperparameters. The features obtained by training the VGG16 model on the pre-processed images are then processed by a KNN algorithm to retrieve images similar to the input query image. By merging the VGG16 and KNN algorithms, this research produces a content-based image retrieval system that achieves an accuracy of 0.97 and a recall of 0.96.
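A sketch of the VGG16 + KNN retrieval idea using scikit-learn's NearestNeighbors; the pooling choice, distance metric, and stand-in data are assumptions rather than the paper's configuration.

```python
# Content-based image retrieval: embed images with VGG16, index the
# descriptors, and return the closest stored images for a query.
import numpy as np
import tensorflow as tf
from sklearn.neighbors import NearestNeighbors

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   pooling="avg")

def embed(images):
    """images: float array (N, 224, 224, 3) in [0, 255] -> (N, 512) descriptors."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return base.predict(x, verbose=0)

database = np.random.rand(200, 224, 224, 3) * 255     # stand-in CT images
index = NearestNeighbors(n_neighbors=5, metric="euclidean")
index.fit(embed(database))

query = np.random.rand(1, 224, 224, 3) * 255
dist, idx = index.kneighbors(embed(query))
print("retrieved image indices:", idx[0])
```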
49

Dzierżak, Róża, and Zbigniew Omiotek. "Application of Deep Convolutional Neural Networks in the Diagnosis of Osteoporosis." Sensors 22, no. 21 (October 26, 2022): 8189. http://dx.doi.org/10.3390/s22218189.

Abstract:
The aim of this study was to assess the possibility of using deep convolutional neural networks (DCNNs) to develop an effective method for diagnosing osteoporosis based on CT images of the spine. The research material included the CT images of L1 spongy tissue belonging to 100 patients (50 healthy and 50 diagnosed with osteoporosis). Six pre-trained DCNN architectures with different topological depths (VGG16, VGG19, MobileNetV2, Xception, ResNet50, and InceptionResNetV2) were used in the study. The best results were obtained for the VGG16 model characterised by the lowest topological depth (ACC = 95%, TPR = 96%, and TNR = 94%). A specific challenge during the study was the relatively small (for deep learning) number of observations (400 images). This problem was solved using DCNN models pre-trained on a large dataset and a data augmentation technique. The obtained results allow us to conclude that the transfer learning technique yields satisfactory results during the construction of deep models for the diagnosis of osteoporosis based on small datasets of CT images of the spine.
50

Krishnamoorthy, N., P. Jayanthi, T. Kumaravel, V. A. Sundareshwar, and R. Syed Jamal Harris. "Scalp disease analysis using deep learning models." Applied and Computational Engineering 2, no. 1 (March 22, 2023): 1003–9. http://dx.doi.org/10.54254/2755-2721/2/20220654.

Abstract:
Alopecia areata, folliculitis, hair loss, and dandruff are common scalp hair disorders caused by nutritional imbalances, stress, and environmental pollution. Specialized treatments, such as hair physiotherapy, have arisen to address these issues. This paper proposes a deep learning model that can analyse scalp diseases with high accuracy; using this model, scalp diseases can be predicted simply from an image taken with a mobile phone, making it easier to select appropriate treatment. The paper presents an advanced classification model for predicting scalp disease. More than 1100 images are used as the training dataset, a CNN is used for classification, and high accuracy is achieved using VGG16, VGG19, and MobileNetV2.