Journal articles on the topic "InceptionResNetV2"

To see other types of publications on this topic, follow the link: InceptionResNetV2.

Format your sources in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "InceptionResNetV2".

Next to every item in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these details are provided in the metadata.

Browse journal articles across many disciplines and organize your bibliography correctly.

1

Ullah, Naeem, Javed Ali Khan, Mohammad Sohail Khan, Wahab Khan, Izaz Hassan, Marwa Obayya, Noha Negm, and Ahmed S. Salama. "An Effective Approach to Detect and Identify Brain Tumors Using Transfer Learning." Applied Sciences 12, no. 11 (June 2, 2022): 5645. http://dx.doi.org/10.3390/app12115645.

Abstract:
Brain tumors are considered among the most serious, prominent and life-threatening diseases globally, causing thousands of deaths every year because of the rapid growth of tumor cells. Therefore, timely analysis and automatic detection of brain tumors are required to save lives. Recently, deep transfer learning (TL) approaches have been widely used to detect and classify the three most prominent types of brain tumors, i.e., glioma, meningioma and pituitary. For this purpose, we employ state-of-the-art pre-trained TL techniques to identify and detect glioma, meningioma and pituitary brain tumors. The aim is to assess the performance of nine pre-trained TL classifiers, i.e., InceptionResNetV2, InceptionV3, Xception, ResNet18, ResNet50, ResNet101, ShuffleNet, DenseNet201 and MobileNetV2, by automatically identifying and detecting brain tumors using a fine-grained classification approach. For this, the TL algorithms are evaluated on a baseline brain tumor classification (MRI) dataset, which is freely available on Kaggle. Additionally, all deep learning (DL) models are fine-tuned with their default values. The fine-grained classification experiment demonstrates that the InceptionResNetV2 TL algorithm performs best, achieving the highest accuracy in detecting and classifying glioma, meningioma and pituitary brain tumors. We achieve 98.91% accuracy, 98.28% precision, 99.75% recall and 99% F-measure with the InceptionResNetV2 TL algorithm, which outperforms the other DL algorithms. Additionally, to validate the performance of the TL classifiers, we compare the efficacy of the InceptionResNetV2 TL algorithm with hybrid approaches, in which convolutional neural networks (CNN) are used for deep feature extraction and a support vector machine (SVM) for classification. The experimental results likewise show that TL algorithms, and InceptionResNetV2 in particular, outperform the state-of-the-art DL algorithms in classifying brain MRI images into glioma, meningioma, and pituitary classes. The hybrid DL approaches used in the experiments are MobileNetV2, DenseNet201, SqueezeNet, AlexNet, GoogLeNet, InceptionV3, ResNet50, ResNet18, ResNet101, Xception, InceptionResNetV2, VGG19 and ShuffleNet.
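
For readers who want to reproduce the basic setup, the sketch below shows fine-tuning a pretrained InceptionResNetV2 on a three-class brain-tumor dataset with tf.keras. The input size, optimizer, learning rate, and dataset pipeline are assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained
# InceptionResNetV2 on a 3-class brain-tumor MRI dataset.
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 3  # glioma, meningioma, pituitary

base = InceptionResNetV2(weights="imagenet", include_top=False,
                         input_shape=(299, 299, 3))
base.trainable = True  # full fine-tuning, as the abstract implies

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# train_ds / val_ds are assumed tf.data pipelines of (image, one-hot label):
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```
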
2

Yazid Aufar, Muhammad Helmy Abdillah, and Jiki Romadoni. "Web-based CNN Application for Arabica Coffee Leaf Disease Prediction in Smart Agriculture." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 7, no. 1 (February 2, 2023): 71–79. http://dx.doi.org/10.29207/resti.v7i1.4622.

Abstract:
In the agriculture industry, plant diseases pose difficulties, particularly for Arabica coffee production. A first step in eliminating and treating infections to avoid crop damage is recognizing ailments on Arabica coffee leaves. Convolutional neural networks (CNN) are rapidly advancing, making it possible to diagnose Arabica coffee leaf damage without a specialist's help. CNNs learn features adaptively through backpropagation using stacked layers, including convolutional and pooling layers. This study aims to optimize and increase the accuracy of Arabica coffee leaf disease classification utilizing the neural network architectures ResNet50, InceptionResNetV2, MobileNetV2, and DenseNet169. Additionally, this research presents an interactive web platform integrated with the Arabica coffee leaf disease prediction system. In this research, 5000 images are divided into five classes—Phoma, Rust, Cercospora, healthy, and Miner—to assess the efficacy of the CNN architectures in classifying images of Arabica coffee leaf disease. The ratio of training, validation, and testing data is 80:10:10. In the test results, the InceptionResNetV2 and DenseNet169 architectures had the highest accuracy, at 100%, followed by the MobileNetV2 architecture at 99% and the ResNet50 architecture at 59%. Even though MobileNetV2 is not more accurate than InceptionResNetV2 and DenseNet169, MobileNetV2 is the smallest of the three models and was therefore chosen for the web application development. The system accurately identified Arabica coffee leaf diseases and advised treatment, as shown by the implementation outcomes.
3

Jiang, Kaiyuan, Jiawei Zhang, Haibin Wu, Aili Wang, and Yuji Iwahori. "A Novel Digital Modulation Recognition Algorithm Based on Deep Convolutional Neural Network." Applied Sciences 10, no. 3 (February 9, 2020): 1166. http://dx.doi.org/10.3390/app10031166.

Abstract:
The modulation recognition of digital signals under non-cooperative conditions is an important research problem. With the rapid development of artificial intelligence technology, deep learning theory is increasingly being applied to the field of modulation recognition. In this paper, a novel digital signal modulation recognition algorithm is proposed that combines the InceptionResNetV2 network with transfer adaptation, called InceptionResNetV2-TA. First, the received signal is preprocessed to generate a constellation diagram. The constellation diagram is then used as the input of the InceptionResNetV2 network to identify different kinds of signals. Transfer adaptation is used for feature extraction, and an SVM classifier identifies the modulation mode of the digital signal. Constellation diagrams of three typical signals, Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK) and 8 Phase Shift Keying (8PSK), were generated for the experiments. When the signal-to-noise ratio (SNR) is 4 dB, the recognition rates of BPSK, QPSK and 8PSK obtained by InceptionResNetV2-TA are 1.0, 0.9966 and 0.9633, respectively, and the recognition rate is 3% higher than that of other algorithms. Compared with traditional modulation recognition algorithms, the experimental results show that the proposed algorithm has higher accuracy for digital signal modulation recognition at low SNR.
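
As a rough illustration of the preprocessing step described above, the snippet below generates noisy M-PSK constellation diagrams and saves them as images suitable as CNN input. The symbol count, figure size, and additive-white-Gaussian-noise model are assumptions, not the paper's exact procedure.

```python
# Sketch: render noisy BPSK/QPSK/8PSK constellation diagrams as images.
import numpy as np
import matplotlib.pyplot as plt

def constellation(mod_order, n_symbols=1000, snr_db=4):
    # Random M-PSK symbols on the unit circle
    phases = 2 * np.pi * np.random.randint(0, mod_order, n_symbols) / mod_order
    symbols = np.exp(1j * phases)
    # Complex AWGN scaled for the requested SNR (unit signal power)
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (np.random.randn(n_symbols)
                                        + 1j * np.random.randn(n_symbols))
    return symbols + noise

for name, m in [("BPSK", 2), ("QPSK", 4), ("8PSK", 8)]:
    rx = constellation(m)
    plt.figure(figsize=(2.99, 2.99))
    plt.scatter(rx.real, rx.imag, s=1)
    plt.axis("off")
    plt.savefig(f"{name}_snr4dB.png", dpi=100)  # image fed to the CNN
    plt.close()
```
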
4

Faruk, Omar, Eshan Ahmed, Sakil Ahmed, Anika Tabassum, Tahia Tazin, Sami Bourouis, and Mohammad Monirujjaman Khan. "A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning." Journal of Healthcare Engineering 2021 (November 25, 2021): 1–10. http://dx.doi.org/10.1155/2021/1002799.

Abstract:
Deep learning has emerged as a promising technique for a variety of elements of infectious disease monitoring and detection, including tuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning model using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by utilizing image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and nontuberculosis cases using transfer learning from their pretrained starting weights. With an F1-score of 99 percent, InceptionResNetV2 had the highest accuracy. This research is more accurate than earlier published work. Additionally, it outperforms all other models in terms of reliability. The suggested approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection.
5

Al-Timemy, Ali H., Laith Alzubaidi, Zahraa M. Mosa, Hazem Abdelmotaal, Nebras H. Ghaeb, Alexandru Lavric, Rossen M. Hazarbassanov, Hidenori Takahashi, Yuantong Gu, and Siamak Yousefi. "A Deep Feature Fusion of Improved Suspected Keratoconus Detection with Deep Learning." Diagnostics 13, no. 10 (May 10, 2023): 1689. http://dx.doi.org/10.3390/diagnostics13101689.

Abstract:
Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused features using Xception and InceptionResNetV2 to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curves (AUC) of 0.99 and an accuracy range of 97–100% to distinguish normal eyes from eyes with subclinical and established KCN. We further validated the model based on an independent dataset with 213 eyes examined in Iraq and obtained AUCs of 0.91–0.92 and an accuracy range of 88–92%. The proposed model is a step toward improving the detection of clinical and subclinical forms of KCN.
6

Cheng, Wen-Chang, Hung-Chou Hsiao, Yung-Fa Huang, and Li-Hua Li. "Combining Classifiers for Deep Learning Mask Face Recognition." Information 14, no. 7 (July 21, 2023): 421. http://dx.doi.org/10.3390/info14070421.

Abstract:
This research proposes a single network model architecture for mask face recognition using the FaceNet training method. Three pre-trained convolutional neural networks of different sizes are combined, namely InceptionResNetV2, InceptionV3, and MobileNetV2. The models are augmented by appending a fully connected network with a SoftMax output layer. We combine triplet loss and categorical cross-entropy loss to optimize the training process. In addition, the learning rate of the optimizer is dynamically updated using a cosine annealing mechanism, which improves the convergence of the model during training. Mask face recognition (MFR) experimental results on a custom MASK600 dataset show that the proposed InceptionResNetV2 and InceptionV3 need only 20 training epochs, and MobileNetV2 only 50, to achieve more than 93% accuracy, exceeding previous MFR works. In addition to reaching a practical accuracy level, the approach shortens model training time and effectively reduces energy costs.
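
A minimal sketch of the dual-objective setup the abstract describes: a shared backbone feeds both an L2-normalized embedding head trained with triplet loss and a softmax head trained with cross-entropy, under a cosine-annealed learning rate. The embedding size, the class count, and the use of TensorFlow Addons' TripletSemiHardLoss are assumptions.

```python
import tensorflow as tf
import tensorflow_addons as tfa  # assumed available for TripletSemiHardLoss

backbone = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg")
features = backbone.output

# Embedding head for the triplet loss (L2-normalized, as FaceNet does)
emb = tf.keras.layers.Dense(128)(features)
emb = tf.keras.layers.Lambda(
    lambda t: tf.math.l2_normalize(t, axis=1), name="embedding")(emb)

# Softmax head for cross-entropy; class count assumed from the dataset name
cls = tf.keras.layers.Dense(600, activation="softmax", name="id")(features)

model = tf.keras.Model(backbone.input, [emb, cls])

# Cosine-annealed learning rate, mirroring the paper's schedule
lr = tf.keras.optimizers.schedules.CosineDecay(1e-3, decay_steps=10_000)
model.compile(
    optimizer=tf.keras.optimizers.Adam(lr),
    loss={"embedding": tfa.losses.TripletSemiHardLoss(),
          "id": "sparse_categorical_crossentropy"},
    loss_weights={"embedding": 1.0, "id": 1.0})
```
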
7

Mahdianpari, Masoud, Bahram Salehi, Mohammad Rezaee, Fariba Mohammadimanesh, and Yun Zhang. "Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery." Remote Sensing 10, no. 7 (July 14, 2018): 1119. http://dx.doi.org/10.3390/rs10071119.

Abstract:
Despite recent advances of deep Convolutional Neural Networks (CNNs) in various computer vision tasks, their potential for classification of multispectral remote sensing images has not been thoroughly explored. In particular, the applications of deep CNNs using optical remote sensing data have focused on the classification of very high-resolution aerial and satellite data, owing to the similarity of these data to the large datasets in computer vision. Accordingly, this study presents a detailed investigation of state-of-the-art deep learning tools for classification of complex wetland classes using multispectral RapidEye optical imagery. Specifically, we examine the capacity of seven well-known deep convnets, namely DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2, for wetland mapping in Canada. In addition, the classification results obtained from deep CNNs are compared with those based on conventional machine learning tools, including Random Forest and Support Vector Machine, to further evaluate the efficiency of the former to classify wetlands. The results illustrate that the full-training of convnets using five spectral bands outperforms the other strategies for all convnets. InceptionResNetV2, ResNet50, and Xception are distinguished as the top three convnets, providing state-of-the-art classification accuracies of 96.17%, 94.81%, and 93.57%, respectively. The classification accuracies obtained using Support Vector Machine (SVM) and Random Forest (RF) are 74.89% and 76.08%, respectively, considerably inferior relative to CNNs. Importantly, InceptionResNetV2 is consistently found to be superior compared to all other convnets, suggesting the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
8

Pohtongkam, Somchai, and Jakkree Srinonchat. "Tactile Object Recognition for Humanoid Robots Using New Designed Piezoresistive Tactile Sensor and DCNN." Sensors 21, no. 18 (September 8, 2021): 6024. http://dx.doi.org/10.3390/s21186024.

Abstract:
A tactile sensor array is a crucial component for applying physical sensing to a humanoid robot. This work focused on developing a palm-size tactile sensor array (56.0 mm × 56.0 mm) for object recognition by a humanoid robot hand. The sensor was based on PCB technology operating on the piezoresistive principle, with a conductive polymer composite sheet as the sensing element and a matrix array of 16 × 16 pixels. The sensitivity of the sensor was evaluated and the sensor was installed on the robot hand. Tactile images from 20 object classes, with resolution enhancement using bicubic interpolation, were used to train and test 19 different DCNNs. InceptionResNetV2 provided superior performance with 91.82% accuracy. However, using a multimodal learning method that combined InceptionResNetV2 and XceptionNet, the highest recognition rate of 92.73% was achieved. Moreover, this recognition rate improved further when object exploration was applied.
9

Mondal, M. Rubaiyat Hossain, Subrato Bharati, and Prajoy Podder. "CO-IRv2: Optimized InceptionResNetV2 for COVID-19 detection from chest CT images." PLOS ONE 16, no. 10 (October 28, 2021): e0259179. http://dx.doi.org/10.1371/journal.pone.0259179.

Abstract:
This paper focuses on the application of deep learning (DL) in the diagnosis of coronavirus disease (COVID-19). The novelty of this work lies in the introduction of the optimized InceptionResNetV2 for COVID-19 (CO-IRv2) method. A part of the CO-IRv2 scheme is derived from the concepts of InceptionNet and ResNet with hyperparameter tuning, while the remaining part is a new architecture consisting of a global average pooling layer, batch normalization, dense layers, and dropout layers. The proposed CO-IRv2 is applied to a new dataset of 2481 computed tomography (CT) images formed by combining two independent datasets. Data resizing and normalization are performed, and training is run for up to 25 epochs. Precision, recall, accuracy, F1-score, and area under the receiver operating characteristic (AUC) curve are used as performance metrics. The effectiveness of three optimizers, Adam, Nadam and RMSProp, is evaluated in classifying suspected COVID-19 patients and normal people. Results show that for CO-IRv2 on CT images, the accuracies obtained with the Adam, Nadam and RMSProp optimizers are 94.97%, 96.18% and 96.18%, respectively. Furthermore, it is shown that for CT images, CO-IRv2 with the Nadam optimizer performs better than existing DL algorithms in the diagnosis of COVID-19 patients. Finally, CO-IRv2 is applied to an X-ray dataset of 1662 images, resulting in a classification accuracy of 99.40%.
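
The classification head the abstract outlines can be sketched as below: an InceptionResNetV2 trunk followed by global average pooling, batch normalization, dense, and dropout layers, trained with the Nadam optimizer. The layer widths and dropout rate are assumptions, not the authors' tuned values.

```python
# Sketch of a CO-IRv2-style head on top of InceptionResNetV2.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(256, activation="relu"),  # width assumed
    tf.keras.layers.Dropout(0.4),                   # rate assumed
    tf.keras.layers.Dense(1, activation="sigmoid"), # COVID-19 vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Nadam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```
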
10

Angurala, Mohit. "Augmented MRI Images for Classification of Normal and Tumors Brain through Transfer Learning Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5s (June 10, 2023): 536–42. http://dx.doi.org/10.17762/ijritcc.v11i5s.7130.

Abstract:
A brain tumor is a severe malignant condition caused by uncontrolled and abnormal cell division. Recent advances in deep learning have aided the health industry in medical imaging for the diagnosis of numerous disorders; convolutional neural networks are the most frequently and widely used deep learning algorithms for visual learning and image recognition. This research performs multi-class classification of brain tumors from images obtained by Magnetic Resonance Imaging (MRI) using pre-trained deep learning models and transfer learning. In the publicly available MRI brain tumor dataset, the tumors identified as glioma, meningioma, and pituitary account for most brain tumors. To ensure the robustness of the suggested method, data acquisition and preprocessing are performed in the first step, followed by data augmentation. Finally, the transfer learning algorithms DenseNet, ResNetV2, and InceptionResNetV2 are applied to find the optimum algorithm based on various parameters, including accuracy, precision, recall, and area under the curve (AUC). The experimental outcomes show that the validation accuracy is high for DenseNet (about 97%), while ResNetV2 and InceptionResNetV2 achieved only 77% and 80%, respectively.
11

Shoaib, Mohamed R., Mohamed R. Elshamy, Taha E. Taha, Adel S. El-Fishawy, and Fathi E. Abd El-Samie. "Efficient Brain Tumor Detection Based on Deep Learning Models." Journal of Physics: Conference Series 2128, no. 1 (December 1, 2021): 012012. http://dx.doi.org/10.1088/1742-6596/2128/1/012012.

Abstract:
Brain tumor is an acute cancerous disease that results from abnormal and uncontrollable cell division. Brain tumors are classified via biopsy, which is normally not done before definitive brain surgery. Recent advances and improvements in deep learning technology have helped the health industry obtain accurate disease diagnoses. In this paper, a Convolutional Neural Network (CNN) approach with image pre-processing is adopted to classify brain Magnetic Resonance (MR) images into four classes: glioma tumor, meningioma tumor, pituitary tumor and normal. We use a transfer learning model, a CNN-based model designed from scratch, a pre-trained InceptionResNetV2 model and a pre-trained InceptionV3 model. The performance of the four proposed models is tested using evaluation metrics including accuracy, sensitivity, specificity, precision, F1-score, Matthew's correlation coefficient, error, kappa and false positive rate. The obtained results show that the first two proposed models are very effective, achieving accuracies of 93.15% and 91.24% for the transfer learning model and the CNN-based BRAIN-TUMOR-net, respectively. The InceptionResNetV2 model achieves an accuracy of 86.80% and the InceptionV3 model achieves an accuracy of 85.34%. A practical implementation of the proposed models is presented.
12

Hasan, Md Kamrul, Tanjum Tanha, Md Ruhul Amin, Omar Faruk, Mohammad Monirujjaman Khan, Sultan Aljahdali, and Mehedi Masud. "Cataract Disease Detection by Using Transfer Learning-Based Intelligent Methods." Computational and Mathematical Methods in Medicine 2021 (December 8, 2021): 1–11. http://dx.doi.org/10.1155/2021/7666365.

Abstract:
One of the most common visual disorders is cataract, which people suffer from as they get older. A cataract is a clouding of the lens of the eye. Blurred vision, faded colors, and difficulty seeing in strong light are the main symptoms of this condition and frequently make a variety of tasks difficult. As a result, early cataract detection and prevention may help to minimize the rate of blindness. This paper is aimed at classifying cataract disease using convolutional neural networks based on a publicly available image dataset. In this study, four different convolutional neural network (CNN) meta-architectures, including InceptionV3, InceptionResNetV2, Xception, and DenseNet121, were applied using the TensorFlow object detection framework. Using InceptionResNetV2, we were able to attain state-of-the-art results in cataract disease detection. This model predicted cataract disease with a training loss of 1.09%, a training accuracy of 99.54%, a validation loss of 6.22%, and a validation accuracy of 98.17% on the dataset. It also has a sensitivity of 96.55% and a specificity of 100%. In addition, the model greatly minimizes training loss while boosting accuracy.
13

Anilkumar, Chunduru, Robbi Jyothsna, Sattaru Vandana Sree, and E. Gothai. "Deep Learning-Based Yoga Posture Specification Using OpenCV and Media Pipe." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 80–86. http://dx.doi.org/10.54254/2755-2721/8/20230085.

Abstract:
Yoga is an age-old discipline that calls for physical postures, mental focus, and deep breathing. Yoga practice can enhance stamina, power, serenity, flexibility, and wellbeing, and yoga is currently a well-liked type of exercise worldwide. The foundation of yoga is good posture: even though yoga offers many health advantages, poor posture can lead to issues including muscle sprains and pains. In the last few years, people have become more interested in working online than in person, and our strategy benefits those who are accustomed to internet life and find it difficult to make time to visit yoga studios. In our system, an image captured by a web camera is used as input; the MediaPipe library first skeletonizes the image, and the model then categorizes the yoga pose. A variety of deep learning models are applied to the inputs obtained from the yoga postures to improve the asana. On non-skeletonized photos, VGG16, InceptionV3, NASNetMobile, YogaConvo2d, and InceptionResNetV2 rank in descending order of validation accuracy. On skeletonized images, in contrast, the proposed YogaConvo2d model reports the highest validation accuracy, followed by VGG16, NASNetMobile, InceptionV3, and InceptionResNetV2.
14

Al-Shargabi, Amal A., Jowharah F. Alshobaili, Abdulatif Alabdulatif, and Naseem Alrobah. "COVID-CGAN: Efficient Deep Learning Approach for COVID-19 Detection Based on CXR Images Using Conditional GANs." Applied Sciences 11, no. 16 (August 4, 2021): 7174. http://dx.doi.org/10.3390/app11167174.

Abstract:
COVID-19, a novel coronavirus infectious disease, has spread around the world, resulting in a large number of deaths. Due to a lack of physicians, emergency facilities, and equipment, medical systems have been unable to treat all patients in many countries. Deep learning is a promising approach for providing solutions to COVID-19 based on patients’ medical images. As COVID-19 is a new disease, its related dataset is still being collected and published. Small COVID-19 datasets may not be sufficient to build powerful deep learning detection models. Such models are often over-fitted, and their prediction results cannot be generalized. To fill this gap, we propose a deep learning approach for accurately detecting COVID-19 cases based on chest X-ray (CXR) images. For the proposed approach, named COVID-CGAN, we first generated a larger dataset using generative adversarial networks (GANs). Specifically, a customized conditional GAN (CGAN) was designed to generate the target COVID-19 CXR images. The expanded dataset, which contains 84.8% generated images and 15.2% original images, was then used for training five deep detection models: InceptionResNetV2, Xception, SqueezeNet, VGG16, and AlexNet. The results show that the use of the synthetic CXR images, which were generated by the customized CGAN, helped all deep learning models to achieve high detection accuracies. In particular, the highest accuracy was achieved by the InceptionResNetV2 model, which was 99.72% accurate with only ten epochs. All five models achieved kappa coefficients between 0.81 and 1, which is interpreted as an almost perfect agreement between the actual labels and the detected labels. Furthermore, the experiment showed that some models were faster yet smaller compared to the others but could still achieve high accuracy. For instance, SqueezeNet, which is a small network, required only three minutes and achieved comparable accuracy to larger networks such as InceptionResNetV2, which needed about 143 min. Our proposed approach can be applied to other fields with scarce datasets.
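
As a hedged illustration of the conditional-GAN idea used to expand the dataset, the generator sketch below embeds a class label and concatenates it with the noise vector so that COVID-19 CXR images can be requested explicitly. The layer sizes and the 128×128 output resolution are assumptions, not the paper's customized CGAN.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM, NUM_CLASSES = 128, 2  # assumed: COVID-19 vs. normal CXR

def build_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")
    # Embed the class label and fuse it with the noise vector
    lab = layers.Flatten()(layers.Embedding(NUM_CLASSES, 32)(label))
    x = layers.Concatenate()([noise, lab])
    x = layers.Dense(8 * 8 * 256, activation="relu")(x)
    x = layers.Reshape((8, 8, 256))(x)
    for filters in (128, 64, 32):  # upsample 8 -> 16 -> 32 -> 64
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same",
                                   activation="relu")(x)
    # Final upsample to a 128x128 grayscale chest X-ray
    img = layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                                 activation="tanh")(x)
    return tf.keras.Model([noise, label], img)

generator = build_generator()
fake = generator([tf.random.normal((4, LATENT_DIM)),
                  tf.constant([[1], [1], [0], [0]])])
print(fake.shape)  # (4, 128, 128, 1)
```
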
15

Miserlis, Dimitrios, Yuvaraj Munian, Lucas M. Ferrer Cardona, Pedro G. R. Teixeira, Joseph J. DuBose, Mark G. Davies, William Bohannon, Panagiotis Koutakis, and Miltiadis Alamaniotis. "Benchmarking EfficientNetB7, InceptionResNetV2, InceptionV3, and Xception Artificial Neural Networks Applications for Aortic Pathologies Analysis." Journal of Vascular Surgery 77, no. 6 (June 2023): e345. http://dx.doi.org/10.1016/j.jvs.2023.03.475.

16

Chada, Govind. "Machine Learning Models for Abnormality Detection in Musculoskeletal Radiographs." Reports 2, no. 4 (October 22, 2019): 26. http://dx.doi.org/10.3390/reports2040026.

Abstract:
Increasing radiologist workloads and increasing primary care radiology services make it relevant to explore the use of artificial intelligence (AI) and particularly deep learning to provide diagnostic assistance to radiologists and primary care physicians in improving the quality of patient care. This study investigates new model architectures and deep transfer learning to improve the performance in detecting abnormalities of upper extremities while training with limited data. DenseNet-169, DenseNet-201, and InceptionResNetV2 deep learning models were implemented and evaluated on the humerus and finger radiographs from MURA, a large public dataset of musculoskeletal radiographs. These architectures were selected because of their high recognition accuracy in a benchmark study. The DenseNet-201 and InceptionResNetV2 models, employing deep transfer learning to optimize training on limited data, detected abnormalities in the humerus radiographs with 95% CI accuracies of 83–92% and high sensitivities greater than 0.9, allowing for these models to serve as useful initial screening tools to prioritize studies for expedited review. The performance in the case of finger radiographs was not as promising, possibly due to the limitations of large inter-radiologist variation. It is suggested that the causes of this variation be further explored using machine learning approaches, which may lead to appropriate remediation.
17

Silva, Luan Oliveira, Leandro dos Santos Araújo, Victor Ferreira Souza, Raimundo Matos Barros Neto, and Adam Santos. "Comparative Analysis of Convolutional Neural Networks Applied in the Detection of Pneumonia Through X-Ray Images of Children." Learning and Nonlinear Models 18, no. 2 (June 30, 2021): 4–15. http://dx.doi.org/10.21528/lnlm-vol18-no2-art1.

Abstract:
Pneumonia is one of the most common medical problems in clinical practice and is the leading fatal infectious disease worldwide. According to the World Health Organization, pneumonia kills about 2 million children under the age of 5 and is constantly estimated to be the leading cause of infant mortality, killing more children than AIDS, malaria, and measles combined. A key element in the diagnosis is radiographic data, as chest x-rays are routinely obtained as a standard of care and can aid to differentiate the types of pneumonia. However, a rapid radiological interpretation of images is not always available, particularly in places with few resources, where childhood pneumonia has the highest incidence and mortality rates. As an alternative, the application of deep learning techniques for the classification of medical images has grown considerably in recent years. This study presents five implementations of convolutional neural networks (CNNs): ResNet50, VGG-16, InceptionV3, InceptionResNetV2, and ResNeXt50. To support the diagnosis of the disease, these CNNs were applied to solve the classification problem of medical radiographs from people with pneumonia. InceptionResNetV2 obtained the best recall and precision results for the Normal and Pneumonia classes, 93.95% and 97.52% respectively. ResNeXt50 achieved the best precision and f1-score results for the Normal class (94.62% and 94.25% respectively) and the recall and f1-score results for the Pneumonia class (97.80% and 97.65%, respectively).
18

Rho, Jinhyung, Sung-Min Shin, Kyoungsun Jhang, Gwanghee Lee, Keun-Ho Song, Hyunguk Shin, Kiwon Na, Hyo-Jung Kwon, and Hwa-Young Son. "Deep learning-based diagnosis of feline hypertrophic cardiomyopathy." PLOS ONE 18, no. 2 (February 2, 2023): e0280438. http://dx.doi.org/10.1371/journal.pone.0280438.

Abstract:
Feline hypertrophic cardiomyopathy (HCM) is a common heart disease affecting 10–15% of all cats. Cats with HCM exhibit breathing difficulties, lethargy, and heart murmur; furthermore, feline HCM can also result in sudden death. Among various methods and indices, radiography and ultrasound are the gold standards in the diagnosis of feline HCM. However, only 75% accuracy has been achieved using radiography alone. Therefore, we trained five residual architectures (ResNet50V2, ResNet152, InceptionResNetV2, MobileNetV2, and Xception) using 231 ventrodorsal radiographic images of cats (143 HCM and 88 normal) and investigated the optimal architecture for diagnosing feline HCM through radiography. To ensure the generalizability of the data, the X-ray images were obtained from five independent institutions. In addition, 42 images were used in the test. The test data were divided into two sets: 22 radiographic images were used in prediction analysis, and 20 were used in the evaluation of the peeking phenomenon and the voting strategy. As a result, all five models (ResNet50V2, ResNet152, InceptionResNetV2, MobileNetV2, and Xception) showed the same accuracy of 95.45%. In addition, two voting strategies were applied to the five CNN models: softmax voting and majority voting. The softmax voting strategy achieved 95% accuracy on the combined test data. Our findings demonstrate that an automated deep-learning system using a residual architecture can assist veterinary radiologists in screening HCM.
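
The two voting strategies the abstract compares can be expressed in a few lines of numpy: majority voting over per-model predicted labels versus "softmax voting" (averaging the models' probability outputs before taking the argmax). The shapes below (five models, two classes) mirror the paper's setting; the random probabilities are placeholders.

```python
import numpy as np

def majority_vote(probs):            # probs: (n_models, n_samples, n_classes)
    labels = probs.argmax(axis=2)    # each model's hard class prediction
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=probs.shape[2]).argmax(),
        0, labels)

def softmax_vote(probs):             # average probabilities, then argmax
    return probs.mean(axis=0).argmax(axis=1)

# e.g. five models, 20 test radiographs, 2 classes (HCM vs. normal)
probs = np.random.dirichlet(np.ones(2), size=(5, 20))
print(majority_vote(probs))
print(softmax_vote(probs))
```
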
19

Xia, Jun, Hongjiang Liu, and Linfu Zhu. "Landslide Hazard Identification Based on Deep Learning and Sentinel-2 Remote Sensing Imagery." Journal of Physics: Conference Series 2258, no. 1 (April 1, 2022): 012031. http://dx.doi.org/10.1088/1742-6596/2258/1/012031.

Abstract:
Landslides are a common geological disaster that seriously threatens human life and property, so it is particularly important to identify landslide information quickly. This paper takes the Wenchuan earthquake landslide area as the research area and uses seven deep learning methods (4-Layer-CNN, AlexNet, ResNet152V2, DenseNet201, InceptionV3, Xception and InceptionResNetV2) to study landslide detection based on Sentinel-2 remote sensing images. Using the marked landslide and non-landslide sample points, the Sentinel-2 imagery was sliced into 80 × 80 pixel tiles, and the deep learning methods were then used for model training, validation and testing. The results show that: (1) among the seven deep learning models, the DenseNet201 model has the largest F1-score, 0.8872, and the smallest RMSE, 0.2503, recognizing landslide samples well with an accuracy of 0.9172; (2) second is InceptionResNetV2, with an F1-score of 0.8721, an RMSE of 0.2721, and a landslide sample recognition accuracy of 0.9012; (3) the worst is AlexNet, with the smallest F1-score of only 0.7263, the largest RMSE of 0.4022, and an accuracy of 0.8295. Applying deep learning to Sentinel-2 remote sensing images for landslide detection can thus reach an accuracy of 91.72%, identifying landslide information quickly and accurately and providing a methodological reference and decision basis for disaster prevention and mitigation.
20

Harahap, Mawaddah, Em Manuel Laia, Lilis Suryani Sitanggang, Melda Sinaga, Daniel Franci Sihombing, and Amir Mahmud Husein. "Deteksi Penyakit Covid-19 Pada Citra X-Ray Dengan Pendekatan Convolutional Neural Network (CNN)." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 6, no. 1 (February 27, 2022): 70–77. http://dx.doi.org/10.29207/resti.v6i1.3373.

Abstract:
The Coronavirus (COVID-19) pandemic has caused the worldwide death rate to continue to increase significantly. Identification using medical imaging such as X-rays and computed tomography plays an important role in helping medical personnel diagnose COVID-19-positive and -negative patients, and several works have shown that deep learning approaches using a Convolutional Neural Network (CNN) produce good accuracy for COVID-19 detection based on chest X-ray images. In this study we examine different transfer learning architectures, VGG19, MobileNetV2, InceptionResNetV2 and ResNet (ResNet101V2, ResNet152V2 and ResNet50V2), to analyze their performance. Testing was conducted in the Google Colab environment as a platform for creating Python-based applications, with all datasets stored in Google Drive. Preprocessing was carried out before training and testing: the datasets were grouped into Normal and COVID folders and then combined into one dataset split into 352 training images, 110 testing images and 88 validation images, with detection results labeled 1 for COVID and 0 for NORMAL. Based on the test results, the ResNet50V2 model has a better accuracy rate than the other models, with an accuracy of about 0.95 (95%), precision of 0.96, recall of 0.973, F1-score of 0.966, and support of 74, followed by InceptionResNetV2, VGG19, and MobileNetV2. ResNet50V2-based CNNs can therefore be used for the initial identification of patients as COVID-infected or NORMAL.
21

Xie, Qinghua, Pengyu Chen, Zhaohuan Li, and Renfeng Xie. "Automatic Segmentation and Classification for Antinuclear Antibody Images Based on Deep Learning." Computational Intelligence and Neuroscience 2023 (February 8, 2023): 1–9. http://dx.doi.org/10.1155/2023/1353965.

Abstract:
Antinuclear antibodies (ANAs) testing is the main serological screening test for autoimmune diseases. ANAs testing is conducted principally by the indirect immunofluorescence (IIF) on human epithelial cell-substrate (HEp-2) protocol. However, due to its high variability and human subjectivity, there is a pressing need to develop an efficient method for automatic image segmentation and classification. This article develops an automatic segmentation and classification framework based on artificial intelligence (AI) for ANA images. The Otsu thresholding method and watershed segmentation algorithm are adopted to segment IIF images of cells. Moreover, multiple texture features such as scale-invariant feature transform (SIFT), local binary pattern (LBP), cooccurrence among adjacent LBPs (CoALBP), and rotation invariant cooccurrence among adjacent LBPs (RIC-LBP) are utilized. This article first adopts traditional machine learning methods such as support vector machine (SVM), k-nearest neighbor (KNN), and random forest (RF), and then uses an ensemble classifier (ECLF) with soft voting rules to merge these machine learning methods for classification. The deep learning method InceptionResNetV2 is also trained on the classification of cell images. The best accuracies of the traditional methods, 0.9269 on the Changsha dataset and 0.9635 on the ICPR 2016 dataset, are obtained by a combination of SIFT and RIC-LBP with the ECLF classifier, while InceptionResNetV2 achieves 0.9465 and 0.9836, respectively, outperforming the other schemes.
22

Nguyen, Viet Dung, Ngoc Dung Bui, and Hoang Khoi Do. "Skin Lesion Classification on Imbalanced Data Using Deep Learning with Soft Attention." Sensors 22, no. 19 (October 4, 2022): 7530. http://dx.doi.org/10.3390/s22197530.

Abstract:
Today, the rapid development of industrial zones leads to an increased incidence of skin diseases because of polluted air. According to a report by the American Cancer Society, it is estimated that in 2022 about 100,000 people will suffer from skin cancer and more than 7600 of them will not survive. In a context where doctors at provincial hospitals and health facilities are overloaded and doctors at lower levels lack experience, a tool to support doctors in diagnosing skin diseases quickly and accurately is essential. Along with the strong development of artificial intelligence technologies, many solutions to support the diagnosis of skin diseases have been researched and developed. In this paper, a deep learning model (DenseNet, InceptionNet, ResNet, etc.) is combined with Soft-Attention, which unsupervisedly extracts a heat map of the main skin lesions. Furthermore, personal information including age and gender is also used, and a new loss function that takes the data imbalance into account is proposed. Experimental results on the HAM10000 dataset show that using InceptionResNetV2 with Soft-Attention and the new loss function gives 90 percent accuracy and mean precision, F1-score, recall, and AUC of 0.81, 0.81, 0.82, and 0.99, respectively. Moreover, MobileNetV3Large combined with Soft-Attention and the new loss function achieves an accuracy of 0.86 with 11 times fewer parameters, 4 times fewer hidden layers, and 30 times faster diagnosis than InceptionResNetV2.
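
The abstract does not spell out its imbalance-aware loss, so the sketch below shows one common formulation in the same spirit: categorical cross-entropy reweighted by inverse class frequency. The HAM10000 per-class counts given here are indicative only, and this is not the authors' exact loss.

```python
import numpy as np
import tensorflow as tf

def make_weighted_cce(class_counts):
    # Inverse-frequency weights, normalized so a balanced set gives weight 1
    counts = np.asarray(class_counts, dtype="float32")
    weights = tf.constant((1.0 / counts) * counts.sum() / len(counts))

    def loss(y_true, y_pred):
        cce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        w = tf.reduce_sum(weights * y_true, axis=-1)  # weight of true class
        return cce * w
    return loss

# HAM10000 is heavily skewed; counts per class are illustrative
loss_fn = make_weighted_cce([6705, 1113, 1099, 514, 327, 142, 115])
```
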
23

Li, James, Chieh-Ju Chao, Jiwoong Jason Jeong, Juan Maria Farina, Amith R. Seri, Timothy Barry, Hana Newman, et al. "Developing an Echocardiography-Based, Automatic Deep Learning Framework for the Differentiation of Increased Left Ventricular Wall Thickness Etiologies." Journal of Imaging 9, no. 2 (February 18, 2023): 48. http://dx.doi.org/10.3390/jimaging9020048.

Abstract:
Aims: Increased left ventricular (LV) wall thickness is frequently encountered in transthoracic echocardiography (TTE). While accurate and early diagnosis is clinically important, given the differences in available therapeutic options and prognosis, an extensive workup is often required to establish the diagnosis. We propose the first echo-based, automated deep learning model with a fusion architecture to facilitate the evaluation and diagnosis of increased left ventricular (LV) wall thickness. Methods and Results: Patients with an established diagnosis of increased LV wall thickness (hypertrophic cardiomyopathy (HCM), cardiac amyloidosis (CA), and hypertensive heart disease (HTN)/others) between 1/2015 and 11/2019 at Mayo Clinic Arizona were identified. The cohort was divided into 80%/10%/10% for training, validation, and testing sets, respectively. Six baseline TTE views were used to optimize a pre-trained InceptionResnetV2 model. Each model output was used to train a meta-learner under a fusion architecture. Model performance was assessed by multiclass area under the receiver operating characteristic curve (AUROC). A total of 586 patients were used for the final analysis (194 HCM, 201 CA, and 191 HTN/others). The mean age was 55.0 years, and 57.8% were male. Among the individual view-dependent models, the apical 4-chamber model had the best performance (AUROC: HCM: 0.94, CA: 0.73, and HTN/other: 0.87). The final fusion model outperformed all the view-dependent models (AUROC: HCM: 0.93, CA: 0.90, and HTN/other: 0.92). Conclusion: The echo-based InceptionResnetV2 fusion model can accurately classify the main etiologies of increased LV wall thickness and can facilitate the process of diagnosis and workup.
24

Kumar Dasari, Sunil, and Shilpa Mehta. "Scene Based Text Recognition From Natural Images and Classification Based on Hybrid CNN Models with Performance Evaluation." International journal of electrical and computer engineering systems 14, no. 3 (March 28, 2023): 293–300. http://dx.doi.org/10.32985/ijeces.14.3.7.

Abstract:
Similar to the recognition of captions, pictures, or overlapped text, which typically appear horizontally, recognizing multi-oriented text in video frames is challenging because of its high contrast with the background. Multi-oriented text normally denotes scene text, which makes text recognition more demanding owing to its adverse characteristics, so conventional text detection approaches might not give good results for multi-oriented scene text detection. Text detection from natural images has long been challenging, and significant progress has been made recently on this task. However, most previous research does not work well on blurred, low-resolution, and small-sized images, leaving a research gap in that area. Scene-based text detection is a key area due to its numerous applications. One primary reason for the failure of earlier methods is that they could not generate precise alignments across feature areas and targets for those images. This research focuses on scene-based text detection with the aid of a YOLO-based object detector and a CNN-based classification approach. The experiments were conducted in MATLAB 2019a using the ResNet50, InceptionResNetV2, and DenseNet201 networks. The proposed hybrid ResNet-YOLO method achieved a maximum accuracy of 91%, hybrid InceptionResNetV2-YOLO 81.2%, and hybrid DenseNet201-YOLO 83.1%, which was verified by comparison with existing works: ResNet50 at 76.9%, ResNet-101 at 79.5%, and ResNet-152 at 82%.
25

Emara, Heba M., Mohamed R. Shoaib, Walid El-Shafai, Mohamed Elwekeil, Ezz El-Din Hemdan, Mostafa M. Fouda, Taha E. Taha, Adel S. El-Fishawy, El-Sayed M. El-Rabaie, and Fathi E. Abd El-Samie. "Simultaneous Super-Resolution and Classification of Lung Disease Scans." Diagnostics 13, no. 7 (April 2, 2023): 1319. http://dx.doi.org/10.3390/diagnostics13071319.

Abstract:
Acute lower respiratory infection is a leading cause of death in developing countries. Hence, progress has been made in early detection and treatment. There is still a need for improved diagnostic and therapeutic strategies, particularly in resource-limited settings. Chest X-ray and computed tomography (CT) have the potential to serve as effective screening tools for lower respiratory infections, but the use of artificial intelligence (AI) in these areas is limited. To address this gap, we present a computer-aided diagnostic system for chest X-ray and CT images of several common pulmonary diseases, including COVID-19, viral pneumonia, bacterial pneumonia, tuberculosis, lung opacity, and various types of carcinoma. The proposed system depends on super-resolution (SR) techniques to enhance image details. Deep learning (DL) techniques are used for both SR reconstruction and classification, with the InceptionResNetV2 model used as a feature extractor in conjunction with a multi-class support vector machine (MCSVM) classifier. In this paper, we compare the proposed model's performance to that of other classification models, such as ResNet101 and InceptionV3, and evaluate the effectiveness of using both softmax and MCSVM classifiers. The proposed system was tested on three publicly available datasets of CT and X-ray images, and it achieved a classification accuracy of 98.028% using a combination of SR and InceptionResNetV2. Overall, our system has the potential to serve as a valuable screening tool for lower respiratory disorders and assist clinicians in interpreting chest X-ray and CT images; in resource-limited settings, it can also provide valuable diagnostic support.
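
The hybrid pipeline the abstract describes, deep features from a pretrained InceptionResNetV2 feeding a multi-class SVM, can be sketched as below with tf.keras and scikit-learn. The pooling choice, SVM kernel, and data arrays are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from sklearn.svm import SVC

# Deep-feature extractor: InceptionResNetV2 with global average pooling
extractor = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg")
preprocess = tf.keras.applications.inception_resnet_v2.preprocess_input

def deep_features(images):  # images: (n, 299, 299, 3) array in [0, 255]
    return extractor.predict(preprocess(images.astype("float32")), verbose=0)

# Multi-class SVM on top of the deep features (one-vs-rest decision shape)
svm = SVC(kernel="rbf", decision_function_shape="ovr")
# X_train, y_train, X_test are assumed arrays of scan images and labels:
# svm.fit(deep_features(X_train), y_train)
# y_pred = svm.predict(deep_features(X_test))
```
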
26

Mousavi, Seyed Mohammad, and Soodeh Hosseini. "A Convolutional Neural Network Model for Detection of COVID -19 Disease and Pneumonia." Journal of Health and Biomedical Informatics 10, no. 1 (May 22, 2023): 41–56. http://dx.doi.org/10.34172/jhbmi.2023.13.

Abstract:
Introduction: COVID-19 has had a devastating impact on public health around the world. Since early diagnosis and timely treatment reduce mortality from COVID-19 infection, and existing diagnostic methods such as the RT-PCR test are prone to error, an alternative solution is to use artificial intelligence and image processing techniques. The overall goal is to introduce an intelligent model based on deep learning and convolutional neural networks to identify cases of COVID-19 and pneumonia from lung medical images for subsequent treatment measures. Method: The proposed model uses two datasets, radiography and CT-scan images. These datasets are preprocessed and data augmentation is applied to the images. In the next step, three architectures, EfficientNetB4, InceptionV3, and InceptionResNetV2, are used with the transfer learning method. Results: The best result obtained for CT-scan images belongs to the InceptionResNetV2 architecture, with an accuracy of 99.366%, and for radiology images to the InceptionV3 architecture, with an accuracy of 96.943%. In addition, the results indicate that CT-scan images carry more features than radiographic images, and disease diagnosis is performed more accurately on this type of data. Conclusion: The proposed model based on a convolutional neural network has higher accuracy than other similar models. By generating instant results, this method can also help in the initial evaluation of patients in medical centers, especially during epidemic peaks, when medical centers face challenges such as a lack of specialists and medical staff.
27

Duman, Burhan, and Ahmet Ali Süzen. "A Study on Deep Learning Based Classification of Flower Images." International Journal of Advanced Networking and Applications 14, no. 02 (2022): 5385–89. http://dx.doi.org/10.35444/ijana.2022.14209.

Abstract:
Deep learning techniques are becoming more and more common in computer vision applications in different fields, such as object recognition, classification, and segmentation. In this study, a classification application for flower species detection was built using deep learning on different datasets. The pre-trained MobileNet, DenseNet, Inception, and ResNet models, which are staples of deep learning, are considered separately. In the experimental studies, the models were trained on flower classes with five (Flower dataset) and seventeen (Oxford 17) types of flowers and their performances were compared. The performance tests aim to measure the success of different model optimizers on each dataset. For the Oxford-17 dataset, the validation accuracies were 93.14% (Adam) and 95.59% (SGD) for MobileNetV2; 92.85% (Adam) and 88.96% (SGD) for ResNet152V2; 91.55% (Adam) and 87.66% (SGD) for InceptionV3; 86.36% (Adam) and 83.76% (SGD) for InceptionResNetV2; and 94.16% (Adam) and 90.91% (SGD) for DenseNet169. For the Flower dataset, they were 91.62% (Adam) and 80.80% (SGD) for MobileNetV2; 92.94% (Adam) and 85.03% (SGD) for ResNet152V2; 90.71% (Adam) and 82.62% (SGD) for InceptionV3; 88.62% (Adam) and 81.84% (SGD) for InceptionResNetV2; and 90.03% (Adam) and 82.89% (SGD) for DenseNet169. Comparing the results shows that the performance of deep learning methods varies in some models depending on the number of classes in the dataset, and in most models depending on the optimizer type.
28

Nassif, Ali Bou, Ismail Shahin, Mohamed Bader, Abdelfatah Hassan, and Naoufel Werghi. "COVID-19 Detection Systems Using Deep-Learning Algorithms Based on Speech and Image Data." Mathematics 10, no. 4 (February 11, 2022): 564. http://dx.doi.org/10.3390/math10040564.

Abstract:
The global epidemic caused by COVID-19 has had a severe impact on the health of human beings. The virus has wreaked havoc throughout the world since its declaration as a worldwide pandemic and has affected an expanding number of nations around the world. Recently, a substantial amount of work has been done by doctors, scientists, and many others working on the frontlines to battle the effects of the spreading virus. The integration of artificial intelligence, specifically deep- and machine-learning applications, in the health sector has contributed substantially to the fight against COVID-19 by providing a modern innovative approach for detecting, diagnosing, treating, and preventing the virus. In this work, we focus mainly on the role of the speech signal and/or image processing in detecting the presence of COVID-19. Three types of experiments have been conducted, utilizing speech-based, image-based, and speech-and-image-based models. Long short-term memory (LSTM) has been utilized for the speech classification of the patient's cough, voice, and breathing, obtaining an accuracy that exceeds 98%. Moreover, the CNN models VGG16, VGG19, DenseNet201, ResNet50, InceptionV3, InceptionResNetV2, and Xception have been benchmarked for the classification of chest X-ray images. The VGG16 model outperforms all other CNN models, achieving an accuracy of 85.25% without fine-tuning and 89.64% after fine-tuning. Furthermore, the speech-and-image-based model has been evaluated using the same seven models, attaining an accuracy of 82.22% with the InceptionResNetV2 model. Accordingly, it is unnecessary to employ the combined speech-and-image-based model for diagnosis purposes, since the speech-based and image-based models have each shown higher accuracy than the combined model.
29

Kiratiratanapruk, Kantip, Pitchayagan Temniranrat, Wasin Sinthupinyo, Panintorn Prempree, Kosom Chaitavon, Supanit Porntheeraphat, and Anchalee Prasertsak. "Development of Paddy Rice Seed Classification Process using Machine Learning Techniques for Automatic Grading Machine." Journal of Sensors 2020 (July 1, 2020): 1–14. http://dx.doi.org/10.1155/2020/7041310.

Abstract:
To increase productivity in agricultural production, speed and accuracy are the key requirements for long-term economic growth, competitiveness, and sustainability. Traditional manual paddy rice seed classification operations are costly and unreliable because human decisions in identifying objects and issues are inconsistent, subjective, and slow. Machine vision technology provides an alternative for automated processes with nondestructive, cost-effective, fast, and accurate techniques. In this work, we present a study that utilized machine vision technology to classify 14 Oryza sativa rice varieties. Each cultivar contributed over 3,500 seed samples, for a total of close to 50,000 seeds. There were three main processes: preprocessing, feature extraction, and rice variety classification. The first process used a seed orientation method that aligned the seed bodies in the same direction. Next, a quality screening method was applied to detect unusual physical seed samples, and physical information including shape, color, and texture properties was extracted as data representations for the classification. Four statistical machine learning methods (LR, LDA, k-NN, and SVM) and five pretrained deep learning models (VGG16, VGG19, Xception, InceptionV3, and InceptionResNetV2) were applied for the classification performance comparison. In our study, the rice dataset was classified both in subgroups and as a collective group to study ambiguous relationships among the varieties. The best accuracy of the machine learning methods was obtained by SVM, at 90.61%, 82.71%, and 83.9% for subgroups 1 and 2 and the collective group, respectively, while the best accuracy with the deep learning techniques was 95.15%, from the InceptionResNetV2 model. In addition, we showed an improvement in the overall performance of the system in terms of data quality, involving seed orientation and quality screening. Our study demonstrates a practical design for rice classification using machine vision technology.
30

Boonsim, Noppakun, and Saranya Kanjaruek. "Optimized transfer learning for polyp detection." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 17, no. 1 (February 18, 2023): 73–81. http://dx.doi.org/10.37936/ecti-cit.2023171.250910.

Abstract:
Early diagnosis of colorectal cancer focuses on detecting polyps in the colon as early as possible so that patients can have the best chances for successful treatment. This research presents optimized parameters for polyp detection using a deep learning technique. Polyp and non-polyp images are trained on the InceptionResNetV2 model within the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework to identify polyps in colon images. The proposed method achieved better results than previous works: precision 92.9%, recall 82.3%, F1-measure 87.3%, and F2-measure 54.6% on the public ETIS-LARIB dataset. This detection technique can reduce the chances of missing polyps during a prolonged clinical inspection and can improve the chances of detecting multiple polyps in colon images.
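
The detector below is only a stand-in sketch: torchvision ships no InceptionResNetV2 backbone for Faster R-CNN, so a ResNet50-FPN variant is used here to show the two-class (background/polyp) head reconfiguration. It is not the authors' setup.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained Faster R-CNN ("DEFAULT" weights need torchvision >= 0.13)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Reconfigure the box head for two classes: background and polyp
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    pred = model([torch.rand(3, 512, 512)])[0]  # dummy colonoscopy frame
print(pred["boxes"].shape, pred["scores"].shape)
```
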
31

Pathik, Nikhlesh, Rajeev Kumar Gupta, Yatendra Sahu, Ashutosh Sharma, Mehedi Masud, and Mohammed Baz. "AI Enabled Accident Detection and Alert System Using IoT and Deep Learning for Smart Cities." Sustainability 14, no. 13 (June 24, 2022): 7701. http://dx.doi.org/10.3390/su14137701.

Abstract:
As the number of vehicles increases, road accidents are on the rise every day. According to a World Health Organization (WHO) survey, 1.4 million people die and 50 million people are injured worldwide every year. The key cause of death is the unavailability of medical care at the accident site or a high response time in the rescue operation. A cognitive agent-based collision detection smart accident alert and rescue system will help to minimize delays in a rescue operation and could save many lives. With the growing popularity of smart cities, intelligent transportation systems (ITS) are drawing major interest in academia and business, and are considered a means to improve road safety in smart cities. This article proposes an intelligent accident detection and rescue system that mimics the cognitive functions of the human mind using the Internet of Things (IoT) and artificial intelligence (AI). An IoT kit is developed that detects the accident and collects all accident-related information, such as position, pressure, gravitational force, speed, etc., and sends it to the cloud. In the cloud, once the accident is detected, a deep learning (DL) model is used to validate the output of the IoT module and activate the rescue module. Once the accident is detected by the DL module, all the closest emergency services, such as the hospital, police station, and mechanics, are notified. Ensemble transfer learning with dynamic weights is used to minimize the false detection rate. Due to the unavailability of a suitable dataset, a personalized dataset is generated from various videos available on the Internet. The proposed method is validated by a comparative analysis of ResNet and InceptionResNetV2. The experimental results show that InceptionResNetV2 provides better performance than ResNet, with training, validation, and test accuracies of 98%. To measure the performance of the proposed approach in the real world, it is validated on a toy car.
32

Zhang, Shanxin, Hao Feng, Shaoyu Han, Zhengkai Shi, Haoran Xu, Yang Liu, Haikuan Feng, Chengquan Zhou, and Jibo Yue. "Monitoring of Soybean Maturity Using UAV Remote Sensing and Deep Learning." Agriculture 13, no. 1 (December 30, 2022): 110. http://dx.doi.org/10.3390/agriculture13010110.

Abstract:
Soybean breeders must develop early-maturing, standard, and late-maturing varieties for planting at different latitudes to ensure that soybean plants fully utilize solar radiation. Timely monitoring of soybean breeding line maturity is therefore crucial for soybean harvesting management and yield measurement. Currently, widely used deep learning models focus on extracting deep image features, whereas shallow image feature information is ignored. In this study, we designed a new convolutional neural network (CNN) architecture, called DS-SoybeanNet, to improve the performance of unmanned aerial vehicle (UAV)-based soybean maturity monitoring. DS-SoybeanNet can extract and utilize both shallow and deep image features. We used a high-definition digital camera on board a UAV to collect 2662 soybean canopy images from two soybean breeding fields (fields F1 and F2). We compared the soybean maturity classification accuracies of (i) conventional machine learning methods (support vector machine (SVM) and random forest (RF)), (ii) current deep learning methods (InceptionResNetV2, MobileNetV2, and ResNet50), and (iii) the proposed DS-SoybeanNet. Our results show the following: (1) The conventional machine learning methods had faster calculation times than the deep learning methods and DS-SoybeanNet; for example, the computation speed of RF was 0.03 s per 1000 images. However, they had lower overall accuracies (field F2: 63.37–65.38%) than DS-SoybeanNet (field F2: 86.26%). (2) The performance of both the current deep learning and the conventional machine learning methods dropped notably when tested on a new dataset; for example, the overall accuracies of MobileNetV2 for fields F1 and F2 were 97.52% and 52.75%, respectively. (3) The proposed DS-SoybeanNet provides high-performance soybean maturity classification, with a computation speed of 11.770 s per 1000 images and overall accuracies of 99.19% (field F1) and 86.26% (field F2).
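DS-SoybeanNet's exact layer layout is not given in the abstract; the toy Keras model below only illustrates the idea it describes, pooling an early (shallow) block and a late (deep) block and classifying on their concatenation. All layer sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def shallow_deep_cnn(num_classes=3, input_shape=(128, 128, 3)):
    """Toy classifier that pools an early (shallow) block and a late (deep)
    block and concatenates both before the softmax head."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    shallow = layers.GlobalAveragePooling2D()(x)   # edge/texture-level features
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
    deep = layers.GlobalAveragePooling2D()(x)      # abstract, high-level features
    fused = layers.Concatenate()([shallow, deep])  # shallow + deep fusion
    outputs = layers.Dense(num_classes, activation="softmax")(fused)
    return tf.keras.Model(inputs, outputs)

model = shallow_deep_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```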
33

Dessai, Amita, and Hassanali Virani. "Emotion Classification Based on CWT of ECG and GSR Signals Using Various CNN Models." Electronics 12, no. 13 (June 24, 2023): 2795. http://dx.doi.org/10.3390/electronics12132795.

Abstract:
Emotions expressed by humans can be identified from facial expressions, speech signals, or physiological signals. Among these, the use of physiological signals for emotion classification is a notable emerging area of research. In emotion recognition, a person's electrocardiogram (ECG) and galvanic skin response (GSR) signals cannot be manipulated, unlike facial and voice signals. Moreover, wearables such as smartwatches and wristbands enable the detection of emotions in people's naturalistic environments. During the COVID-19 pandemic, detecting people's emotions was necessary to ensure that appropriate actions were taken according to the prevailing situation and to maintain societal balance. Experimentally, the duration of the emotion stimulus period and the social and non-social contexts of participants influence the emotion classification process; hence, classifying emotions under longer elicitation and with the social context taken into account needs to be explored. This work explores the classification of emotions using five pretrained convolutional neural network (CNN) models: MobileNet, NASNetMobile, DenseNet201, InceptionResNetV2, and EfficientNetB7. Continuous wavelet transform (CWT) coefficients were computed from suitably filtered ECG and GSR recordings from the AMIGOS database. Scalograms of the sum of frequency coefficients versus time were obtained, converted into images, and classified with the pretrained CNN models. The valence and arousal classification accuracies obtained using ECG and GSR data were, respectively, 91.27% and 91.45% with the InceptionResNetV2 classifier and 99.19% and 98.39% with the MobileNet classifier. Other studies have not explored the use of scalograms to represent ECG and GSR CWT features for emotion classification with deep learning models. Additionally, this study provides a novel classification of emotions in individual and group settings using ECG data: when the participants watched long-duration emotion elicitation videos individually and in groups, the accuracy was around 99.8%. MobileNet had the highest accuracy and the shortest execution time. These subject-independent classification methods enable emotion classification independent of varying human behavior.
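A minimal sketch of the scalogram step, assuming PyWavelets with a Morlet wavelet and Matplotlib; the sampling rate, scale range, and colormap are illustrative, and a synthetic sine stands in for an ECG/GSR recording.

```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 128                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)       # stand-in for an ECG/GSR segment

# Continuous wavelet transform over a range of scales.
scales = np.arange(1, 128)
coef, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Render the scalogram as an image a CNN can consume.
plt.imshow(np.abs(coef), extent=[0, 10, freqs[-1], freqs[0]],
           aspect="auto", cmap="jet")
plt.axis("off")
plt.savefig("scalogram.png", bbox_inches="tight", pad_inches=0)
```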
34

Reza, Ahmed Wasif, Md Mahamudul Hasan, Nazla Nowrin, and Mir Moynuddin Ahmed Shibly. "Pre-trained deep learning models in automatic COVID-19 diagnosis." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 3 (June 1, 2021): 1540. http://dx.doi.org/10.11591/ijeecs.v22.i3.pp1540-1547.

Abstract:
Coronavirus disease (COVID-19) is a devastating pandemic in the history of mankind. It is a highly contagious illness that can spread from human to human even when the carrier shows no symptoms. Because it is so contagious, detecting infected patients and isolating them has become the primary concern of healthcare professionals. This study presents an alternative way to identify COVID-19 patients through automatic examination of their chest X-rays. To develop such a system, six pre-trained deep learning models were used: VGG16, InceptionV3, Xception, DenseNet201, InceptionResNetV2, and EfficientNetB4. The models were developed on two open-source datasets of chest X-rays from patients diagnosed with COVID-19. Among the models, EfficientNetB4 achieved the best performance on both datasets, with accuracies of 96% and 97%. The empirical results were also exemplary. This type of automated system can help us fight this dangerous virus outbreak.
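A minimal Keras sketch of the transfer-learning recipe behind results like these, using the best-performing backbone named in the abstract (EfficientNetB4); the classification head, input size, and training settings are assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Frozen ImageNet backbone; 380x380 is the canonical EfficientNetB4 input size.
base = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet",
    input_shape=(380, 380, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # COVID-19 vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# Optionally unfreeze the top of `base` afterwards and fine-tune with a lower LR.
```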
35

Randellini, Enrico, Leonardo Rigutini, and Claudio Saccà. "Data Augmentation Techniques and Transfer Learning Approaches Applied to Facial Expressions Recognition Systems." International Journal of Artificial Intelligence & Applications 13, no. 1 (January 31, 2022): 55–72. http://dx.doi.org/10.5121/ijaia.2022.13104.

Abstract:
A person's facial expression is the first thing we pay attention to when we want to understand their state of mind, so the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build GAN models from scratch that generate new synthetic images for each emotion type. We then fine-tune pretrained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol: models are trained on the augmented versions of the training dataset and tested on two different databases. The combination of these techniques allows average accuracy values on the order of 85% to be reached with the InceptionResNetV2 model.
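The geometric half of the augmentation can be sketched with Keras preprocessing layers (the GAN half needs a full training loop of its own); the transformation ranges below are illustrative, not the paper's settings.

```python
import tensorflow as tf

# Geometric transformations applied on the fly during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),         # factor 0.1 of 2*pi, ~36 degrees
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # up to 10% shift in each axis
    tf.keras.layers.RandomZoom(0.1),
])

# Hypothetical usage inside a tf.data pipeline:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```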
36

Saumya Salian and Sudhir Sawarkar. "Skin Lesion Classification towards Melanoma Detection Using EfficientNetB3." Advances in Technology Innovation 8, no. 1 (January 1, 2023): 59–72. http://dx.doi.org/10.46604/aiti.2023.9488.

Abstract:
The rising incidence of melanoma skin cancer is a global health problem. Skin cancer diagnosed at an early stage greatly improves a patient's chance of survival, so building an automated and effective melanoma classification system is the need of the hour. In this paper, an automated computer-based diagnostic system for melanoma skin lesion classification is presented, using a fine-tuned EfficientNetB3 model on the ISIC 2017 dataset. To improve classification results, an automated image pre-processing phase is incorporated that effectively removes noise artifacts such as hair structures and ink markers from dermoscopic images. Comparative analyses with advanced models such as ResNet50, InceptionV3, InceptionResNetV2, and EfficientNetB0-B2 corroborate the performance of the proposed model. The proposed system also addresses model overfitting and achieves a precision of 88.00%, an accuracy of 88.13%, a recall of 88%, and an F1-score of 88%.
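The abstract does not detail its hair and ink removal step; a common dermoscopy approach in the same spirit (DullRazor-style black-hat morphology plus inpainting, via OpenCV) is sketched below, with the kernel size and threshold assumed.

```python
import cv2

def remove_hair(bgr_image):
    """Detect dark hair strands with a black-hat filter and paint them over."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    # Black-hat highlights thin dark structures (hairs) on a lighter background.
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)  # hair mask
    # Fill the masked pixels from their surroundings.
    return cv2.inpaint(bgr_image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# clean = remove_hair(cv2.imread("lesion.jpg"))
```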
37

Ahamed, Md Faysal, Md Khalid Syfullah, Ovi Sarkar, Md Tohidul Islam, Md Nahiduzzaman, Md Rabiul Islam, Amith Khandakar, Mohamed Arselene Ayari, and Muhammad E. H. Chowdhury. "IRv2-Net: A Deep Learning Framework for Enhanced Polyp Segmentation Performance Integrating InceptionResNetV2 and UNet Architecture with Test Time Augmentation Techniques." Sensors 23, no. 18 (September 7, 2023): 7724. http://dx.doi.org/10.3390/s23187724.

Abstract:
Colorectal polyps in the colon or rectum are precancerous growths that can lead to a more severe disease, colorectal cancer. Accurate segmentation of polyps in medical imaging data is essential for effective diagnosis. However, manual segmentation by endoscopists is time-consuming, error-prone, and expensive, leading to a high rate of missed anomalies. To solve this problem, an automated diagnostic system based on deep learning is proposed for finding polyps. The proposed IRv2-Net model is built on the UNet architecture with a pre-trained InceptionResNetV2 encoder to extract rich features from the input samples. A Test Time Augmentation (TTA) technique that combines the original, horizontally flipped, and vertically flipped inputs is used to obtain precise boundary information and multi-scale image features. The performance of numerous state-of-the-art (SOTA) models is compared using several metrics, including accuracy, Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, and recall. The proposed model is tested on the Kvasir-SEG and CVC-ClinicDB datasets, demonstrating superior performance on unseen real-time data, and achieves the highest area under the Receiver Operating Characteristic (ROC-AUC) and Precision-Recall (AUC-PR) curves. The model shows excellent qualitative results across different types of polyps, including larger, smaller, over-saturated, sessile, and flat polyps, both within the same dataset and across datasets. Our approach can significantly reduce the number of missed polyps. Lastly, a graphical interface is developed for producing the segmentation mask in real time. The findings of this study have potential applications in clinical colonoscopy procedures and can serve as a basis for further research and development.
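The TTA rule the abstract describes (original, horizontal, and vertical flips, each prediction un-flipped before averaging) is compact enough to sketch directly; `irv2_unet` in the usage comment is a hypothetical Keras segmentation model with channels-last input.

```python
import numpy as np

def tta_predict(model, image):
    """Average segmentation predictions over the original image and its
    horizontal and vertical flips, un-flipping each mask before fusing."""
    batch = image[np.newaxis]                                      # (1, H, W, C)
    p_orig = model.predict(batch, verbose=0)[0]
    p_h = model.predict(batch[:, :, ::-1], verbose=0)[0][:, ::-1]  # h-flip, undo
    p_v = model.predict(batch[:, ::-1], verbose=0)[0][::-1]        # v-flip, undo
    return (p_orig + p_h + p_v) / 3.0                              # fused mask

# mask = (tta_predict(irv2_unet, img) > 0.5).astype("uint8")
```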
38

Julianto, Afis, and Andi Sunyoto. "A performance evaluation of convolutional neural network architecture for classification of rice leaf disease." IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 4 (December 1, 2021): 1069. http://dx.doi.org/10.11591/ijai.v10.i4.pp1069-1078.

Abstract:
<span lang="EN-US">Plant disease is a challenge in the agricultural sector, especially for rice production. Identifying diseases in rice leaves is the first step to wipe out and treat diseases to reduce crop failure. With the rapid development of the convolutional neural network (CNN), rice leaf disease can be recognized well without the help of an expert. In this research, the performance evaluation of CNN architecture will be carried out to analyze the classification of rice leaf disease images by classifying 5932 image data which are divided into 4 disease classes. The comparison of training data, validation, and testing are 60:20:20. Adam optimization with a learning rate of 0.0009 and softmax activation was used in this study. From the experimental results, the InceptionV3 and InceptionResnetV2 architectures got the best accuracy, namely 100%, ResNet50 and DenseNet201 got 99.83%, MobileNet 99.33%, and EfficientNetB3 90.14% accuracy.</span>
39

Ismail, Aya, Marwa Elpeltagy, Mervat S. Zaki, and Kamal Eldahshan. "A New Deep Learning-Based Methodology for Video Deepfake Detection Using XGBoost." Sensors 21, no. 16 (August 10, 2021): 5413. http://dx.doi.org/10.3390/s21165413.

Abstract:
Currently, face-swapping deepfake techniques are widespread, generating a significant number of highly realistic fake videos that threaten the privacy of people and countries. Given their devastating impact on the world, distinguishing between real and deepfake videos has become a fundamental issue. This paper presents a new deepfake detection method: you only look once–convolutional neural network–extreme gradient boosting (YOLO-CNN-XGBoost). The YOLO face detector extracts the face area from video frames, the InceptionResNetV2 CNN extracts features from these faces, and the features are fed into XGBoost, which works as a recognizer on top of the CNN. The proposed method achieves an area under the receiver operating characteristic curve (AUC) of 90.62%, 90.73% accuracy, 93.53% specificity, 85.39% sensitivity, 85.39% recall, 87.36% precision, and an 86.36% F1-measure on the merged CelebDF-FaceForencics++ (c23) dataset. The experimental study confirms the superiority of the presented method over state-of-the-art methods.
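The middle two stages (CNN features into XGBoost) can be sketched in Keras plus xgboost; face cropping by YOLO is assumed to have happened upstream, and the XGBoost hyperparameters are illustrative.

```python
import tensorflow as tf
from xgboost import XGBClassifier

# Frozen InceptionResNetV2 as a 1536-d feature extractor (global average pooling).
extractor = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))

def face_features(faces):
    """faces: float array (N, 299, 299, 3), already cropped by a face detector."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(faces.copy())
    return extractor.predict(x, verbose=0)

# Hypothetical training data: face crops with real/fake labels.
# X_train = face_features(train_faces); X_test = face_features(test_faces)
# clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
# clf.fit(X_train, y_train)
# print("accuracy:", clf.score(X_test, y_test))
```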
40

Huang, Kai, Xiaoyu He, Zhentao Jin, Lisha Wu, Xinyu Zhao, Zhe Wu, Xian Wu, et al. "Assistant Diagnosis of Basal Cell Carcinoma and Seborrheic Keratosis in Chinese Population Using Convolutional Neural Network." Journal of Healthcare Engineering 2020 (August 3, 2020): 1–8. http://dx.doi.org/10.1155/2020/1713904.

Abstract:
Objectives. To evaluate the performance of CNN models in identifying clinical images of basal cell carcinoma (BCC) and seborrheic keratosis (SK) and to compare their performance with that of dermatologists. Methods. We constructed a Chinese skin disease dataset that includes 1456 BCC and 1843 SK clinical images with the corresponding medical histories. We evaluated performance using four mainstream CNN structures and transfer learning techniques, explored the interpretability of the CNN model, and compared its performance with that of 21 dermatologists. Results. The fine-tuned InceptionResNetV2 achieved the best performance, with an accuracy and area under the curve of 0.855 and 0.919, respectively. Further experimental results suggested that the CNN model was not only interpretable but also performed comparably to dermatologists. Conclusions. This study is the first on the assistant diagnosis of BCC and SK based on the proposed dataset. The promising results suggest that the CNN model's performance is comparable to that of expert dermatologists.
41

Cheng, Wen-Chang, Hung-Chou Hsiao, and Li-Hua Li. "Deep Learning Mask Face Recognition with Annealing Mechanism." Applied Sciences 13, no. 2 (January 4, 2023): 732. http://dx.doi.org/10.3390/app13020732.

Abstract:
Face recognition (FR) has matured with deep learning, but due to the COVID-19 epidemic, people need to wear masks outdoors to reduce the risk of infection, which makes FR challenging. This study uses the FaceNet approach combined with transfer learning on three validated CNN architectures of different sizes: InceptionResNetV2, InceptionV3, and MobileNetV2. With the addition of a cosine annealing (CA) mechanism, the optimizer automatically adjusts the learning rate (LR) during training, improving the model's ability to find a good solution in the global domain. Masked face recognition (MFR) is thus accomplished without increasing computational complexity over existing methods. Experimentally, the three models of different sizes perform better with the CA mechanism than with fixed-LR, step, and exponential schedules, and their accuracy reaches a practical level of about 93%.
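Cosine annealing decays the learning rate along lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T)); Keras ships this as a built-in schedule. The values below are assumptions, not the paper's settings.

```python
import tensorflow as tf

# CosineDecay computes lr(t) = lr0 * (alpha + (1 - alpha) * 0.5 * (1 + cos(pi * t / T))),
# so the LR slides from 1e-3 down toward 1e-5 over 10,000 optimizer steps.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10_000, alpha=0.01)

optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
# model.compile(optimizer=optimizer,
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```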
42

Fang, Xin, Tong Zhen, and Zhihui Li. "Lightweight Multiscale CNN Model for Wheat Disease Detection." Applied Sciences 13, no. 9 (May 8, 2023): 5801. http://dx.doi.org/10.3390/app13095801.

Abstract:
Wheat disease detection is crucial for disease diagnosis, optimized pesticide application, disease control, and improved wheat yield and quality. However, detecting wheat diseases is difficult because of their variety, and detection in complex field conditions is also challenging. Traditional models are hard to deploy on mobile devices because of their large parameter counts and high computation and resource requirements. To address these issues, this paper combines the residual module and the inception module to construct a lightweight multiscale CNN model that introduces CBAM and ECA modules into the residual block, enhancing the model's attention to diseases and reducing the influence of complex backgrounds on disease recognition. The proposed method achieves an accuracy of 98.7% on the test dataset, higher than classic convolutional neural networks such as AlexNet, VGG16, and InceptionResNetV2 and lightweight models such as MobileNetV3 and EfficientNetB0. The proposed model offers superior performance and can be deployed on mobile terminals to quickly identify wheat diseases.
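Of the two attention modules named, ECA is the more compact; a hedged Keras sketch follows, with the kernel size assumed. Instead of the dimensionality-reducing MLP of classic squeeze-and-excitation, ECA gates channels with a 1D convolution over the pooled channel descriptor.

```python
import tensorflow as tf
from tensorflow.keras import layers

def eca_block(x, kernel_size=3):
    """Efficient Channel Attention: a 1D convolution over the channel-wise
    global-average descriptor yields per-channel sigmoid gates."""
    channels = x.shape[-1]
    y = layers.GlobalAveragePooling2D()(x)              # (B, C)
    y = layers.Reshape((channels, 1))(y)                # (B, C, 1)
    y = layers.Conv1D(1, kernel_size, padding="same", use_bias=False)(y)
    y = layers.Activation("sigmoid")(y)                 # per-channel weights
    y = layers.Reshape((1, 1, channels))(y)
    return layers.Multiply()([x, y])                    # reweight feature maps

inputs = tf.keras.Input(shape=(32, 32, 64))
model = tf.keras.Model(inputs, eca_block(inputs))  # drop-in block for a residual branch
```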
43

Mondhe, Parag Jayant, Manisha P. Satone, and Namrata N. Wasatkar. "Generating captions in English and Marathi language for describing health of cotton plant." Indonesian Journal of Electrical Engineering and Computer Science 32, no. 1 (October 1, 2023): 571. http://dx.doi.org/10.11591/ijeecs.v32.i1.pp571-578.

Abstract:
Humans' basic needs include food, shelter, and clothing. Cotton is the foundation of the textile industry and one of the most profitable non-food crops for farmers around the world. Various diseases have a significant impact on cotton yield; cotton plant leaves are adversely affected by aphids, armyworms, bacterial blight, powdery mildew, and target spot. This paper proposes an encoder-decoder model for generating captions in English and Marathi that describe the health of a cotton plant from aerial images. The cotton disease captions dataset (CDCD) was developed to assess the effectiveness of the proposed approach. Experiments were conducted with various convolutional neural network (CNN) models, such as VGG-19, InceptionResNetV2, and EfficientNetV2L. The quality of the generated captions was evaluated with the BiLingual Evaluation Understudy (BLEU) metric and by subjective criteria. The results obtained for captions generated in English and in Marathi are comparable, and the combination of EfficientNetV2L with long short-term memory (LSTM) outperformed the other combinations.
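A miniature version of the encoder-decoder captioner the abstract describes, assuming Keras: pooled CNN features initialize an LSTM that predicts the next caption token. Vocabulary size, dimensions, and the state-injection choice are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len, feat_dim = 5000, 20, 1280   # assumed sizes

# Encoder side: precomputed CNN features (e.g. a pooled EfficientNetV2L output).
img_feats = tf.keras.Input(shape=(feat_dim,))
state = layers.Dense(256, activation="relu")(img_feats)  # project to LSTM size

# Decoder side: LSTM over the partial caption, initialized with the image state.
tokens = tf.keras.Input(shape=(max_len,))
emb = layers.Embedding(vocab_size, 256, mask_zero=True)(tokens)
seq = layers.LSTM(256)(emb, initial_state=[state, state])
out = layers.Dense(vocab_size, activation="softmax")(layers.Dropout(0.3)(seq))

captioner = tf.keras.Model([img_feats, tokens], out)
captioner.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Trained to predict the next word; at inference, words are generated one at a
# time and fed back until an end token or max_len is reached.
```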
44

Iskanderani, Ahmed I., Ibrahim M. Mehedi, Abdulah Jeza Aljohani, Mohammad Shorfuzzaman, Farzana Akther, Thangam Palaniswamy, Shaikh Abdul Latif, Abdul Latif, and Aftab Alam. "Artificial Intelligence and Medical Internet of Things Framework for Diagnosis of Coronavirus Suspected Cases." Journal of Healthcare Engineering 2021 (May 28, 2021): 1–7. http://dx.doi.org/10.1155/2021/3277988.

Abstract:
The world has been facing the COVID-19 pandemic since December 2019. Timely and efficient diagnosis of suspected COVID-19 patients plays a significant role in medical treatment, and automated diagnosis on chest X-rays based on deep transfer learning is needed to counter the outbreak. This work proposes a real-time Internet of Things (IoT) framework for early diagnosis of suspected COVID-19 patients using ensemble deep transfer learning. The proposed framework offers real-time communication and diagnosis of suspected COVID-19 cases and ensembles four deep learning models: InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201. Medical sensors obtain the chest X-ray modalities, and the infection is diagnosed using the deep ensemble model stored on a cloud server. The proposed deep ensemble model is compared with six well-known transfer learning models on a chest X-ray dataset. Comparative analysis revealed that the proposed model can help radiologists diagnose suspected COVID-19 patients efficiently and in a timely manner.
45

Ahmed, Tawsin Uddin, Mohammad Newaj Jamil, Mohammad Shahadat Hossain, Raihan Ul Islam, and Karl Andersson. "An Integrated Deep Learning and Belief Rule Base Intelligent System to Predict Survival of COVID-19 Patient under Uncertainty." Cognitive Computation 14, no. 2 (December 16, 2021): 660–76. http://dx.doi.org/10.1007/s12559-021-09978-8.

Abstract:
The novel coronavirus-induced disease COVID-19 is currently the biggest threat to human health, and because of the transmission ability of the virus, it is spreading rapidly in almost every corner of the globe. Bringing this outbreak under control requires the unification of medical and IT expertise. In this research, an integration of data-driven and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which produces a survival probability for that particular patient. The reliability of the proposed integrated system was tested on real patient data and compared with expert opinion, where the performance of the system was found to be promising.
46

Kensert, Alexander, Philip J. Harrison, and Ola Spjuth. "Transfer Learning with Deep Convolutional Neural Networks for Classifying Cellular Morphological Changes." SLAS DISCOVERY: Advancing the Science of Drug Discovery 24, no. 4 (January 14, 2019): 466–75. http://dx.doi.org/10.1177/2472555218818756.

Abstract:
The quantification and identification of cellular phenotypes from high-content microscopy images have proven to be very useful for understanding biological activity in response to different drug treatments. The traditional approach has been to use classical image analysis to quantify changes in cell morphology, which requires several nontrivial and independent analysis steps. Recently, convolutional neural networks have emerged as a compelling alternative, offering good predictive performance and the possibility to replace traditional workflows with a single network architecture. In this study, we applied the pretrained deep convolutional neural networks ResNet50, InceptionV3, and InceptionResNetV2 to predict cell mechanisms of action in response to chemical perturbations for two cell profiling datasets from the Broad Bioimage Benchmark Collection. These networks were pretrained on ImageNet, enabling much quicker model training. We obtain higher predictive accuracy than previously reported, between 95% and 97%. The ability to quickly and accurately distinguish between different cell morphologies from a scarce amount of labeled data illustrates the combined benefit of transfer learning and deep convolutional neural networks for interrogating cell-based images.
47

Sarker, Md Mostafa Kamal, Farhan Akram, Mohammad Alsharid, Vivek Kumar Singh, Robail Yasrab, and Eyad Elyan. "Efficient Breast Cancer Classification Network with Dual Squeeze and Excitation in Histopathological Images." Diagnostics 13, no. 1 (December 29, 2022): 103. http://dx.doi.org/10.3390/diagnostics13010103.

Abstract:
Medical image analysis methods for mammograms, ultrasound, and magnetic resonance imaging (MRI) cannot provide the underlying cellular-level features needed to understand the cancer microenvironment, which makes them unsuitable for studying breast cancer subtype classification. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze and excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) and eight subtype classes using histopathology images. For that, a pre-trained EfficientNetV2 network is used as the backbone with a modified DSE block that combines spatial and channel-wise squeeze and excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2, and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall, and F1-score at multiple magnification levels.
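The abstract does not define the DSE block's internals; the sketch below follows the closely related scSE pattern, combining a channel-wise and a spatial squeeze-and-excitation branch, with the element-wise-maximum fusion being an assumption rather than the paper's rule.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dual_se_block(x, reduction=8):
    """Channel SE (gate each channel) plus spatial SE (gate each location),
    fused by element-wise maximum (assumed fusion, as in scSE)."""
    channels = x.shape[-1]
    # Channel-wise squeeze and excitation.
    c = layers.GlobalAveragePooling2D()(x)
    c = layers.Dense(channels // reduction, activation="relu")(c)
    c = layers.Dense(channels, activation="sigmoid")(c)
    cse = layers.Multiply()([x, layers.Reshape((1, 1, channels))(c)])
    # Spatial squeeze and excitation: a 1x1 conv gives a per-pixel gate.
    s = layers.Conv2D(1, 1, activation="sigmoid")(x)
    sse = layers.Multiply()([x, s])
    return layers.Maximum()([cse, sse])

inputs = tf.keras.Input(shape=(56, 56, 64))
model = tf.keras.Model(inputs, dual_se_block(inputs))
```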
48

Dzierżak, Róża, and Zbigniew Omiotek. "Application of Deep Convolutional Neural Networks in the Diagnosis of Osteoporosis." Sensors 22, no. 21 (October 26, 2022): 8189. http://dx.doi.org/10.3390/s22218189.

Abstract:
The aim of this study was to assess the possibility of using deep convolutional neural networks (DCNNs) to develop an effective method for diagnosing osteoporosis based on CT images of the spine. The research material included the CT images of L1 spongy tissue belonging to 100 patients (50 healthy and 50 diagnosed with osteoporosis). Six pre-trained DCNN architectures with different topological depths (VGG16, VGG19, MobileNetV2, Xception, ResNet50, and InceptionResNetV2) were used in the study. The best results were obtained for the VGG16 model characterised by the lowest topological depth (ACC = 95%, TPR = 96%, and TNR = 94%). A specific challenge during the study was the relatively small (for deep learning) number of observations (400 images). This problem was solved using DCNN models pre-trained on a large dataset and a data augmentation technique. The obtained results allow us to conclude that the transfer learning technique yields satisfactory results during the construction of deep models for the diagnosis of osteoporosis based on small datasets of CT images of the spine.
49

Gómez-Guzmán, Marco Antonio, Laura Jiménez-Beristaín, Enrique Efren García-Guerrero, Oscar Roberto López-Bonilla, Ulises Jesús Tamayo-Perez, José Jaime Esqueda-Elizondo, Kenia Palomino-Vizcaino, and Everardo Inzunza-González. "Classifying Brain Tumors on Magnetic Resonance Imaging by Using Convolutional Neural Networks." Electronics 12, no. 4 (February 14, 2023): 955. http://dx.doi.org/10.3390/electronics12040955.

Abstract:
The study of neuroimaging is a very important tool in the diagnosis of central nervous system tumors. This paper presents an evaluation of seven deep convolutional neural network (CNN) models for brain tumor classification: a generic CNN implemented from scratch and six pre-trained models. The dataset used is Msoud, which combines the Figshare, SARTAJ, and Br35H datasets and contains 7023 MRI images. The magnetic resonance imaging (MRI) scans in the dataset belong to four classes: three brain tumor types (glioma, meningioma, and pituitary) and one class of healthy brains. The models are trained on input MRI images with several preprocessing strategies applied in this paper. The CNN models evaluated are the generic CNN, ResNet50, InceptionV3, InceptionResNetV2, Xception, MobileNetV2, and EfficientNetB0. In the comparison of all models, the best CNN model for this dataset was InceptionV3, which obtained an average accuracy of 97.12%. The development of these techniques could help clinicians specializing in the early detection of brain tumors.
50

Velasco, Jessica S., Jomer V. Catipon, Edmund G. Monilar, Villamor M. Amon, Glenn C. Virrey, and Lean Karlo S. Tolentino. "Classification of Skin Disease Using Transfer Learning in Convolutional Neural Networks." International Journal of Emerging Technology and Advanced Engineering 13, no. 4 (April 5, 2023): 1–7. http://dx.doi.org/10.46338/ijetae0423_01.

Abstract:
Automatic classification of skin diseases plays an important role in healthcare, especially in dermatology: dermatologists can identify different skin diseases with the help of an Android device and artificial intelligence. Deep learning requires a lot of training time due to the number of sequential layers and the volume of input data involved, so a powerful computer with a graphics processing unit (GPU) is the ideal platform for training thanks to its parallel processing capability. This study gathered images of 7 types of skin disease prevalent in the Philippines for a skin disease classification system. The 3400 images, covering chicken pox, acne, eczema, pityriasis rosea, psoriasis, tinea corporis, and vitiligo, were used for training and testing different convolutional neural network models. The study applied transfer learning to skin disease classification using pre-trained weights from convolutional neural network models such as VGG16, VGG19, MobileNet, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, DenseNet201, and NASNetMobile. The MobileNet model achieved the highest accuracy, 94.1%, and the VGG16 model the lowest, 44.1%.