Ready-made bibliography on the topic "InceptionResNetV2"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "InceptionResNetV2".

An "Add to bibliography" button appears next to every work in the bibliography. Click it, and we automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "InceptionResNetV2"

1

Ullah, Naeem, Javed Ali Khan, Mohammad Sohail Khan, Wahab Khan, Izaz Hassan, Marwa Obayya, Noha Negm, and Ahmed S. Salama. "An Effective Approach to Detect and Identify Brain Tumors Using Transfer Learning". Applied Sciences 12, no. 11 (June 2, 2022): 5645. http://dx.doi.org/10.3390/app12115645.

Abstract:
Brain tumors are considered one of the most serious, prominent and life-threatening diseases globally. Brain tumors cause thousands of deaths every year around the globe because of the rapid growth of tumor cells. Therefore, timely analysis and automatic detection of brain tumors are required to save the lives of thousands of people around the globe. Recently, deep transfer learning (TL) approaches have been widely used to detect and classify the three most prominent types of brain tumors, i.e., glioma, meningioma and pituitary. For this purpose, we employ state-of-the-art pre-trained TL techniques to identify and detect glioma, meningioma and pituitary brain tumors. The aim is to identify the performance of nine pre-trained TL classifiers, i.e., InceptionResNetV2, InceptionV3, Xception, ResNet18, ResNet50, ResNet101, ShuffleNet, DenseNet201 and MobileNetV2, by automatically identifying and detecting brain tumors using a fine-grained classification approach. For this, the TL algorithms are evaluated on a baseline brain tumor classification (MRI) dataset, which is freely available on Kaggle. Additionally, all deep learning (DL) models are fine-tuned with their default values. The fine-grained classification experiment demonstrates that the InceptionResNetV2 TL algorithm performs better and achieves the highest accuracy in detecting and classifying glioma, meningioma and pituitary brain tumors, and hence it can be regarded as the best classification algorithm. We achieve 98.91% accuracy, 98.28% precision, 99.75% recall and 99% F-measure values with the InceptionResNetV2 TL algorithm, which outperforms the other DL algorithms. Additionally, to ensure and validate the performance of TL classifiers, we compare the efficacy of the InceptionResNetV2 TL algorithm with hybrid approaches, in which we use convolutional neural networks (CNN) for deep feature extraction and a Support Vector Machine (SVM) for classification. Similarly, the experiment's results show that TL algorithms, and InceptionResNetV2 in particular, outperform the state-of-the-art DL algorithms in classifying brain MRI images into glioma, meningioma, and pituitary. The hybrid DL approaches used in the experiments are MobileNetV2, DenseNet201, SqueezeNet, AlexNet, GoogLeNet, InceptionV3, ResNet50, ResNet18, ResNet101, Xception, InceptionResNetV2, VGG19 and ShuffleNet.
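The accuracy, precision, recall, and F-measure figures quoted above follow from the standard confusion-matrix definitions; a minimal sketch with made-up counts (not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for one tumor class (illustrative only)
acc, prec, rec, f1 = classification_metrics(tp=95, fp=2, fn=1, tn=102)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))  # 0.985 0.979 0.99 0.984
```

Note that F1 (here called F-measure) is the harmonic mean of precision and recall, so it always lies between the two.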
2

Yazid Aufar, Muhammad Helmy Abdillah, and Jiki Romadoni. "Web-based CNN Application for Arabica Coffee Leaf Disease Prediction in Smart Agriculture". Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 7, no. 1 (February 2, 2023): 71–79. http://dx.doi.org/10.29207/resti.v7i1.4622.

Abstract:
In the agriculture industry, plant diseases pose difficulties, particularly for Arabica coffee production. A first step in eliminating and treating infections to avoid crop damage is recognizing ailments on Arabica coffee leaves. Convolutional neural networks (CNN) are rapidly advancing, making it possible to diagnose Arabica coffee leaf damage without a specialist's help. A CNN aims to find features adaptively through backpropagation by adding layers, including convolutional and pooling layers. This study aims to optimize and increase the accuracy of Arabica coffee leaf disease classification utilizing the neural network architectures ResNet50, InceptionResNetV2, MobileNetV2, and DenseNet169. Additionally, this research presents an interactive web platform integrated with the Arabica coffee leaf disease prediction system. In this research, 5000 images were divided into five classes—Phoma, Rust, Cercospora, healthy, and Miner—to assess the efficacy of the CNN architectures in classifying images of Arabica coffee leaf disease. The ratio between training, validation, and testing data is 80:10:10. In the testing findings, the InceptionResNetV2 and DenseNet169 designs had the highest accuracy, at 100%, followed by the MobileNetV2 architecture at 99% and the ResNet50 architecture at 59%. Even though MobileNetV2 is not more accurate than InceptionResNetV2 and DenseNet169, MobileNetV2 is the smallest of the three models and was therefore chosen for web application development. The system accurately identified and advised treatment for Arabica coffee leaf diseases, as shown by the system's implementation outcomes.
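The 80:10:10 train/validation/test ratio mentioned above can be realized as a simple shuffled index split; a hedged sketch with a synthetic file list (the file names and seed are illustrative, not from the paper):

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split items into train/validation/test partitions."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 5000 images, as in the study; names are made up for illustration
images = [f"leaf_{i:04d}.jpg" for i in range(5000)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 4000 500 500
```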
3

Jiang, Kaiyuan, Jiawei Zhang, Haibin Wu, Aili Wang, and Yuji Iwahori. "A Novel Digital Modulation Recognition Algorithm Based on Deep Convolutional Neural Network". Applied Sciences 10, no. 3 (February 9, 2020): 1166. http://dx.doi.org/10.3390/app10031166.

Abstract:
The modulation recognition of digital signals under non-cooperative conditions is an important research topic. With the rapid development of artificial intelligence technology, deep learning theory is increasingly being applied to the field of modulation recognition. In this paper, a novel digital signal modulation recognition algorithm is proposed, which combines the InceptionResNetV2 network with transfer adaptation, called InceptionResNetV2-TA. Firstly, the received signal is preprocessed to generate the constellation diagram. Then, the constellation diagram is used as the input of the InceptionResNetV2 network to identify different kinds of signals. Transfer adaptation is used for feature extraction, and an SVM classifier is used to identify the modulation mode of the digital signal. The constellation diagrams of three typical signals, Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK) and 8 Phase Shift Keying (8PSK), were made for the experiments. When the signal-to-noise ratio (SNR) is 4 dB, the recognition rates of BPSK, QPSK and 8PSK obtained by InceptionResNetV2-TA are 1.0, 0.9966 and 0.9633, respectively, and the recognition rate can be 3% higher than that of other algorithms. Compared with traditional modulation recognition algorithms, the experimental results show that the proposed algorithm has a higher accuracy rate for digital signal modulation recognition at low SNR.
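The constellation diagrams for BPSK, QPSK, and 8PSK are built from equally spaced points on the unit circle; a minimal sketch of the ideal (noise-free) constellation points, with a conventional pi/4 offset for QPSK (the offset choice is an assumption, not taken from the paper):

```python
import cmath
import math

def psk_constellation(m, phase_offset=0.0):
    """Ideal M-PSK constellation: m points equally spaced on the unit circle."""
    return [cmath.exp(1j * (phase_offset + 2 * math.pi * k / m)) for k in range(m)]

bpsk = psk_constellation(2)              # two antipodal points
qpsk = psk_constellation(4, math.pi / 4) # four points on the diagonals
psk8 = psk_constellation(8)              # eight points, 45 degrees apart

print([f"{p.real:+.2f}{p.imag:+.2f}j" for p in qpsk])
# ['+0.71+0.71j', '-0.71+0.71j', '-0.71-0.71j', '+0.71-0.71j']
```

In the cited work, scatter plots of such symbols (plus channel noise) are rendered as images and fed to the InceptionResNetV2 network.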
4

Faruk, Omar, Eshan Ahmed, Sakil Ahmed, Anika Tabassum, Tahia Tazin, Sami Bourouis, and Mohammad Monirujjaman Khan. "A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning". Journal of Healthcare Engineering 2021 (November 25, 2021): 1–10. http://dx.doi.org/10.1155/2021/1002799.

Abstract:
Deep learning has emerged as a promising technique for a variety of elements of infectious disease monitoring and detection, including tuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning model using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by utilizing image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and nontuberculosis cases using transfer learning from their pretrained starting weights. With an F1-score of 99 percent, InceptionResNetV2 had the highest accuracy. This research is more accurate than earlier published work. Additionally, it outperforms all other models in terms of reliability. The suggested approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection.
5

Al-Timemy, Ali H., Laith Alzubaidi, Zahraa M. Mosa, Hazem Abdelmotaal, Nebras H. Ghaeb, Alexandru Lavric, Rossen M. Hazarbassanov, Hidenori Takahashi, Yuantong Gu, and Siamak Yousefi. "A Deep Feature Fusion of Improved Suspected Keratoconus Detection with Deep Learning". Diagnostics 13, no. 10 (May 10, 2023): 1689. http://dx.doi.org/10.3390/diagnostics13101689.

Abstract:
Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used the Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused the features from Xception and InceptionResNetV2 to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curve (AUC) of 0.99 and an accuracy range of 97–100% in distinguishing normal eyes from eyes with subclinical and established KCN. We further validated the model on an independent dataset with 213 eyes examined in Iraq and obtained AUCs of 0.91–0.92 and an accuracy range of 88–92%. The proposed model is a step toward improving the detection of clinical and subclinical forms of KCN.
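Fusing features from two backbones, as described above, typically amounts to concatenating the per-image feature vectors before the final classifier. A minimal sketch with random stand-in vectors; the 2048- and 1536-dimensional widths are assumptions about the pooled feature sizes of Xception and InceptionResNetV2, not figures from the paper:

```python
import random

def fuse_features(feat_a, feat_b):
    """Simple early fusion: concatenate two feature vectors into one."""
    return list(feat_a) + list(feat_b)

rng = random.Random(0)
xception_feat = [rng.random() for _ in range(2048)]        # assumed Xception feature width
inceptionresnet_feat = [rng.random() for _ in range(1536)] # assumed InceptionResNetV2 width
fused = fuse_features(xception_feat, inceptionresnet_feat)
print(len(fused))  # 3584
```

The fused vector then feeds a single downstream classifier, letting it draw on evidence from both architectures.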
6

Cheng, Wen-Chang, Hung-Chou Hsiao, Yung-Fa Huang, and Li-Hua Li. "Combining Classifiers for Deep Learning Mask Face Recognition". Information 14, no. 7 (July 21, 2023): 421. http://dx.doi.org/10.3390/info14070421.

Abstract:
This research proposes a single network model architecture for mask face recognition using the FaceNet training method. Three pre-trained convolutional neural networks of different sizes are combined, namely InceptionResNetV2, InceptionV3, and MobileNetV2. The models are augmented by attaching a fully connected network with a SoftMax output layer. We combine triplet loss and categorical cross-entropy loss to optimize the training process. In addition, the learning rate of the optimizer is dynamically updated using the cosine annealing mechanism, which improves the convergence of the model during training. Mask face recognition (MFR) experimental results on a custom MASK600 dataset show that the proposed InceptionResNetV2 and InceptionV3 models use only 20 training epochs, and MobileNetV2 only 50, yet achieve more than 93% accuracy, exceeding previous MFR works. In addition to reaching a practical level of accuracy, the approach saves model-training time and effectively reduces energy costs.
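The cosine annealing mechanism mentioned above decays the learning rate along a half cosine from an initial to a minimum value over the training epochs. A minimal sketch without warm restarts; the rate values are illustrative, not the paper's settings:

```python
import math

def cosine_annealing(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
    """Cosine-annealed learning rate for a given epoch (no warm restarts)."""
    t = epoch / max(1, total_epochs - 1)  # training progress in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

schedule = [cosine_annealing(e, 20) for e in range(20)]
print(f"{schedule[0]:.2e} {schedule[10]:.2e} {schedule[-1]:.2e}")
```

The schedule starts at `lr_max`, decays slowly at first, fastest mid-training, and flattens out toward `lr_min`, which tends to smooth final convergence.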
7

Mahdianpari, Masoud, Bahram Salehi, Mohammad Rezaee, Fariba Mohammadimanesh, and Yun Zhang. "Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery". Remote Sensing 10, no. 7 (July 14, 2018): 1119. http://dx.doi.org/10.3390/rs10071119.

Abstract:
Despite recent advances of deep Convolutional Neural Networks (CNNs) in various computer vision tasks, their potential for classification of multispectral remote sensing images has not been thoroughly explored. In particular, the applications of deep CNNs using optical remote sensing data have focused on the classification of very high-resolution aerial and satellite data, owing to the similarity of these data to the large datasets in computer vision. Accordingly, this study presents a detailed investigation of state-of-the-art deep learning tools for classification of complex wetland classes using multispectral RapidEye optical imagery. Specifically, we examine the capacity of seven well-known deep convnets, namely DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2, for wetland mapping in Canada. In addition, the classification results obtained from deep CNNs are compared with those based on conventional machine learning tools, including Random Forest and Support Vector Machine, to further evaluate the efficiency of the former to classify wetlands. The results illustrate that the full-training of convnets using five spectral bands outperforms the other strategies for all convnets. InceptionResNetV2, ResNet50, and Xception are distinguished as the top three convnets, providing state-of-the-art classification accuracies of 96.17%, 94.81%, and 93.57%, respectively. The classification accuracies obtained using Support Vector Machine (SVM) and Random Forest (RF) are 74.89% and 76.08%, respectively, considerably inferior relative to CNNs. Importantly, InceptionResNetV2 is consistently found to be superior compared to all other convnets, suggesting the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
8

Pohtongkam, Somchai, and Jakkree Srinonchat. "Tactile Object Recognition for Humanoid Robots Using New Designed Piezoresistive Tactile Sensor and DCNN". Sensors 21, no. 18 (September 8, 2021): 6024. http://dx.doi.org/10.3390/s21186024.

Abstract:
A tactile sensor array is a crucial component for applying physical sensors to a humanoid robot. This work focused on developing a palm-size tactile sensor array (56.0 mm × 56.0 mm) to apply object recognition for the humanoid robot hand. This sensor was based on PCB technology operating on the piezoresistive principle. A conductive polymer composite sheet was used as the sensing element, and the matrix array of this sensor was 16 × 16 pixels. The sensitivity of this sensor was evaluated and the sensor was installed on the robot hand. The tactile images, with resolution enhancement using bicubic interpolation, obtained from 20 classes, were used to train and test 19 different DCNNs. InceptionResNetV2 provided superior performance with 91.82% accuracy. However, using a multimodal learning method that combined InceptionResNetV2 and XceptionNet, the highest recognition rate of 92.73% was achieved. Moreover, this recognition rate improved further when object exploration was applied.
9

Mondal, M. Rubaiyat Hossain, Subrato Bharati, and Prajoy Podder. "CO-IRv2: Optimized InceptionResNetV2 for COVID-19 detection from chest CT images". PLOS ONE 16, no. 10 (October 28, 2021): e0259179. http://dx.doi.org/10.1371/journal.pone.0259179.

Abstract:
This paper focuses on the application of deep learning (DL) in the diagnosis of coronavirus disease (COVID-19). The novelty of this work is in the introduction of the optimized InceptionResNetV2 for COVID-19 (CO-IRv2) method. A part of the CO-IRv2 scheme is derived from the concepts of InceptionNet and ResNet with hyperparameter tuning, while the remaining part is a new architecture consisting of a global average pooling layer, batch normalization, dense layers, and dropout layers. The proposed CO-IRv2 is applied to a new dataset of 2481 computed tomography (CT) images formed by combining two independent datasets. Data resizing and normalization are performed, and the evaluation is run for up to 25 epochs. Precision, recall, accuracy, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC) are used as performance metrics. The effectiveness of three optimizers, Adam, Nadam and RMSProp, is evaluated in classifying suspected COVID-19 patients and normal people. Results show that for CO-IRv2 on CT images, the obtained accuracies of the Adam, Nadam and RMSProp optimizers are 94.97%, 96.18% and 96.18%, respectively. Furthermore, it is shown that for CT images, CO-IRv2 with the Nadam optimizer outperforms existing DL algorithms in the diagnosis of COVID-19 patients. Finally, CO-IRv2 is applied to an X-ray dataset of 1662 images, resulting in a classification accuracy of 99.40%.
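The global average pooling layer in the CO-IRv2 head collapses each feature map to its spatial mean, turning an H × W × C activation tensor into a length-C vector. A minimal pure-Python sketch on a tiny stand-in tensor (the values are made up for illustration):

```python
def global_average_pooling(feature_maps):
    """Reduce a list of 2-D feature maps (H x W) to one mean value per channel."""
    pooled = []
    for fmap in feature_maps:
        values = [v for row in fmap for v in row]  # flatten the H x W map
        pooled.append(sum(values) / len(values))
    return pooled

# Two tiny 2x2 channels standing in for a real H x W x C activation tensor
channels = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.0, 0.0], [10.0, 10.0]],
]
print(global_average_pooling(channels))  # [2.5, 5.0]
```

Because the output length depends only on the channel count, this layer lets the dense head accept backbone features without a fixed spatial input size.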
10

Angurala, Mohit. "Augmented MRI Images for Classification of Normal and Tumors Brain through Transfer Learning Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5s (June 10, 2023): 536–42. http://dx.doi.org/10.17762/ijritcc.v11i5s.7130.

Abstract:
A brain tumor is a severe malignant condition caused by uncontrolled and abnormal cell division. Recent advances in deep learning have aided the health sector in medical imaging for the diagnosis of numerous disorders. Convolutional neural networks are the most frequently and widely used deep learning algorithms for visual learning and image recognition. This research performs multi-class classification of brain tumors from images obtained by Magnetic Resonance Imaging (MRI) using deep learning models that have been pre-trained for transfer learning. In the publicly available MRI brain tumor dataset, tumors identified as glioma, meningioma, and pituitary account for most brain tumors. To ensure the robustness of the suggested method, data acquisition and preprocessing are performed in the first step, followed by data augmentation. Finally, transfer learning algorithms including DenseNet, ResNetV2, and InceptionResNetV2 have been applied to find the optimal algorithm based on various parameters, including accuracy, precision, recall, and area under the curve (AUC). The experimental outcomes show that the model's validation accuracy is high for DenseNet (about 97%), while ResNetV2 and InceptionResNetV2 achieved only 77% and 80%, respectively.

Book chapters on the topic "InceptionResNetV2"

1

Ganesh, Mukkesh, Sanjana Dulam, and Pattabiraman Venkatasubbu. "Diabetic Retinopathy Diagnosis with InceptionResNetV2, Xception, and EfficientNetB3". In Artificial Intelligence and Technologies, 405–13. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6448-9_41.

2

Sharma, Osho, Akashdeep Sharma, and Arvind Kalia. "Windows Malware Hunting with InceptionResNetv2 Assisted Malware Visualization Approach". In Proceedings of International Conference on Computational Intelligence and Data Engineering, 171–88. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0609-3_12.

3

Vodnala, Deepika, Konkathi Shreya, Maduru Sandhya, and Cholleti Varsha. "Skin Cancer Detection Using Convolutional Neural Networks and InceptionResNetV2". In Algorithms for Intelligent Systems, 595–604. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7041-2_50.

4

Akhand, Md Nafis Tahmid, Sunanda Das, and Mahmudul Hasan. "Traffic Density Estimation Using Transfer Learning with Pre-trained InceptionResNetV2 Network". In Machine Intelligence and Data Science Applications, 363–75. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2347-0_28.

5

Simon, Philomina, and V. Uma. "Integrating InceptionResNetv2 Model and Machine Learning Classifiers for Food Texture Classification". In Advances in Cognitive Science and Communications, 531–39. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8086-2_51.

6

Singh, Rahul, Avinash Sharma, Neha Sharma, Kulbhushan Sharma, and Rupesh Gupta. "A Deep Learning-Based InceptionResNet V2 Model for Cassava Leaf Disease Detection". In Emerging Trends in Expert Applications and Security, 423–32. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1946-8_38.

7

Muralikrishnan, Madhuvanti, and R. Anitha. "Comparison of Breast Cancer Multi-class Classification Accuracy Based on Inception and InceptionResNet Architecture". In Emerging Trends in Computing and Expert Technology, 1155–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32150-5_118.

8

Sharma, Aditya, Arshdeep Singh Chudey, and Mrityunjay Singh. "COVID-19 Detection Using Chest X-Ray and Transfer Learning". In Handbook of Research on Machine Learning Techniques for Pattern Recognition and Information Security, 171–86. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3299-7.ch011.

Abstract:
The novel coronavirus (COVID-19), which started in the Wuhan province of China, prompted a major outbreak that culminated in a worldwide pandemic. Cases have been recorded across the globe, with deaths close to 2.5 million. The increased number of cases and the newness of such a pandemic have left hospitals under-equipped, leading to problems in diagnosing the disease. Previous studies have shown radiography to be the fastest testing method: a screening test using an X-ray scan of the chest region has proved effective. This method normally requires a trained radiologist to detect the disease, so automating it with deep learning models can prove effective. Due to the lack of a large dataset, pre-trained CNN models are used in this study. Several models are employed, including VGG-16, ResNet-50, InceptionV3, and InceptionResNetV2. ResNet-50 provided the best accuracy, at 98.3%. The performance evaluation has been done using metrics such as the receiver operating characteristic curve and the confusion matrix.
9

Kalinathan, Lekshmi, Deepika Sivasankaran, Janet Reshma Jeyasingh, Amritha Sennappa Sudharsan, and Hareni Marimuthu. "Classification of Hepatocellular Carcinoma Using Machine Learning". In Hepatocellular Carcinoma - Challenges and Opportunities of a Multidisciplinary Approach [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.99841.

Abstract:
Hepatocellular Carcinoma (HCC) proves challenging for the detection and classification of its stages, mainly due to the lack of disparity between cancerous and noncancerous cells. This work focuses on detecting hepatic cancer stages from histopathology data using machine learning techniques. It aims to develop a prototype that helps pathologists deliver a report quickly and detect the stage of the cancer cells. Hence we propose a system to identify and classify HCC based on features obtained by deep learning using pre-trained models such as VGG-16, ResNet-50, DenseNet-121, InceptionV3, InceptionResNet50 and Xception, followed by machine learning using a support vector machine (SVM) to learn from these features. The system comprising DenseNet-121 for feature extraction and SVM for classification achieves 82% accuracy.

Conference papers on the topic "InceptionResNetV2"

1

Guefrechi, Sarra, Marwa Ben Jabra, and Habib Hamam. "Deepfake video detection using InceptionResnetV2". In 2022 6th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2022. http://dx.doi.org/10.1109/atsip55956.2022.9805902.

2

Naveenkumar, M., S. Srithar, B. Rajesh Kumar, S. Alagumuthukrishnan, and P. Baskaran. "InceptionResNetV2 for Plant Leaf Disease Classification". In 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). IEEE, 2021. http://dx.doi.org/10.1109/i-smac52330.2021.9641025.

3

Kesiman, Made Windu Antara, Kadek Teguh Dermawan, and I. Gede Mahendra Darmawiguna. "Balinese Carving Ornaments Classification Using InceptionResnetV2 Architecture". In 2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM). IEEE, 2022. http://dx.doi.org/10.1109/cenim56801.2022.10037265.

4

Jethwa, Nishant, Hamza Gabajiwala, Arundhati Mishra, Parth Joshi, and Prachi Natu. "Comparative Analysis between InceptionResnetV2 and InceptionV3 for Attention based Image Captioning". In 2021 2nd Global Conference for Advancement in Technology (GCAT). IEEE, 2021. http://dx.doi.org/10.1109/gcat52182.2021.9587514.

5

Ghadiri, Ali, Afrooz Sheikholeslami, and Asiyeh Bahaloo. "Multi-label detection of ophthalmic disorders using InceptionResNetV2 on multiple datasets". In 2022 8th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS). IEEE, 2022. http://dx.doi.org/10.1109/icspis56952.2022.10043998.

6

Souza, Victor, Luan Silva, Adam Santos, and Leandro Araújo. "Análise Comparativa de Redes Neurais Convolucionais no Reconhecimento de Cenas". In Computer on the Beach. Itajaí: Universidade do Vale do Itajaí, 2020. http://dx.doi.org/10.14210/cotb.v11n1.p419-426.

Abstract:
This paper aims to compare the convolutional neural networks (CNNs) ResNet50, InceptionV3, and InceptionResNetV2, tested with and without pre-trained weights on the ImageNet database, in order to solve the scene recognition problem. The results showed that the pre-trained ResNet50 achieved the best performance with an average accuracy of 99.82% in training and 85.53% in the test, while the worst result was attributed to the ResNet50 without pre-training, with 88.76% and 71.66% of average accuracy in training and testing, respectively. The main contribution of this work is the direct comparison between the CNNs widely applied in the literature, that is, to enable a better selection of the algorithms in the various scene recognition applications.
7

Randellini, Enrico, Leonardo Rigutini, and Claudio Saccà. "Data Augmentation and Transfer Learning Approaches Applied to Facial Expressions Recognition". In 2nd International Conference on NLP Techniques and Applications (NLPTA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111912.

Abstract:
The facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build from scratch GAN models able to generate new synthetic images for each emotion type. We then fine-tune pre-trained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol approach: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques yields average accuracy values on the order of 85% for the InceptionResNetV2 model.
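Geometrical transformations like the ones applied above can be as simple as flips and 90-degree rotations of the pixel grid; a minimal sketch on a toy 2x3 "image" (the values stand in for pixel intensities):

```python
def hflip(img):
    """Horizontal flip: reverse each row of the pixel grid."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))     # [[3, 2, 1], [6, 5, 4]]
print(rotate90(img))  # [[4, 1], [5, 2], [6, 3]]
```

Each transformed copy keeps its original label, which is how such augmentation multiplies a small facial-expression dataset without new annotation work.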
8

Chichanoski, Gustavo, and Maria Bernadete de Morais França. "System for Assistance in Diagnosis of Diseases Pulmonary". In 9th International Conference on Computer Science and Information Technology (CSIT 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121408.

Abstract:
Covid-19 is caused by the SARS-CoV-2 virus, and most infected people experience a mild to moderate respiratory illness. To assist in diagnosing and triaging patients, this work developed a Covid-19 classification system based on chest radiology images. For this purpose, the neural network models ResNet50V2, ResNet101V2, DenseNet121, DenseNet169, DenseNet201, InceptionResNetV2, VGG-16, and VGG-19 were used, comparing their precision, accuracy, recall, and specificity. The images were segmented by a U-Net network, and patches of the lung image were generated, which served as input for the different classification models. Finally, a probabilistic Grad-CAM was generated to assist in interpreting the results of the neural networks. The segmentation obtained a Jaccard similarity of 94.30%, while for classification the precision, specificity, accuracy, and recall were evaluated and compared with the reference literature: DenseNet121 obtained an accuracy of 99.28%, while ResNet50V2 presented a specificity of 99.72%, both for Covid-19.
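The Jaccard similarity used above to score the U-Net segmentation is the intersection over union of the predicted and ground-truth masks; a minimal sketch on toy binary masks (the mask values are illustrative, not the paper's data):

```python
def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0  # two empty masks count as identical

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 1, 1, 1, 0, 0]
print(jaccard(pred, truth))  # 0.75
```

A score of 94.30%, as reported above, means the predicted lung masks overlap the reference masks almost entirely.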
9

Zeiser, Felipe, Cristiano da Costa, and Gabriel Ramos. "Convolutional Neural Networks Evaluation for COVID-19 Classification on Chest Radiographs". In LatinX in AI at International Conference on Machine Learning 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai2021072418.

Abstract:
Early identification of patients with COVID-19 is essential to enable adequate treatment and to reduce the burden on the health system. The gold standard for COVID-19 detection is the RT-PCR test. However, due to the high demand for tests, these can take days or even weeks in some regions of Brazil. Thus, an alternative for the detection of COVID-19 is the analysis of chest X-rays (CXR). This paper proposes the evaluation of convolutional neural networks to identify pneumonia due to COVID-19 in CXR. The proposed methodology consists of an evaluation of six convolutional architectures pre-trained with the ImageNet dataset: InceptionResNetV2, InceptionV3, MobileNetV2, ResNet50, VGG16, and Xception. The obtained results demonstrate that the Xception architecture presented superior performance in the classification of CXR, with an accuracy of 85.64%, sensitivity of 85.71%, specificity of 85.65%, F1-score of 85.49%, and an AUC of 0.9648.
10

Zeiser, Felipe, Cristiano da Costa, and Gabriel de Oliveira. "Convolutional Neural Networks Evaluation for COVID-19 Classification on Chest Radiographs". In LatinX in AI at International Conference on Machine Learning 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai2021072412.

Abstract:
Early identification of patients with COVID-19 is essential to enable adequate treatment and to reduce the burden on the health system. The gold standard for COVID-19 detection is the RT-PCR test. However, due to the high demand for tests, these can take days or even weeks in some regions of Brazil. Thus, an alternative for the detection of COVID-19 is the analysis of chest X-rays (CXR). This paper proposes the evaluation of convolutional neural networks to identify pneumonia due to COVID-19 in CXR. The proposed methodology consists of an evaluation of six convolutional architectures pre-trained with the ImageNet dataset: InceptionResNetV2, InceptionV3, MobileNetV2, ResNet50, VGG16, and Xception. The obtained results demonstrate that the Xception architecture presented superior performance in the classification of CXR, with an accuracy of 85.64%, sensitivity of 85.71%, specificity of 85.65%, F1-score of 85.49%, and an AUC of 0.9648.
