To view the other types of publications on this topic, follow the link: EfficientNet.

Journal articles on the topic "EfficientNet"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "EfficientNet".

Next to every work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever the relevant data are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Munien, Chanaleä, and Serestina Viriri. "Classification of Hematoxylin and Eosin-Stained Breast Cancer Histology Microscopy Images Using Transfer Learning with EfficientNets". Computational Intelligence and Neuroscience 2021 (April 9, 2021): 1–17. http://dx.doi.org/10.1155/2021/5580914.

Abstract:
Breast cancer is a fatal disease and is a leading cause of death in women worldwide. The process of diagnosis based on biopsy tissue is nontrivial, time-consuming, and prone to human error, and there may be conflict about the final diagnosis due to interobserver variability. Computer-aided diagnosis systems have been designed and implemented to combat these issues. These systems contribute significantly to increasing the efficiency and accuracy and reducing the cost of diagnosis. Moreover, these systems must perform better so that their determined diagnosis can be more reliable. This research investigates the application of the EfficientNet architecture for the classification of hematoxylin and eosin-stained breast cancer histology images provided by the ICIAR2018 dataset. Specifically, seven EfficientNets were fine-tuned and evaluated on their ability to classify images into four classes: normal, benign, in situ carcinoma, and invasive carcinoma. Moreover, two standard stain normalization techniques, Reinhard and Macenko, were observed to measure the impact of stain normalization on performance. The outcome of this approach reveals that the EfficientNet-B2 model yielded an accuracy and sensitivity of 98.33% using Reinhard stain normalization method on the training images and an accuracy and sensitivity of 96.67% using the Macenko stain normalization method. These satisfactory results indicate that transferring generic features from natural images to medical images through fine-tuning on EfficientNets can achieve satisfactory results.
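
As a rough illustration of the transfer-learning recipe described in this abstract (not the authors' code), a minimal PyTorch sketch for fine-tuning an ImageNet-pretrained EfficientNet-B2 on four histology classes could look as follows. The folder layout, hyperparameters, and the 260-pixel input size are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: data/train/{normal,benign,insitu,invasive}/*.png
train_tf = transforms.Compose([
    transforms.Resize((260, 260)),   # nominal EfficientNet-B2 input size
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# Transfer generic ImageNet features, then replace the classification head (torchvision >= 0.13)
model = models.efficientnet_b2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 4)  # 4 histology classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```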

2

Wang, Jun, Qianying Liu, Haotian Xie, Zhaogang Yang, and Hefeng Zhou. "Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Networks". Cancers 13, no. 4 (February 7, 2021): 661. http://dx.doi.org/10.3390/cancers13040661.

Abstract:
(1) Purpose: To improve the capability of EfficientNet, including developing a cropping method called Random Center Cropping (RCC) to retain the original image resolution and significant features on the images’ center area, reducing the downsampling scale of EfficientNet to facilitate the small resolution images of RPCam datasets, and integrating attention and Feature Fusion (FF) mechanisms with EfficientNet to obtain features containing rich semantic information. (2) Methods: We adopt the Convolutional Neural Network (CNN) to detect and classify lymph node metastasis in breast cancer. (3) Results: Experiments illustrate that our methods significantly boost performance of basic CNN architectures, where the best-performed method achieves an accuracy of 97.96% ± 0.03% and an Area Under the Curve (AUC) of 99.68% ± 0.01% on RPCam datasets, respectively. (4) Conclusions: (1) To our limited knowledge, we are the only study to explore the power of EfficientNet on Metastatic Breast Cancer (MBC) classification, and elaborate experiments are conducted to compare the performance of EfficientNet with other state-of-the-art CNN models. It might provide inspiration for researchers who are interested in image-based diagnosis using Deep Learning (DL). (2) We design a novel data augmentation method named RCC to promote the data enrichment of small resolution datasets. (3) All of our four technological improvements boost the performance of the original EfficientNet.
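
The abstract does not spell out the exact definition of Random Center Cropping; as a loose, hypothetical sketch of the general idea (a crop of random size anchored on the image centre, so that centre content is never lost and no resizing is applied), one might write:

```python
import random
from PIL import Image

def center_anchored_random_crop(img: Image.Image, min_frac: float = 0.7) -> Image.Image:
    """Crop a square of random size, always centred on the image centre.
    This preserves pixel resolution (no resizing) and the central region,
    which is where the informative tissue tends to be in the RPCam patches."""
    w, h = img.size
    side = int(random.uniform(min_frac, 1.0) * min(w, h))
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side))

# Example: aug = center_anchored_random_crop(Image.open("patch.png"))
```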

3

Rizal, Syamsul, Nur Ibrahim, Nor Kumalasari Caesar Pratiwi, Sofia Saidah, and Raden Yunendah Nur Fu'adah. "Deep Learning untuk Klasifikasi Diabetic Retinopathy menggunakan Model EfficientNet". ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 8, no. 3 (August 27, 2020): 693. http://dx.doi.org/10.26760/elkomika.v8i3.693.

Abstract:
Diabetic retinopathy is a disease that can cause blindness as a complication of diabetes mellitus, so early detection is essential to keep the disease from becoming severe. This research designs a system that detects diabetic retinopathy using Deep Learning with a Convolutional Neural Network (CNN). An EfficientNet model is trained on a dataset that was preprocessed beforehand. The resulting system classifies the five severity levels of diabetic retinopathy with an accuracy of 79.8%. Keywords: Diabetic Retinopathy, Deep Learning, CNN, EfficientNet, Diabetic Classification

4

Ushasukhanya, S., et al. "SMART ELECTRICITY CONSERVATION SYSTEM USING EFFICIENTNET". Information Technology in Industry 9, no. 2 (April 12, 2021): 978–83. http://dx.doi.org/10.17762/itii.v9i2.440.

Abstract:
Conserving electrical energy has been one of the important challenges over recent decades. Worldwide, many nations are struggling to conserve the resource and to bridge the gap between its demand and production. Although many measures, such as government acts, replacing existing products with energy-conserving ones, and many solar-based systems, have been introduced and used in practice, the need to preserve the resource persists. Hence, this paper focuses on a novel technique to conserve electricity using deep learning. The system uses Convolutional Neural Networks, specifically EfficientNet as a deep transfer-learning model, to identify and localize humans in CCTV footage. After detecting the presence or absence of a person, the classifier passes its output to an embedded Arduino microcontroller, which enables or disables the electric power supply in the corresponding area. The system achieves an accuracy of 84.2% in detecting humans in the footage and subsequently enabling or disabling the power supply to conserve electricity.

5

Afzaal, Hassan, Aitazaz A. Farooque, Arnold W. Schumann, Nazar Hussain, Andrew McKenzie-Gopsill, Travis Esau, Farhat Abbas, and Bishnu Acharya. "Detection of a Potato Disease (Early Blight) Using Artificial Intelligence". Remote Sensing 13, no. 3 (January 25, 2021): 411. http://dx.doi.org/10.3390/rs13030411.

Abstract:
This study evaluated the potential of using machine vision in combination with deep learning (DL) to identify the early blight disease in real-time for potato production systems. Four fields were selected to collect images (n = 5199) of healthy and diseased potato plants under variable lights and shadow effects. A database was constructed using DL to identify the disease infestation at different stages throughout the growing season. Three convolutional neural networks (CNNs), namely GoogleNet, VGGNet, and EfficientNet, were trained using the PyTorch framework. The disease images were classified into three classes (2-class, 4-class, and 6-class) for accurate disease identification at different growth stages. Results of 2-class CNNs for disease identification revealed the significantly better performance of EfficientNet and VGGNet when compared with the GoogleNet (FScore range: 0.84–0.98). Results of 4-Class CNNs indicated better performance of EfficientNet when compared with other CNNs (FScore range: 0.79–0.94). Results of 6-class CNNs showed similar results as 4-class, with EfficientNet performing the best. GoogleNet, VGGNet, and EfficientNet inference time values ranged from 6.8–8.3, 2.1–2.5, 5.95–6.53 frames per second, respectively, on a Dell Latitude 5580 using graphical processing unit (GPU) mode. Overall, the CNNs and DL frameworks used in this study accurately classified the early blight disease at different stages. Site-specific application of fungicides by accurately identifying the early blight infected plants has a strong potential to reduce agrochemicals use, improve the profitability of potato growers, and lower environmental risks (runoff of fungicides to water bodies).

6

Duong, Linh T., Phuong T. Nguyen, Claudio Di Sipio, and Davide Di Ruscio. "Automated fruit recognition using EfficientNet and MixNet". Computers and Electronics in Agriculture 171 (April 2020): 105326. http://dx.doi.org/10.1016/j.compag.2020.105326.

7

Bazi, Yakoub, Mohamad M. Al Rahhal, Haikel Alhichri, and Naif Alajlan. "Simple Yet Effective Fine-Tuning of Deep CNNs Using an Auxiliary Classification Loss for Remote Sensing Scene Classification". Remote Sensing 11, no. 24 (December 5, 2019): 2908. http://dx.doi.org/10.3390/rs11242908.

Abstract:
The current literature of remote sensing (RS) scene classification shows that state-of-the-art results are achieved using feature extraction methods, where convolutional neural networks (CNNs) (mostly VGG16 with 138.36 M parameters) are used as feature extractors and then simple to complex handcrafted modules are added for additional feature learning and classification, thus coming back to feature engineering. In this paper, we revisit the fine-tuning approach for deeper networks (GoogLeNet and Beyond) and show that it has not been well exploited due to the negative effect of the vanishing gradient problem encountered when transferring knowledge to small datasets. The aim of this work is two-fold. Firstly, we provide best practices for fine-tuning pre-trained CNNs using the root-mean-square propagation (RMSprop) method. Secondly, we propose a simple yet effective solution for tackling the vanishing gradient problem by injecting gradients at an earlier layer of the network using an auxiliary classification loss function. Then, we fine-tune the resulting regularized network by optimizing both the primary and auxiliary losses. As for pre-trained CNNs, we consider in this work inception-based networks and EfficientNets with small weights: GoogLeNet (7 M) and EfficientNet-B0 (5.3 M) and their deeper versions Inception-v3 (23.83 M) and EfficientNet-B3 (12 M), respectively. The former networks have been used previously in the context of RS and yielded low accuracies compared to VGG16, while the latter are new state-of-the-art models. Extensive experimental results on several benchmark datasets reveal clearly that if fine-tuning is done in an appropriate way, it can settle new state-of-the-art results with low computational cost.
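
The general pattern of attaching an auxiliary classifier to an earlier stage and optimizing the sum of both losses with RMSprop can be sketched as below. This is a simplified illustration rather than the authors' code; the split point, the 0.4 auxiliary weight, and the channel counts (taken from torchvision's EfficientNet-B0 layout) are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class AuxLossEfficientNet(nn.Module):
    """EfficientNet-B0 with a hypothetical auxiliary classifier on an earlier stage,
    so gradients are injected before the deepest layers during fine-tuning."""
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
        self.early = backbone.features[:5]   # stem + first four MBConv stages (80 channels out)
        self.late = backbone.features[5:]    # remaining stages + 1x1 conv head (1280 channels out)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.aux_head = nn.Linear(80, num_classes)
        self.main_head = nn.Linear(1280, num_classes)

    def forward(self, x):
        f_early = self.early(x)
        f_late = self.late(f_early)
        aux = self.aux_head(self.pool(f_early).flatten(1))
        main = self.main_head(self.pool(f_late).flatten(1))
        return main, aux

model = AuxLossEfficientNet(num_classes=21)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 21, (4,))
optimizer.zero_grad()
main_logits, aux_logits = model(x)
loss = criterion(main_logits, y) + 0.4 * criterion(aux_logits, y)  # primary + weighted auxiliary loss
loss.backward()
optimizer.step()
```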

8

Carmo, Diedre, Israel Campiotti, Lívia Rodrigues, Irene Fantini, Gustavo Pinheiro, Daniel Moraes, Rodrigo Nogueira, Leticia Rittner, and Roberto Lotufo. "Rapidly deploying a COVID-19 decision support system in one of the largest Brazilian hospitals". Health Informatics Journal 27, no. 3 (July 2021): 146045822110330. http://dx.doi.org/10.1177/14604582211033017.

Abstract:
The COVID-19 pandemic generated research interest in automated models that perform classification and segmentation on medical imaging of COVID-19 patients; however, applications in real-world scenarios are still needed. We describe the development and deployment of a COVID-19 decision support and segmentation system. A partnership with a Brazilian radiologist consortium gave us access to thousands of labeled computed tomography (CT) and X-ray images from São Paulo hospitals. The system used EfficientNet and EfficientDet networks, state-of-the-art convolutional neural networks for natural-image classification and segmentation, in a real-time scalable scenario in communication with a Picture Archiving and Communication System (PACS). Additionally, the system could reject unrelated images using header analysis and classifiers. We achieved CT and X-ray classification accuracies of 0.94 and 0.98, respectively, and Dice coefficients of 0.98 and 0.73 for lung and COVID-finding segmentations, respectively. The median response time was 7 s for X-ray and 4 min for CT.

9

Wang, Jing, Liu Yang, Zhanqiang Huo, Weifeng He, and Junwei Luo. "Multi-Label Classification of Fundus Images With EfficientNet". IEEE Access 8 (2020): 212499–508. http://dx.doi.org/10.1109/access.2020.3040275.

10

Wu, Tao, Hongjin Zhu, Honghui Fan, and Hongyan Zhou. "An improved target detection algorithm based on EfficientNet". Journal of Physics: Conference Series 1983, no. 1 (July 1, 2021): 012017. http://dx.doi.org/10.1088/1742-6596/1983/1/012017.

11

Merino, Ibon, Jon Azpiazu, Anthony Remazeilles, and Basilio Sierra. "3D Convolutional Neural Networks Initialized from Pretrained 2D Convolutional Neural Networks for Classification of Industrial Parts". Sensors 21, no. 4 (February 4, 2021): 1078. http://dx.doi.org/10.3390/s21041078.

Abstract:
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D version. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D version. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing the recognition accuracy. The highest accuracy is obtained with EfficientNetB0 using extrusion with an accuracy of 0.9217, which gives comparable results to state-of-the art methods. We also observed that the transfer approach enabled to improve the accuracy of the Inception ResNet 3D version up to 18% with respect to the score of the 3D approach alone.
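
The abstract does not give the exact transfer procedure; a common way to seed a 3D convolution from 2D pretrained weights (often called inflation, shown here purely as an illustrative sketch) is to repeat the 2D kernel along the new depth axis and rescale it:

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Build a 3D conv whose kernels are the 2D pretrained kernels repeated
    'depth' times along the new axis and divided by depth, so the initial
    response to a constant volume matches the 2D response to a single slice."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       groups=conv2d.groups,
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w3d = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Example: inflate a pretrained-style 2D conv into a 3-deep 3D kernel
conv3d = inflate_conv2d_to_3d(nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), depth=3)
```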

12

Rafi, Taki Hasan. "A holistic comparison between deep learning techniques to determine Covid-19 patients utilizing chest X-Ray images". Engineering and Applied Science Letters 3, no. 4 (December 23, 2020): 85–93. http://dx.doi.org/10.30538/psrp-easl2020.0054.

Abstract:
The novel coronavirus, also called COVID-19, emerged in Wuhan, China in December 2019 and has since spread worldwide. Around 63 million people have been infected so far, with roughly 1,500,000 deaths, and about 600,000 individuals have been infected in Bangladesh alone. As a very new pandemic infection, its diagnosis is challenging for the medical community, and lower-income countries in particular find large-scale testing difficult. The RT-PCR test is the most widely used diagnostic procedure for detecting COVID-19 patients, but automated recognition based on X-ray images can reduce both cost and testing time. In this paper, the author attempts to identify COVID-19 patients from chest X-ray images by running several pre-trained deep learning models on the dataset: a base CNN, ResNet-50, DenseNet-121, and EfficientNet-B4. All outcomes are compared to determine a suitable model for COVID-19 detection from chest X-rays, and the results are evaluated by AUC: EfficientNet-B4 reaches an AUC of 0.997, ResNet-50 0.967, DenseNet-121 0.874, and the base CNN 0.762. EfficientNet-B4 achieves an accuracy of 98.86%.

13

Atila, Ümit, Murat Uçar, Kemal Akyol, and Emine Uçar. "Plant leaf disease classification using EfficientNet deep learning model". Ecological Informatics 61 (March 2021): 101182. http://dx.doi.org/10.1016/j.ecoinf.2020.101182.

14

Anggiratih, Endang, Sri Siswanti, Saly Kurnia Octaviani, and Arum Sari. "Klasifikasi Penyakit Tanaman Padi Menggunakan Model Deep Learning Efficientnet B3 dengan Transfer Learning". Jurnal Ilmiah SINUS 19, no. 1 (January 12, 2021): 75. http://dx.doi.org/10.30646/sinus.v19i1.526.

Abstract:
Rice productivity is influenced by several inhibiting factors, for example disease attacks on rice plants. Slow or inappropriate treatment of diseased plants can lead to crop failure, so that rice production and farmers' income decrease. The symptoms of rice diseases are difficult to distinguish, especially when they are severe. Collaboration with other fields, especially computer science, is needed to classify the diseases automatically so that farmers can act quickly and the spread of disease can be controlled. Image-based disease classification requires good features, and in this research a Deep Learning method, a Convolutional Neural Network with the EfficientNet B3 architecture, is used to extract them. Applying EfficientNet B3 with transfer learning, the classification of brown spot and bacterial leaf disease reached 79.53% accuracy with a loss of 0.012.

15

Chowdhury, Muhammad E. H., Tawsifur Rahman, Amith Khandakar, Mohamed Arselene Ayari, Aftab Ullah Khan, Muhammad Salman Khan, Nasser Al-Emadi, Mamun Bin Ibne Reaz, Mohammad Tariqul Islam, and Sawal Hamid Md Ali. "Automatic and Reliable Leaf Disease Detection Using Deep Learning Techniques". AgriEngineering 3, no. 2 (May 20, 2021): 294–312. http://dx.doi.org/10.3390/agriengineering3020020.

Abstract:
Plants are a major source of food for the world population. Plant diseases contribute to production loss, which can be tackled with continuous monitoring. Manual plant disease monitoring is both laborious and error-prone. Early detection of plant diseases using computer vision and artificial intelligence (AI) can help to reduce the adverse effects of diseases and also overcome the shortcomings of continuous human monitoring. In this work, we propose the use of a deep learning architecture based on a recent convolutional neural network called EfficientNet on 18,161 plain and segmented tomato leaf images to classify tomato diseases. The performance of two segmentation models i.e., U-net and Modified U-net, for the segmentation of leaves is reported. The comparative performance of the models for binary classification (healthy and unhealthy leaves), six-class classification (healthy and various groups of diseased leaves), and ten-class classification (healthy and various types of unhealthy leaves) are also reported. The modified U-net segmentation model showed accuracy, IoU, and Dice score of 98.66%, 98.5%, and 98.73%, respectively, for the segmentation of leaf images. EfficientNet-B7 showed superior performance for the binary classification and six-class classification using segmented images with an accuracy of 99.95% and 99.12%, respectively. Finally, EfficientNet-B4 achieved an accuracy of 99.89% for ten-class classification using segmented images. It can be concluded that all the architectures performed better in classifying the diseases when trained with deeper networks on segmented images. The performance of each of the experimental studies reported in this work outperforms the existing literature.

16

Bansal, Prakhar, Rahul Kumar, and Somesh Kumar. "Disease Detection in Apple Leaves Using Deep Convolutional Neural Network". Agriculture 11, no. 7 (June 30, 2021): 617. http://dx.doi.org/10.3390/agriculture11070617.

Abstract:
The automatic detection of diseases in plants is necessary, as it reduces the tedious work of monitoring large farms and it will detect the disease at an early stage of its occurrence to minimize further degradation of plants. Besides the decline of plant health, a country’s economy is highly affected by this scenario due to lower production. The current approach to identify diseases by an expert is slow and non-optimal for large farms. Our proposed model is an ensemble of pre-trained DenseNet121, EfficientNetB7, and EfficientNet NoisyStudent, which aims to classify leaves of apple trees into one of the following categories: healthy, apple scab, apple cedar rust, and multiple diseases, using its images. Various Image Augmentation techniques are included in this research to increase the dataset size, and subsequentially, the model’s accuracy increases. Our proposed model achieves an accuracy of 96.25% on the validation dataset. The proposed model can identify leaves with multiple diseases with 90% accuracy. Our proposed model achieved a good performance on different metrics and can be deployed in the agricultural domain to identify plant health accurately and timely.

17

Xu, Renjie, Haifeng Lin, Kangjie Lu, Lin Cao, and Yunfei Liu. "A Forest Fire Detection System Based on Ensemble Learning". Forests 12, no. 2 (February 13, 2021): 217. http://dx.doi.org/10.3390/f12020217.

Abstract:
Due to the various shapes, textures, and colors of fires, forest fire detection is a challenging task. The traditional image processing method relies heavily on manmade features, which is not universally applicable to all forest scenarios. In order to solve this problem, the deep learning technology is applied to learn and extract features of forest fires adaptively. However, the limited learning and perception ability of individual learners is not sufficient to make them perform well in complex tasks. Furthermore, learners tend to focus too much on local information, namely ground truth, but ignore global information, which may lead to false positives. In this paper, a novel ensemble learning method is proposed to detect forest fires in different scenarios. Firstly, two individual learners Yolov5 and EfficientDet are integrated to accomplish fire detection process. Secondly, another individual learner EfficientNet is responsible for learning global information to avoid false positives. Finally, detection results are made based on the decisions of three learners. Experiments on our dataset show that the proposed method improves detection performance by 2.5% to 10.9%, and decreases false positives by 51.3%, without any extra latency.

18

Chousangsuntorn, Chousak, Teerawat Tongloy, Santhad Chuwongin, and Siridech Boonsang. "A Deep Learning System for Recognizing and Recovering Contaminated Slider Serial Numbers in Hard Disk Manufacturing Processes". Sensors 21, no. 18 (September 18, 2021): 6261. http://dx.doi.org/10.3390/s21186261.

Abstract:
This paper outlines a system for detecting printing errors and misidentifications on hard disk drive sliders, which may contribute to shipping tracking problems and incorrect product delivery to end users. A deep-learning-based technique is proposed for determining the printed identity of a slider serial number from images captured by a digital camera. Our approach starts with image preprocessing methods that deal with differences in lighting and printing positions and then progresses to deep learning character detection based on the You-Only-Look-Once (YOLO) v4 algorithm and finally character classification. For character classification, four convolutional neural networks (CNN) were compared for accuracy and effectiveness: DarkNet-19, EfficientNet-B0, ResNet-50, and DenseNet-201. Experimenting on almost 15,000 photographs yielded accuracy greater than 99% on four CNN networks, proving the feasibility of the proposed technique. The EfficientNet-B0 network outperformed highly qualified human readers with the best recovery rate (98.4%) and fastest inference time (256.91 ms).

19

Liu, Jiangchuan, Mantao Wang, Lie Bao, and Xiaofan Li. "EfficientNet based recognition of maize diseases by leaf image classification". Journal of Physics: Conference Series 1693 (December 2020): 012148. http://dx.doi.org/10.1088/1742-6596/1693/1/012148.

20

Fudholi, Dhomas Hatta, Yurio Windiatmoko, Nurdi Afrianto, Prastyo Eko Susanto, Magfirah Suyuti, Ahmad Fathan Hidayatullah, and Ridho Rahmadi. "Image Captioning with Attention for Smart Local Tourism using EfficientNet". IOP Conference Series: Materials Science and Engineering 1077, no. 1 (February 1, 2021): 012038. http://dx.doi.org/10.1088/1757-899x/1077/1/012038.

21

Hao, Wangli, Meng Han, Hua Yang, Fei Hao, and Fuzhong Li. "A novel Chinese herbal medicine classification approach based on EfficientNet". Systems Science & Control Engineering 9, no. 1 (January 1, 2021): 304–13. http://dx.doi.org/10.1080/21642583.2021.1901159.

22

Xiao, Yihan, Jingyi Zhou, Yongzhi Yu, and Limin Guo. "Active jamming recognition based on bilinear EfficientNet and attention mechanism". IET Radar, Sonar & Navigation 15, no. 9 (May 2, 2021): 957–68. http://dx.doi.org/10.1049/rsn2.12089.

23

Tian, Z., W. Wang, B. Tian, R. Zhan, and J. Zhang. "RESOLUTION-AWARE NETWORK WITH ATTENTION MECHANISMS FOR REMOTE SENSING OBJECT DETECTION". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 909–16. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-909-2020.

Abstract:
Abstract. Nowadays, deep-learning-based object detection methods are more and more broadly applied to the interpretation of optical remote sensing image. Although these methods can obtain promising results in general conditions, the designed networks usually ignore the characteristics of remote sensing images, such as large image resolution and uneven distribution of object location. In this paper, an effective detection method based on the convolutional neural network is proposed. First, in order to make the designed network more suitable for the image resolution, EfficientNet is incorporated into the detection framework as the backbone network. EfficientNet employs the compound scaling method to adjust the depth and width of the network, thereby meeting the needs of different resolutions of input images. Then, the attention mechanism is introduced into the proposed method to improve the extracted feature maps. The attention mechanism makes the network more focused on the object areas while reducing the influence of the background areas, so as to reduce the influence of uneven distribution. Comprehensive evaluations on a public object detection dataset demonstrate the effectiveness of the proposed method.
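
For readers unfamiliar with the compound scaling mentioned here, the rule from the original EfficientNet paper (Tan and Le, 2019) scales network depth d, width w, and input resolution r jointly with a single coefficient φ:

```latex
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\qquad \text{subject to } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma \ge 1
```

Increasing φ by one therefore roughly doubles the FLOPs while keeping the depth/width/resolution balance found by the original grid search, which is what lets the backbone be matched to different input resolutions.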

24

Yamamoto, Norio, Shintaro Sukegawa, Akira Kitamura, Ryosuke Goto, Tomoyuki Noda, Keisuke Nakano, Kiyofumi Takabatake, et al. "Deep Learning for Osteoporosis Classification Using Hip Radiographs and Patient Clinical Covariates". Biomolecules 10, no. 11 (November 10, 2020): 1534. http://dx.doi.org/10.3390/biom10111534.

Abstract:
This study considers the use of deep learning to diagnose osteoporosis from hip radiographs, and whether adding clinical data improves diagnostic performance over the image mode alone. For objective labeling, we collected a dataset containing 1131 images from patients who underwent both skeletal bone mineral density measurement and hip radiography at a single general hospital between 2014 and 2019. Osteoporosis was assessed from the hip radiographs using five convolutional neural network (CNN) models. We also investigated ensemble models with clinical covariates added to each CNN. The accuracy, precision, recall, specificity, negative predictive value (npv), F1 score, and area under the curve (AUC) score were calculated for each network. In the evaluation of the five CNN models using only hip radiographs, GoogleNet and EfficientNet b3 exhibited the best accuracy, precision, and specificity. Among the five ensemble models, EfficientNet b3 exhibited the best accuracy, recall, npv, F1 score, and AUC score when patient variables were included. The CNN models diagnosed osteoporosis from hip radiographs with high accuracy, and their performance improved further with the addition of clinical covariates from patient records.

25

Marques, Gonçalo, Deevyankar Agarwal, and Isabel de la Torre Díez. "Automated medical diagnosis of COVID-19 through EfficientNet convolutional neural network". Applied Soft Computing 96 (November 2020): 106691. http://dx.doi.org/10.1016/j.asoc.2020.106691.

26

Balasubramaniam, Vivekanadam. "Facemask Detection Algorithm on COVID Community Spread Control using EfficientNet Algorithm". June 2021 3, no. 2 (June 28, 2021): 110–22. http://dx.doi.org/10.36548/jscp.2021.2.005.

Abstract:
Face masks have become mandatory in COVID-affected communities across the world. In real-life situations, however, checking each individual's compliance with the face-mask rule is a difficult task. At the same time, automation systems play a widespread role in automating different applications for the human community, which motivates the development of a dependable automated method for monitoring face-mask compliance. Deep learning algorithms have recently emerged as a fast-growing approach for large-scale analysis and detection tasks. This paper therefore proposes a deep learning based face-mask detection process to automate the human effort involved in monitoring. The work uses an openly available face-mask detection dataset with 7553 images for training and verification, built on a CNN-driven EfficientNet architecture, and reaches an accuracy of about 97.12%.

27

Yudistira, Novanto, Agus Wahyu Widodo, and Bayu Rahayudi. "Deteksi Covid-19 pada Citra Sinar-X Dada Menggunakan Deep Learning yang Efisien". Jurnal Teknologi Informasi dan Ilmu Komputer 7, no. 6 (December 2, 2020): 1289. http://dx.doi.org/10.25126/jtiik.2020763651.

Abstract:
COVID-19 detection is an important step in identifying suspected COVID-19 patients early so that further steps can be taken. One way of detecting it is through chest X-ray images. However, besides an algorithm that produces high accuracy, lightweight computation is needed so that the model can be applied in a detection device. Deep CNN models can detect accurately but tend to require large amounts of memory. A CNN with fewer parameters saves storage and memory, so it can run in real time either on a detection device or in a cloud-based decision-making system, and it can also be deployed on FPGAs and other hardware with limited memory capacity. To produce accurate COVID-19 detection on X-ray images with lightweight computation, we propose a small but reliable CNN architecture that uses a channel-shuffle technique, ShuffleNet. In this study, we tested and compared the capabilities of ShuffleNet, EfficientNet, and ResNet50, because they have fewer parameters than typical deep CNNs such as VGGNet or a fully convolutional FullConv network with robust detection capability. We used 1125 X-ray images and achieved an accuracy of 86.93%, with 18.55 times fewer model parameters than EfficientNet and 22.36 times fewer than ResNet50, for detecting three categories (COVID-19, pneumonia, and normal) under 5-fold cross-validation. The memory required by each CNN architecture to perform one detection is linearly related to its number of parameters: ShuffleNet only requires 0.646 GB of GPU memory, or 0.43 times that of ResNet50, 0.2 times that of EfficientNet, and 0.53 times that of FullConv. Furthermore, ShuffleNet performs the fastest detection, at 0.0027 seconds.

28

Alhichri, Haikel, Asma S. Alswayed, Yakoub Bazi, Nassim Ammour, and Naif A. Alajlan. "Classification of Remote Sensing Images Using EfficientNet-B3 CNN Model With Attention". IEEE Access 9 (2021): 14078–94. http://dx.doi.org/10.1109/access.2021.3051085.

29

Choi, Soohyun, Songho Yun, and Byeongtae Ahn. "Implementation of Automated Baby Monitoring: CCBeBe". Sustainability 12, no. 6 (March 23, 2020): 2513. http://dx.doi.org/10.3390/su12062513.

Abstract:
An automated baby monitoring service CCBeBe (CCtv Bebe) monitors infants’ lying posture and crying based on AI and provides parents-to-baby video streaming and voice transmission. Besides, parents can get a three-minute daily video diary made by detecting the baby’s emotion such as happiness. These main features are based on OpenPose, EfficientNet, WebRTC, and Facial-Expression-Recognition.Pytorch. The service is integrated into an Android application and works on two paired smartphones, with lowered hardware dependence.

30

Chowdhury, Nihad Karim, Muhammad Ashad Kabir, Md Muhtadir Rahman, and Noortaz Rezoana. "ECOVNet: a highly effective ensemble based deep learning model for detecting COVID-19". PeerJ Computer Science 7 (May 26, 2021): e551. http://dx.doi.org/10.7717/peerj-cs.551.

Abstract:
The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, in this paper, we propose an ensemble of Convolutional Neural Network (CNN) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we have used one of the largest open-access chest X-ray data sets named COVIDx containing three classes—COVID-19, normal, and pneumonia. For feature extraction, we have applied an effective CNN structure, namely EfficientNet, with ImageNet pre-training weights. The generated features are transferred into custom fine-tuned top layers followed by a set of model snapshots. The predictions of the model snapshots (which are created during a single training) are consolidated through two ensemble strategies, i.e., hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight areas that distinguish classes, thereby enhancing the understanding of primal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet model outperforms the state-of-the-art approaches and significantly improves detection performance with 100% recall for COVID-19 and overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 disease, and thus, underpin a fully automated and efficacious COVID-19 detection system.
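
As a small, generic illustration of the two ensemble strategies named here (not the ECOVNet code), soft ensembling averages the snapshots' softmax outputs while hard ensembling majority-votes their class predictions; the array below is a made-up example with three snapshots, two samples, and three classes.

```python
import numpy as np

def soft_ensemble(probs: np.ndarray) -> np.ndarray:
    """probs: (n_snapshots, n_samples, n_classes) softmax outputs -> class of the averaged distribution."""
    return probs.mean(axis=0).argmax(axis=1)

def hard_ensemble(probs: np.ndarray) -> np.ndarray:
    """Majority vote over each snapshot's individual class predictions."""
    votes = probs.argmax(axis=2)                      # (n_snapshots, n_samples)
    n_classes = probs.shape[2]
    return np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes.T])

p = np.array([[[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]],
              [[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]],
              [[0.2, 0.5, 0.3], [0.1, 0.2, 0.7]]])
print(soft_ensemble(p), hard_ensemble(p))   # both yield [0 2] for this toy input
```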

31

Abedalla, Ayat, Malak Abdullah, Mahmoud Al-Ayyoub, and Elhadj Benkhelifa. "Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures". PeerJ Computer Science 7 (June 29, 2021): e607. http://dx.doi.org/10.7717/peerj-cs.607.

Abstract:
Medical imaging refers to visualization techniques to provide valuable information about the internal structures of the human body for clinical applications, diagnosis, treatment, and scientific research. Segmentation is one of the primary methods for analyzing and processing medical images, which helps doctors diagnose accurately by providing detailed information on the body’s required part. However, segmenting medical images faces several challenges, such as requiring trained medical experts and being time-consuming and error-prone. Thus, it appears necessary for an automatic medical image segmentation system. Deep learning algorithms have recently shown outstanding performance for segmentation tasks, especially semantic segmentation networks that provide pixel-level image understanding. By introducing the first fully convolutional network (FCN) for semantic image segmentation, several segmentation networks have been proposed on its basis. One of the state-of-the-art convolutional networks in the medical image field is U-Net. This paper presents a novel end-to-end semantic segmentation model, named Ens4B-UNet, for medical images that ensembles four U-Net architectures with pre-trained backbone networks. Ens4B-UNet utilizes U-Net’s success with several significant improvements by adapting powerful and robust convolutional neural networks (CNNs) as backbones for U-Nets encoders and using the nearest-neighbor up-sampling in the decoders. Ens4B-UNet is designed based on the weighted average ensemble of four encoder-decoder segmentation models. The backbone networks of all ensembled models are pre-trained on the ImageNet dataset to exploit the benefit of transfer learning. For improving our models, we apply several techniques for training and predicting, including stochastic weight averaging (SWA), data augmentation, test-time augmentation (TTA), and different types of optimal thresholds. We evaluate and test our models on the 2019 Pneumothorax Challenge dataset, which contains 12,047 training images with 12,954 masks and 3,205 test images. Our proposed segmentation network achieves a 0.8608 mean Dice similarity coefficient (DSC) on the test set, which is among the top one-percent systems in the Kaggle competition.
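
Of the prediction-time tricks listed (SWA, augmentation, TTA, threshold tuning), test-time augmentation is the simplest to show compactly. A minimal sketch, assuming a binary segmentation model that returns per-pixel logits and using only horizontal and vertical flips, might be:

```python
import torch

def tta_predict(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Average sigmoid masks over flipped views, un-flipping each prediction
    so the masks align spatially before averaging."""
    model.eval()
    with torch.no_grad():
        probs = [
            torch.sigmoid(model(x)),
            torch.flip(torch.sigmoid(model(torch.flip(x, dims=[-1]))), dims=[-1]),
            torch.flip(torch.sigmoid(model(torch.flip(x, dims=[-2]))), dims=[-2]),
        ]
    return torch.stack(probs).mean(dim=0)

# Usage: mask = tta_predict(segmentation_model, batch) > 0.5   (threshold chosen separately)
```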

32

Hao, Guoyao, and Yifei Li. "Bone Age Estimation with X-ray Images Based on EfficientNet Pre-training Model". Journal of Physics: Conference Series 1827, no. 1 (March 1, 2021): 012082. http://dx.doi.org/10.1088/1742-6596/1827/1/012082.

33

Wu, Lin, Jie Ma, Yuehua Zhao, and Hong Liu. "Apple Detection in Complex Scene Using the Improved YOLOv4 Model". Agronomy 11, no. 3 (March 4, 2021): 476. http://dx.doi.org/10.3390/agronomy11030476.

Abstract:
To enable the apple picking robot to quickly and accurately detect apples under the complex background in orchards, we propose an improved You Only Look Once version 4 (YOLOv4) model and data augmentation methods. Firstly, the crawler technology is utilized to collect pertinent apple images from the Internet for labeling. For the problem of insufficient image data caused by the random occlusion between leaves, in addition to traditional data augmentation techniques, a leaf illustration data augmentation method is proposed in this paper to accomplish data augmentation. Secondly, due to the large size and calculation of the YOLOv4 model, the backbone network Cross Stage Partial Darknet53 (CSPDarknet53) of the YOLOv4 model is replaced by EfficientNet, and convolution layer (Conv2D) is added to the three outputs to further adjust and extract the features, which make the model lighter and reduce the computational complexity. Finally, the apple detection experiment is performed on 2670 expanded samples. The test results show that the EfficientNet-B0-YOLOv4 model proposed in this paper has better detection performance than YOLOv3, YOLOv4, and Faster R-CNN with ResNet, which are state-of-the-art apple detection model. The average values of Recall, Precision, and F1 reach 97.43%, 95.52%, and 96.54% respectively, the average detection time per frame of the model is 0.338 s, which proves that the proposed method can be well applied in the vision system of picking robots in the apple industry.

34

Guo, Xudong, Lulu Zhang, Youguo Hao, Linqi Zhang, Zhang Liu, and Jiannan Liu. "Multiple abnormality classification in wireless capsule endoscopy images based on EfficientNet using attention mechanism". Review of Scientific Instruments 92, no. 9 (September 1, 2021): 094102. http://dx.doi.org/10.1063/5.0054161.

35

Khan, Irfan Ullah, Nida Aslam, Talha Anwar, Sumayh S. Aljameel, Mohib Ullah, Rafiullah Khan, Abdul Rehman, and Nadeem Akhtar. "Remote Diagnosis and Triaging Model for Skin Cancer Using EfficientNet and Extreme Gradient Boosting". Complexity 2021 (September 9, 2021): 1–13. http://dx.doi.org/10.1155/2021/5591614.

Abstract:
Due to the successful application of machine learning techniques in several fields, automated diagnosis system in healthcare has been increasing at a high rate. The aim of the study is to propose an automated skin cancer diagnosis and triaging model and to explore the impact of integrating the clinical features in the diagnosis and enhance the outcomes achieved by the literature study. We used an ensemble-learning framework, consisting of the EfficientNetB3 deep learning model for skin lesion analysis and Extreme Gradient Boosting (XGB) for clinical data. The study used PAD-UFES-20 data set consisting of six unbalanced categories of skin cancer. To overcome the data imbalance, we used data augmentation. Experiments were conducted using skin lesion merely and the combination of skin lesion and clinical data. We found that integration of clinical data with skin lesions enhances automated diagnosis accuracy. Moreover, the proposed model outperformed the results achieved by the previous study for the PAD-UFES-20 data set with an accuracy of 0.78, precision of 0.89, recall of 0.86, and F1 of 0.88. In conclusion, the study provides an improved automated diagnosis system to aid the healthcare professional and patients for skin cancer diagnosis and remote triaging.
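
A hedged sketch of the kind of late fusion described here, image-branch class probabilities combined with an XGBoost model trained on clinical covariates, could look like the following; the feature shapes, the six-class setup, the equal 0.5/0.5 weighting, and the random stand-in arrays are all assumptions rather than details from the paper.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
p_img = rng.dirichlet(np.ones(6), size=200)   # stand-in for EfficientNetB3 softmax outputs, 6 lesion classes
X_clin = rng.normal(size=(200, 8))            # stand-in clinical covariates (age, site, itch, ...)
y = rng.integers(0, 6, size=200)

xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
xgb.fit(X_clin, y)
p_clin = xgb.predict_proba(X_clin)

p_final = 0.5 * p_img + 0.5 * p_clin          # simple probability-level fusion of the two branches
predictions = p_final.argmax(axis=1)
```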

36

Fudholi, Dhomas Hatta, Septia Rani, Dimastyo Muhaimin Arifin, and Mochamad Rezky Satyatama. "Deep Learning-based Mobile Tourism Recommender System". Scientific Journal of Informatics 8, no. 1 (May 10, 2021): 111–18. http://dx.doi.org/10.15294/sji.v8i1.29262.

Abstract:
A tourism recommendation system is a crucial solution to help tourists discover more diverse tourism destinations. A content-based approach in a recommender system can be an effective way of recommending items because it looks at the user's preference histories. For a cold-start problem in the tourism domain, where rating data or past access may not be found, we can treat the user's past-travel-photos as the histories data. Besides, the use of photos as an input makes the user experience seamless and more effortless. The current development in Artificial Intelligence-based services enable the possibilities to implement such experience. This research developed a Deep Learning-based mobile tourism recommender system that gives recommendations on local tourism destinations based on the user's favorite traveling photos. To provide a recommendation, we use cosine similarity to measure the similarity score between one's pictures and tourism destination's galleries through their label tag vectors. The label tag is inferred using an image classifier model that runs from a mobile user device through Tensorflow Lite. There are 40 label tags, which refer to local tourism destination categories, activities, and objects. The model is trained using state-of-the-art mobile deep learning architecture EfficientNet-Lite. We did several experiments and got an accuracy result of more than 85% on average, using EfficientNet-Lite as the base architecture. The implementation of the system as an Android application has been proved to give an excellent recommendation with Mean Absolute Percentage Error (MAPE) equals to 5%.
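
The cosine-similarity matching step described here is straightforward to sketch. The 40-dimensional tag vectors below are hypothetical stand-ins for the label counts inferred from a user's photos and from a destination's gallery.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two label-tag count vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

user_tags = np.zeros(40)
user_tags[[3, 7, 12]] = [5, 2, 1]     # tag counts inferred from the user's travel photos
place_tags = np.zeros(40)
place_tags[[3, 7, 30]] = [4, 3, 2]    # tag counts inferred from one destination's gallery

score = cosine_similarity(user_tags, place_tags)   # destinations are ranked by this score
```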

37

Zhang, Pan, Ling Yang, and Daoliang Li. "EfficientNet-B4-Ranger: A novel method for greenhouse cucumber disease recognition under natural complex environment". Computers and Electronics in Agriculture 176 (September 2020): 105652. http://dx.doi.org/10.1016/j.compag.2020.105652.

38

Yin, Xuqiang, Dihua Wu, Yuying Shang, Bo Jiang, and Huaibo Song. "Using an EfficientNet-LSTM for the recognition of single Cow's motion behaviours in a complicated environment". Computers and Electronics in Agriculture 177 (October 2020): 105707. http://dx.doi.org/10.1016/j.compag.2020.105707.

39

Yang, Ling, Huihui Yu, Yuelan Cheng, Siyuan Mei, Yanqing Duan, Daoliang Li, and Yingyi Chen. "A dual attention network based on efficientNet-B2 for short-term fish school feeding behavior analysis in aquaculture". Computers and Electronics in Agriculture 187 (August 2021): 106316. http://dx.doi.org/10.1016/j.compag.2021.106316.

40

Canayaz, Murat. "C+EffxNet: A novel hybrid approach for COVID-19 diagnosis on CT images based on CBAM and EfficientNet". Chaos, Solitons & Fractals 151 (October 2021): 111310. http://dx.doi.org/10.1016/j.chaos.2021.111310.

41

Aiman, Aisha, Yao Shen, Malika Bendechache, Irum Inayat, and Teerath Kumar. "AUDD: Audio Urdu Digits Dataset for Automatic Audio Urdu Digit Recognition". Applied Sciences 11, no. 19 (September 23, 2021): 8842. http://dx.doi.org/10.3390/app11198842.

Abstract:
The ongoing development of audio datasets for numerous languages has spurred research activities towards designing smart speech recognition systems. A typical speech recognition system can be applied in many emerging applications, such as smartphone dialing, airline reservations, and automatic wheelchairs, among others. Urdu is a national language of Pakistan and is also widely spoken in many other South Asian countries (e.g., India, Afghanistan). Therefore, we present a comprehensive dataset of spoken Urdu digits ranging from 0 to 9. Our dataset has 25,518 sound samples that are collected from 740 participants. To test the proposed dataset, we apply different existing classification algorithms on the datasets including Support Vector Machine (SVM), Multilayer Perceptron (MLP), and flavors of the EfficientNet. These algorithms serve as a baseline. Furthermore, we propose a convolutional neural network (CNN) for audio digit classification. We conduct the experiment using these networks, and the results show that the proposed CNN is efficient and outperforms the baseline algorithms in terms of classification accuracy.

42

Pham, Minh Tuan, Jong-Myon Kim, and Cheol Hong Kim. "Intelligent Fault Diagnosis Method Using Acoustic Emission Signals for Bearings under Complex Working Conditions". Applied Sciences 10, no. 20 (October 12, 2020): 7068. http://dx.doi.org/10.3390/app10207068.

Abstract:
Recent convolutional neural network (CNN) models in image processing can be used as feature-extraction methods to achieve high accuracy as well as automatic processing in bearing fault diagnosis. The combination of deep learning methods with appropriate signal representation techniques has proven its efficiency compared with traditional algorithms. Vital electrical machines require a strict monitoring system, and the accuracy of these machines’ monitoring systems takes precedence over any other factors. In this paper, we propose a new method for diagnosing bearing faults under variable shaft speeds using acoustic emission (AE) signals. Our proposed method predicts not only bearing fault types but also the degradation level of bearings. In the proposed technique, AE signals acquired from bearings are represented by spectrograms to obtain as much information as possible in the time–frequency domain. Feature extraction and classification processes are performed by deep learning using EfficientNet and a stochastic line-search optimizer. According to our various experiments, the proposed method can provide high accuracy and robustness under noisy environments compared with existing AE-based bearing fault diagnosis methods.
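
The time-frequency representation step (turning an AE signal into a spectrogram image for the CNN) can be sketched with SciPy; the sampling rate, window length, and synthetic burst below are assumptions used only for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1_000_000                                  # hypothetical AE sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)
ae = np.sin(2 * np.pi * 150_000 * t) + 0.1 * np.random.randn(t.size)   # stand-in AE burst plus noise

f, seg_t, Sxx = spectrogram(ae, fs=fs, nperseg=1024, noverlap=512)
log_spec = 10 * np.log10(Sxx + 1e-12)           # dB-scaled time-frequency image
# log_spec can then be resized/stacked into a 3-channel image and fed to an EfficientNet classifier.
```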

43

Huang, Zhiwen, Quan Zhou, Xingxing Zhu, and Xuming Zhang. "Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification". Sensors 21, no. 3 (January 24, 2021): 764. http://dx.doi.org/10.3390/s21030764.

Abstract:
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. The light-weighted CNNs are easy to train but they usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we have proposed a novel batch similarity-based triplet loss to guide the CNNs to learn the weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of training data. Reducing the proposed loss can increase the similarity among images of the same category and reduce the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To appreciate the performance of the proposed loss, some experiments have been done on chest X-ray images and skin rash images to compare it with several losses based on such popular light-weighted CNN models as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.

44

Steinbuss, Georg, Mark Kriegsmann, Christiane Zgorzelski, Alexander Brobeil, Benjamin Goeppert, Sascha Dietrich, Gunhild Mechtersheimer, and Katharina Kriegsmann. "Deep Learning for the Classification of Non-Hodgkin Lymphoma on Histopathological Images". Cancers 13, no. 10 (May 17, 2021): 2419. http://dx.doi.org/10.3390/cancers13102419.

Abstract:
The diagnosis and the subtyping of non-Hodgkin lymphoma (NHL) are challenging and require expert knowledge, great experience, thorough morphological analysis, and often additional expensive immunohistological and molecular methods. As these requirements are not always available, supplemental methods supporting morphological-based decision making and potentially entity subtyping are required. Deep learning methods have been shown to classify histopathological images with high accuracy, but data on NHL subtyping are limited. After annotation of histopathological whole-slide images and image patch extraction, we trained and optimized an EfficientNet convolutional neuronal network algorithm on 84,139 image patches from 629 patients and evaluated its potential to classify tumor-free reference lymph nodes, nodal small lymphocytic lymphoma/chronic lymphocytic leukemia, and nodal diffuse large B-cell lymphoma. The optimized algorithm achieved an accuracy of 95.56% on an independent test set including 16,960 image patches from 125 patients after the application of quality controls. Automatic classification of NHL is possible with high accuracy using deep learning on histopathological images and routine diagnostic applications should be pursued.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Alamgunawan, Suriani, und Yosi Kristian. „Klasifikasi Tekstur Serat Kayu pada Citra Mikroskopik Veneer Memanfaatkan Deep Convolutional Neural Network“. Journal of Intelligent System and Computation 2, Nr. 1 (15.07.2021): 06–11. http://dx.doi.org/10.52985/insyst.v2i1.152.

Annotation:
The Convolutional Neural Network (CNN) is one of the most frequently used deep learning methods for classification, particularly of images. It is known for its depth and its ability to learn its own parameters, which allows a CNN to explore images extensively. The aim of this study is to investigate the classification of wood grain texture in microscopic veneer images with a CNN. The CNN models are built from MBConv blocks, and the layer architecture is designed following EfficientNet, with the goal of achieving high accuracy with a small number of parameters. Four CNN architecture variants are designed: RGB without contrast stretching, RGB with contrast stretching, grayscale without contrast stretching, and grayscale with contrast stretching. The experiments cover training, validation and testing for each image input on every architecture variant, using softmax to determine the predicted class. An SGD optimizer with a learning rate of 1e-1 is used for optimization. The results are evaluated by computing accuracy and error using the F1-score. The RGB channels without contrast stretching as the input images yielded the best test results.
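Contrast stretching of the kind compared in the four variants can be implemented as a percentile-based rescaling; the NumPy sketch below uses assumed 2nd and 98th percentile limits, which are not taken from the paper.

    import numpy as np

    def contrast_stretch(image, low_pct=2, high_pct=98):
        """Rescale intensities between two percentiles to the full 0-255 range."""
        lo, hi = np.percentile(image, (low_pct, high_pct))
        stretched = (image.astype(np.float32) - lo) * 255.0 / (hi - lo + 1e-8)
        return np.clip(stretched, 0, 255).astype(np.uint8)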
46

Salas, Joaquín, Pablo Vera, Marivel Zea-Ortiz, Elio-Atenogenes Villaseñor, Dagoberto Pulido and Alejandra Figueroa. „Fine-Grained Large-Scale Vulnerable Communities Mapping via Satellite Imagery and Population Census Using Deep Learning“. Remote Sensing 13, No. 18 (10.09.2021): 3603. http://dx.doi.org/10.3390/rs13183603.

Annotation:
One of the challenges in the fight against poverty is the precise localization and assessment of vulnerable communities’ sprawl. The characterization of vulnerability is traditionally accomplished using nationwide census exercises, a burdensome process that requires field visits by trained personnel. Unfortunately, most countrywide census exercises are conducted only sporadically, making it difficult to track the short-term effect of policies to reduce poverty. This paper introduces a definition of vulnerability following UN-Habitat criteria, assesses different CNN machine learning architectures, and establishes a mapping between satellite images and survey data. Starting with the information corresponding to the 2,178,508 residential blocks recorded in the 2010 Mexican census and multispectral Landsat-7 images, multiple CNN architectures are explored. The best performance is obtained with EfficientNet-B3, achieving areas under the ROC and Precision-Recall curves of 0.9421 and 0.9457, respectively. This article shows that publicly available information, in the form of census data and satellite images, along with standard CNN architectures, may be employed as a stepping stone for the countrywide characterization of vulnerability at the residential block level.
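The two reported metrics, the areas under the ROC and Precision-Recall curves, can be computed for any scored classifier with scikit-learn, as in the minimal sketch below (the labels and scores shown are placeholders).

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    # Placeholder block labels (vulnerable or not) and predicted scores;
    # in practice these would come from the trained classifier.
    y_true = np.array([0, 0, 1, 1, 1, 0])
    y_score = np.array([0.1, 0.4, 0.8, 0.65, 0.9, 0.3])
    roc_auc = roc_auc_score(y_true, y_score)              # area under the ROC curve
    pr_auc = average_precision_score(y_true, y_score)     # average precision approximates the PR-curve area
    print(f"ROC AUC: {roc_auc:.4f}, PR AUC: {pr_auc:.4f}")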
47

Ismail, Aya, Marwa Elpeltagy, Mervat Zaki and Kamal A. ElDahshan. „Deepfake video detection: YOLO-Face convolution recurrent approach“. PeerJ Computer Science 7 (21.09.2021): e730. http://dx.doi.org/10.7717/peerj-cs.730.

Annotation:
Recently, deepfake techniques for swapping faces have been spreading, allowing the easy creation of hyper-realistic fake videos. Detecting the authenticity of a video has become increasingly critical because of the potential negative impact on the world. Here, a new approach, You Only Look Once Convolution Recurrent Neural Networks (YOLO-CRNNs), is introduced to detect deepfake videos. The YOLO-Face detector detects face regions in each frame of the video, whereas a fine-tuned EfficientNet-B5 is used to extract the spatial features of these faces. These features are fed as a batch of input sequences into a Bidirectional Long Short-Term Memory (Bi-LSTM) to extract the temporal features. The new scheme is then evaluated on a new large-scale dataset, CelebDF-FaceForensics++ (c23), based on a combination of two popular datasets, FaceForensics++ (c23) and Celeb-DF. It achieves an Area Under the Receiver Operating Characteristic Curve (AUROC) of 89.35%, 89.38% accuracy, 83.15% recall, 85.55% precision, and an 84.33% F1-measure for the pasting data approach. The experimental analysis confirms the superiority of the proposed method compared to state-of-the-art methods.
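The spatial-then-temporal pipeline can be sketched as a per-frame CNN feature extractor feeding a bidirectional LSTM. The PyTorch module below is only an illustration: it substitutes EfficientNet-B0 for the paper's fine-tuned B5, omits the YOLO-Face detection step, and uses an arbitrary hidden size.

    import torch
    import torch.nn as nn
    from torchvision import models

    class FrameSequenceClassifier(nn.Module):
        """Per-frame CNN features followed by a Bi-LSTM and a real/fake logit."""
        def __init__(self, hidden=256):
            super().__init__()
            backbone = models.efficientnet_b0(weights=None)   # B5 in the paper; B0 here for brevity
            self.features = nn.Sequential(backbone.features, backbone.avgpool, nn.Flatten())
            self.lstm = nn.LSTM(1280, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, clips):                 # clips: (batch, frames, 3, H, W) cropped faces
            b, t = clips.shape[:2]
            feats = self.features(clips.flatten(0, 1)).view(b, t, -1)
            seq, _ = self.lstm(feats)             # temporal features over the frame sequence
            return self.head(seq[:, -1])          # one authenticity logit per clip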
48

Gang, Sumyung, Ndayishimiye Fabrice, Daewon Chung and Joonjae Lee. „Character Recognition of Components Mounted on Printed Circuit Board Using Deep Learning“. Sensors 21, No. 9 (21.04.2021): 2921. http://dx.doi.org/10.3390/s21092921.

Annotation:
As the size of components mounted on printed circuit boards (PCBs) decreases, defect detection becomes more important. The first step in an inspection involves recognizing and inspecting characters printed on parts attached to the PCB. In addition, since industrial fields that produce PCBs can change very rapidly, the style of the collected data may vary between collection sites and collection periods. Therefore, flexible training data that can respond to all fields and time periods are needed. In this paper, large amounts of character data on PCB components were obtained and analyzed in depth. In addition, we propose a method of recognizing characters by constructing a dataset that is robust to various fonts and environmental changes using a large amount of data. Moreover, a coreset capable of evaluating an effective deep learning model and a base set built with n-pick sampling, capable of accommodating a continuously increasing dataset, are proposed. With the existing original data, the EfficientNet-B0 model showed an accuracy of 97.741%. With the proposed approach, the accuracy increased to 98.274% for the coreset of 8000 images per class. In particular, the accuracy was 98.921% for the base set with only 1900 images per class.
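The abstract does not specify how n-pick sampling works internally, so the snippet below is only a hypothetical illustration of the general idea of keeping at most n examples per character class when building a smaller set.

    import random
    from collections import defaultdict

    def pick_n_per_class(samples, labels, n, seed=0):
        """Hypothetical balanced subsampling: keep at most n samples per class."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for sample, label in zip(samples, labels):
            by_class[label].append(sample)
        subset = []
        for label, items in by_class.items():
            rng.shuffle(items)                              # random pick within each class
            subset.extend((item, label) for item in items[:n])
        return subset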
49

Xu, Yanghuan, Dongcheng Wang, Bowei Duan, Huaxin Yu and Hongmin Liu. „Copper Strip Surface Defect Detection Model Based on Deep Convolutional Neural Network“. Applied Sciences 11, No. 19 (25.09.2021): 8945. http://dx.doi.org/10.3390/app11198945.

Annotation:
Automatic surface defect detection is of great significance for copper strip production. Traditional machine vision approaches to automatic surface defect detection of copper strip require handcrafted feature design, which involves a long development cycle and offers poor versatility and robustness. Deep learning can effectively solve these problems. Therefore, based on deep convolutional neural networks and a transfer learning strategy, an intelligent recognition model for surface defects of copper strip is established in this paper. First, the defects were classified according to mechanism and morphology, and a surface defect dataset of copper strip was established using image acquisition and image augmentation. Then, a two-class discrimination model was established to achieve accurate discrimination between defect-free and defect images. On this basis, four CNN models were adopted for the recognition of defect images. Among these models, the EfficientNet model trained with the transfer learning strategy had the best overall performance, with a recognition accuracy of 93.05%. Finally, the interpretability and deficiencies of the model were analysed using class activation maps and the confusion matrix, which point toward directions for further optimization in future research.
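A common way to realize the transfer learning strategy mentioned here is to start from an ImageNet-pretrained network, freeze the convolutional backbone, and train a new head for the defect classes. The sketch below does this with torchvision's EfficientNet-B0; the class count and learning rate are placeholders rather than values from the paper.

    import torch
    from torchvision import models

    NUM_DEFECT_CLASSES = 4                                    # placeholder class count
    model = models.efficientnet_b0(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
    for param in model.features.parameters():                 # freeze convolutional features
        param.requires_grad = False
    model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, NUM_DEFECT_CLASSES)
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)  # train only the new head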
50

Grossman, Rachel, Oz Haim, Shani Abramov, Ben Shofty and Moran Artzi. „Differentiating Small-Cell Lung Cancer From Non-Small-Cell Lung Cancer Brain Metastases Based on MRI Using Efficientnet and Transfer Learning Approach“. Technology in Cancer Research & Treatment 20 (01.01.2021): 153303382110049. http://dx.doi.org/10.1177/15330338211004919.

Annotation:
Differentiating small-cell lung cancer (SCLC) from non-small-cell lung cancer (NSCLC) brain metastases is crucial due to the different clinical behaviors of the two tumor types. We propose a deep learning and transfer learning approach based on conventional magnetic resonance imaging (MRI) for the non-invasive classification of SCLC vs. NSCLC brain metastases. Sixty-nine patients with brain metastasis of lung cancer origin were included; of them, 44 patients had NSCLC and 25 patients had SCLC. Classification was performed with the EfficientNet architecture on cropped images of the lesion areas, based on post-contrast T1-weighted, T2-weighted and FLAIR imaging input data. Evaluation of the model was carried out in a 5-fold cross-validation manner, based on accuracy, precision, recall, F1 score and area under the receiver operating characteristic curve. The best classification results were obtained with multiparametric MRI input data (T1WI+c+FLAIR+T2WI), with a mean overall accuracy of 0.90 ± 0.04 and an F1 score of 0.92 ± 0.05 for NSCLC and 0.87 ± 0.08 for SCLC on the validation data, and an accuracy of 0.87 ± 0.05, with an F1 score of 0.88 ± 0.05 for NSCLC and 0.85 ± 0.05 for SCLC, on the test dataset. The proposed method provides an automatic, noninvasive way to classify brain metastases with high sensitivity and specificity and to differentiate between NSCLC and SCLC brain metastases. It may be used as a diagnostic tool for improving decision-making in the treatment of patients with these metastases. Further studies on larger patient samples are required to validate the current results.
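Multiparametric input of this kind (T1WI+c, FLAIR, T2WI) is commonly arranged as the three input channels of an ImageNet-style CNN. The NumPy sketch below shows that arrangement under an assumed per-sequence z-score normalization; the authors' exact preprocessing is not given here.

    import numpy as np

    def stack_mri_channels(t1c, flair, t2):
        """Stack three co-registered MRI sequences into a 3-channel CNN input."""
        def zscore(x):
            return (x - x.mean()) / (x.std() + 1e-8)
        return np.stack([zscore(t1c), zscore(flair), zscore(t2)], axis=0)   # shape (3, H, W)

    # Usage with dummy lesion crops of size 128 x 128:
    t1c, flair, t2 = (np.random.rand(128, 128) for _ in range(3))
    x = stack_mri_channels(t1c, flair, t2)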