To view other types of publications on this topic, follow the link: YOLOv8.

Journal articles on the topic "YOLOv8"

Format your source in the APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "YOLOv8".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in PDF format and read the online abstract of the work, if these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Sharma, Pravek, Rajesh Tyagi, and Priyanka Dubey. "Optimizing Real-Time Object Detection: A Comparison of YOLO Models." International Journal of Innovative Research in Computer Science and Technology 12, no. 3 (May 2024): 57–74. http://dx.doi.org/10.55524/ijircst.2024.12.3.11.

Full text of the source
Abstract:
Gun and weapon detection plays a crucial role in security, surveillance, and law enforcement. This study conducts a comprehensive comparison of all available YOLO (You Only Look Once) models for their effectiveness in weapon detection. We train YOLOv1, YOLOv2, YOLOv3, YOLOv4, YOLOv5, YOLOv6, YOLOv7, and YOLOv8 on a custom dataset of 16,000 images containing guns, knives, and heavy weapons. Each model is evaluated on a validation set of 1,400 images, with mAP (mean average precision) as the primary performance metric. This extensive comparative analysis identifies the best performing YOLO variant for gun and weapon detection, providing valuable insights into the strengths and weaknesses of each model for this specific task.
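The primary metric in this comparison, mAP, is the mean over classes of average precision (AP). As a minimal, self-contained sketch (toy data, all-point interpolation; not code from the paper), AP for a single class can be computed from confidence-sorted detections like this:

```python
def average_precision(scores, is_tp, num_gt):
    """AP for one class: sort detections by confidence, sweep the
    precision-recall curve, and integrate with all-point interpolation."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = []  # (recall, precision) pairs along the sweep
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        points.append((tp / num_gt, tp / (tp + fp)))
    ap, prev_r = 0.0, 0.0
    # envelope: use the best precision at or beyond each recall level
    for k, (r, _) in enumerate(points):
        p_max = max(pp for _, pp in points[k:])
        ap += (r - prev_r) * p_max
        prev_r = r
    return ap
```

With `is_tp` marking which detections matched a ground-truth box, `average_precision([0.9, 0.8, 0.7], [True, False, True], 2)` integrates the precision envelope over recall; mAP is then the mean of this value over all classes.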
APA, Harvard, Vancouver, ISO, and other styles
2

Tahir, Noor Ul Ain, Zhe Long, Zuping Zhang, Muhammad Asim, and Mohammed ELAffendi. "PVswin-YOLOv8s: UAV-Based Pedestrian and Vehicle Detection for Traffic Management in Smart Cities Using Improved YOLOv8." Drones 8, no. 3 (February 28, 2024): 84. http://dx.doi.org/10.3390/drones8030084.

Full text of the source
Abstract:
In smart cities, effective traffic congestion management hinges on adept pedestrian and vehicle detection. Unmanned Aerial Vehicles (UAVs) offer a solution with mobility, cost-effectiveness, and a wide field of view, and yet, optimizing recognition models is crucial to surmounting challenges posed by small and occluded objects. To address these issues, we utilize the YOLOv8s model and a Swin Transformer block and introduce the PVswin-YOLOv8s model for pedestrian and vehicle detection based on UAVs. Firstly, the backbone network of YOLOv8s incorporates the Swin Transformer model for global feature extraction for small object detection. Secondly, to address the challenge of missed detections, we integrate the CBAM module into the neck of YOLOv8. Both the channel and the spatial attention modules are used in this addition because of how well they extract feature information flowing across the network. Finally, we employ Soft-NMS to improve the accuracy of pedestrian and vehicle detection in occlusion situations. Soft-NMS increases performance and manages overlapping bounding boxes well. The proposed network reduced the fraction of small objects overlooked and enhanced model detection performance. Performance comparisons with different YOLO versions (for example, YOLOv3-tiny, YOLOv5, YOLOv6, and YOLOv7), YOLOv8 variants (YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l), and classical object detectors (Faster-RCNN, Cascade R-CNN, RetinaNet, and CenterNet) were used to validate the superiority of the proposed PVswin-YOLOv8s model. The efficiency of the PVswin-YOLOv8s model was confirmed by the experimental findings, which showed a 4.8% increase in average detection accuracy (mAP) compared to YOLOv8s on the VisDrone2019 dataset.
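The Soft-NMS step mentioned in this abstract decays the confidence of overlapping boxes rather than deleting them outright, which is why it copes better with occluded pedestrians. A minimal single-class sketch (Gaussian-decay variant, toy boxes; an assumption-laden illustration, not the authors' implementation):

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: repeatedly pick the top-scoring box, then decay the
    scores of the remaining boxes by exp(-iou^2 / sigma) instead of removing them."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = max(range(len(scores)), key=scores.__getitem__)
        box, score = boxes.pop(m), scores.pop(m)
        if score < score_thresh:
            break
        keep.append((box, score))
        scores = [s * math.exp(-iou(box, b) ** 2 / sigma)
                  for s, b in zip(scores, boxes)]
    return keep
```

A heavily overlapped box keeps a (reduced) score and can still be reported, whereas classic NMS would suppress it entirely above the IoU threshold.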
APA, Harvard, Vancouver, ISO, and other styles
3

Wulanningrum, Resty, Anik Nur Handayani, and Aji Prasetya Wibawa. "Perbandingan Instance Segmentation Image Pada Yolo8." Jurnal Teknologi Informasi dan Ilmu Komputer 11, no. 4 (August 22, 2024): 753–60. http://dx.doi.org/10.25126/jtiik.1148288.

Full text of the source
Abstract:
A pedestrian is very vulnerable to road accidents. Pedestrian detection is one way to identify and classify people, roads, and other objects. Instance segmentation is one of the processes used to segment people and roads, and instance segmentation with YOLOv8 is one of its implementations in pedestrian detection. Segmentation was compared on the Penn-Fudan Database using YOLOv8 with the yolov8n-seg, yolov8s-seg, yolov8m-seg, yolov8l-seg, and yolov8x-seg models. This research uses a public pedestrian dataset with multi-person objects taken from the Penn-Fudan Database. The dataset has 2 classes, namely people and roads. In the comparison of YOLOv8 segmentation models, the best model is yolov8l-seg. For the instance segmentation valid box on the people data, the highest mAP50 was achieved by yolov8l-seg with a value of 0.828, and mAP50-95 was 0.723. For the instance segmentation valid mask on people, the highest mAP50 was again achieved by yolov8l-seg with a value of 0.825, and mAP50-95 was 0.645. In this study, yolov8l-seg is therefore the best version compared to the others, based on the highest mAP value on the valid mask of 0.825.
APA, Harvard, Vancouver, ISO, and other styles
4

Panja, Eben, Hendry Hendry, and Christine Dewi. "YOLOv8 Analysis for Vehicle Classification Under Various Image Conditions." Scientific Journal of Informatics 11, no. 1 (February 28, 2024): 127–38. http://dx.doi.org/10.15294/sji.v11i1.49038.

Full text of the source
Abstract:
Purpose: The purpose of this research is to detect vehicle types in various image conditions using YOLOv8n, YOLOv8s, and YOLOv8m with augmentation. Methods: This research utilizes the YOLOv8 method on the DAWN dataset. The method involves using pre-trained Convolutional Neural Networks (CNN) to process the images and output the bounding boxes and classes of the detected objects. Additionally, data augmentation was applied to improve the model's ability to recognize vehicles from different directions and viewpoints. Result: The mAP values for the test results are as follows: without data augmentation, YOLOv8n achieved approximately 58%, YOLOv8s scored around 68.5%, and YOLOv8m achieved roughly 68.9%. However, after applying horizontal flip data augmentation, YOLOv8n's mAP increased to about 60.9%, YOLOv8s reached about 62%, and YOLOv8m excelled with a mAP of about 71.2%. Using horizontal flip data augmentation improves the performance of all three YOLOv8 models. The YOLOv8m model achieves the highest mAP value of 71.2%, indicating its high effectiveness in detecting objects after applying horizontal flip augmentation. Novelty: This research introduces novelty by employing the latest version of YOLO, YOLOv8, and comparing the performance of YOLOv8n, YOLOv8s, and YOLOv8m. The use of data augmentation techniques, such as horizontal flip, to increase data variation is also novel in expanding the dataset and improving the model's ability to recognize objects.
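The horizontal-flip augmentation credited here with the mAP gains mirrors the image and the x-coordinate of each box. A toy sketch (assuming images as nested lists of pixels and normalized YOLO-format labels of the form (class, x_center, y_center, w, h); not the authors' pipeline):

```python
def hflip(image, labels):
    """Horizontally flip an image (list of pixel rows) and its YOLO-format
    labels, coordinates normalized to [0, 1]. Only the x-center mirrors;
    width, height, and the y-center are unchanged."""
    flipped = [row[::-1] for row in image]
    new_labels = [(c, 1.0 - xc, yc, w, h) for c, xc, yc, w, h in labels]
    return flipped, new_labels
```

Because the labels are normalized, the same transform works at any image resolution; pixel-space labels would instead need `img_width - x`.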
APA, Harvard, Vancouver, ISO, and other styles
5

Podder, Soumyajit, Abhishek Mallick, Sudipta Das, Kartik Sau, and Arijit Roy. "Accurate diagnosis of liver diseases through the application of deep convolutional neural network on biopsy images." AIMS Biophysics 10, no. 4 (2023): 453–81. http://dx.doi.org/10.3934/biophy.2023026.

Full text of the source
Abstract:
Accurate detection of non-alcoholic fatty liver disease (NAFLD) through biopsies is challenging. Manual detection of the disease is not only prone to human error but is also time-consuming. Using artificial intelligence and deep learning, we have successfully addressed the issues of manual detection of liver diseases with a high degree of precision. This article uses various neural network-based techniques to assess non-alcoholic fatty liver disease. In this investigation, more than five thousand biopsy images were employed alongside the latest versions of the algorithms. To detect prominent characteristics in the liver from a collection of biopsy pictures, we employed the YOLOv3, Faster R-CNN, YOLOv4, YOLOv5, YOLOv6, YOLOv7, YOLOv8, and SSD models. A highlighting point of this paper is comparing the state-of-the-art instance segmentation models, including Mask R-CNN, U-Net, YOLOv5 Instance Segmentation, YOLOv7 Instance Segmentation, and YOLOv8 Instance Segmentation. The extent of severity of NAFLD and non-alcoholic steatohepatitis was examined for liver cell ballooning, steatosis, lobular and periportal inflammation, and fibrosis. Metrics used to evaluate the algorithms' effectiveness include accuracy, precision, specificity, and recall. Improved metrics are achieved by optimizing the hyperparameters of the associated models. Additionally, the liver is scored in order to analyse the information gleaned from biopsy images. Statistical analyses are performed to establish the statistical relevance in evaluating the score for different zones.
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Yinzeng, Fandi Zeng, Hongwei Diao, Junke Zhu, Dong Ji, Xijie Liao, and Zhihuan Zhao. "YOLOv8 Model for Weed Detection in Wheat Fields Based on a Visual Converter and Multi-Scale Feature Fusion." Sensors 24, no. 13 (July 5, 2024): 4379. http://dx.doi.org/10.3390/s24134379.

Full text of the source
Abstract:
Accurate weed detection is essential for the precise control of weeds in wheat fields, but weeds and wheat are sheltered from each other, and there is no clear size specification, making it difficult to accurately detect weeds in wheat. To achieve the precise identification of weeds, wheat weed datasets were constructed, and a wheat field weed detection model, YOLOv8-MBM, based on improved YOLOv8s, was proposed. In this study, a lightweight visual converter (MobileViTv3) was introduced into the C2f module to enhance the detection accuracy of the model by integrating input, local (CNN), and global (ViT) features. Secondly, a bidirectional feature pyramid network (BiFPN) was introduced to enhance the performance of multi-scale feature fusion. Furthermore, to address the weak generalization and slow convergence speed of the CIoU loss function for detection tasks, the bounding box regression loss function (MPDIOU) was used instead of the CIoU loss function to improve the convergence speed of the model and further enhance the detection performance. Finally, the model performance was tested on the wheat weed datasets. The experiments show that the YOLOv8-MBM proposed in this paper is superior to Fast R-CNN, YOLOv3, YOLOv4-tiny, YOLOv5s, YOLOv7, YOLOv9, and other mainstream models in regards to detection performance. The accuracy of the improved model reaches 92.7%. Compared with the original YOLOv8s model, the precision, recall, mAP1, and mAP2 are increased by 10.6%, 8.9%, 9.7%, and 9.3%, respectively. In summary, the YOLOv8-MBM model successfully meets the requirements for accurate weed detection in wheat fields.
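The MPDIoU loss referenced above augments plain IoU with penalties on the distances between corresponding box corners, which keeps the gradient informative even for non-overlapping boxes. A hedged sketch of that formulation (squared top-left and bottom-right corner distances normalized by the squared image diagonal; treat the exact normalization as an assumption drawn from the MPDIoU formulation, not from this article):

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """MPDIoU sketch: IoU minus penalties for the squared distances between
    the two boxes' top-left and bottom-right corners, each normalized by
    the squared image diagonal. The loss used in training is 1 - MPDIoU."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union
    diag2 = img_w ** 2 + img_h ** 2  # squared image diagonal
    d_tl = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2
    d_br = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2
    return iou - d_tl / diag2 - d_br / diag2
```

Identical boxes give MPDIoU = 1, and disjoint boxes go negative in proportion to how far apart their corners are, unlike plain IoU, which is flat at zero.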
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Daozong, Kai Zhang, Hongsheng Zhong, Jiaxing Xie, Xiuyun Xue, Mali Yan, Weibin Wu, and Jiehao Li. "Efficient Tobacco Pest Detection in Complex Environments Using an Enhanced YOLOv8 Model." Agriculture 14, no. 3 (February 22, 2024): 353. http://dx.doi.org/10.3390/agriculture14030353.

Full text of the source
Abstract:
Due to the challenges of pest detection in complex environments, this research introduces a lightweight network for tobacco pest identification leveraging enhancements in YOLOv8 technology. Using YOLOv8 large (YOLOv8l) as the base, the neck layer of the original network is replaced with an asymptotic feature pyramid network (AFPN) network to reduce model parameters. A SimAM attention mechanism, which does not require additional parameters, is incorporated to improve the model’s ability to extract features. The backbone network’s C2f model is replaced with the VoV-GSCSP module to reduce the model’s computational requirements. Experiments show the improved YOLOv8 model achieves high overall performance. Compared to the original model, model parameters and GFLOPs are reduced by 52.66% and 19.9%, respectively, while mAP@0.5 is improved by 1%, recall by 2.7%, and precision by 2.4%. Further comparison with popular detection models YOLOv5 medium (YOLOv5m), YOLOv6 medium (YOLOv6m), and YOLOv8 medium (YOLOv8m) shows the improved model has the highest detection accuracy and lightest parameters for detecting four common tobacco pests, with optimal overall performance. The improved YOLOv8 detection model proposed facilitates precise, instantaneous pest detection and recognition for tobacco and other crops, securing high-accuracy, comprehensive pest identification.
APA, Harvard, Vancouver, ISO, and other styles
8

Çakmakçı, Cihan. "Dijital Hayvancılıkta Yapay Zekâ ve İnsansız Hava Araçları: Derin Öğrenme ve Bilgisayarlı Görme İle Dağlık ve Engebeli Arazide Kıl Keçisi Tespiti, Takibi ve Sayımı." Turkish Journal of Agriculture - Food Science and Technology 12, no. 7 (July 14, 2024): 1162–73. http://dx.doi.org/10.24925/turjaf.v12i7.1162-1173.6701.

Full text of the source
Abstract:
The rapid growth in global food demand and the resulting need to increase the production of high-quality animal products have created a demand for technology in modern livestock practices. Automatic monitoring and management of animals is of great importance for increasing productivity, especially in small-ruminant farming under extensive conditions. Here, combining high-resolution images obtained from unmanned aerial vehicles with deep learning algorithms has the potential to offer effective solutions for remote herd monitoring. This study aimed to automatically detect, track, and count hair goats by applying deep learning algorithms to high-resolution images obtained from unmanned aerial vehicles (UAVs). To this end, five models from the most recent You Only Look Once (YOLOv8) architecture variants, namely YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, were trained on UAV images obtained from real animal-monitoring flights. According to the findings, the YOLOv8s architecture achieved the highest performance in both bounding-box detection and segmentation, with an F1 score of 0.95 and an mAP50 of 0.99. In conclusion, the proposed deep learning-based approach is expected to be an effective, low-cost, and sustainable solution for UAV-assisted precision livestock applications.
APA, Harvard, Vancouver, ISO, and other styles
9

Arini Parhusip, Hanna, Suryasatriya Trihandaru, Denny Indrajaya, and Jane Labadin. "Implementation of YOLOv8-seg on store products to speed up the scanning process at point of sales." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 3 (September 1, 2024): 3291. http://dx.doi.org/10.11591/ijai.v13.i3.pp3291-3305.

Full text of the source
Abstract:
You Only Look Once v8 (YOLOv8)-seg and its variants are implemented to accelerate the collection of goods for a store's selling activity in Indonesia. The method used here is object detection and segmentation of these objects, a combination of detection and segmentation called instance segmentation. The novelty lies in the customization and optimization of YOLOv8-seg for detecting and segmenting 30 specific Indonesian products. The use of augmented data (125 images augmented into 1,250 images) enhances the model's ability to generalize and perform well in various scenarios. The small number of data points and the small number of epochs have proven the algorithms reliable to implement on store products instead of using QR codes in a digital manner. Five models are examined, i.e., YOLOv8-seg, YOLOv8s-seg, YOLOv8m-seg, YOLOv8l-seg, and YOLOv8x-seg, with a data distribution of 64% for the training dataset, 16% for the validation dataset, and 20% for the testing dataset. The best model, YOLOv8l-seg, was obtained with the highest mean average precision (mAP) box value of 99.372% and a mAP mask value of 99.372% from testing the testing dataset. However, the YOLOv8m-seg model can be the best alternative model, with a mAP box value of 99.330%, since its number of parameters and computational speed are the best compared to the other models.
APA, Harvard, Vancouver, ISO, and other styles
10

Salma, Kartika, and Syarif Hidayat. "Deteksi Antusiasme Siswa dengan Algoritma Yolov8 pada Proses Pembelajaran Daring." Jurnal Indonesia : Manajemen Informatika dan Komunikasi 5, no. 2 (May 10, 2024): 1611–18. http://dx.doi.org/10.35870/jimik.v5i2.716.

Full text of the source
Abstract:
The implementation of Face Emotion Recognition (FER) technology in online classes opens up new opportunities to effectively monitor students' emotional responses and adjust the teaching approach. Through FER, instructors can monitor students' emotional responses to learning materials in real-time and enable quick adjustments based on individual needs. Additionally, this technology can also be used to detect the level of enthusiasm or lack thereof among students towards the learning process, allowing for the optimization of teaching strategies. This study focuses on the implementation of the YOLOv8 algorithm in detecting students' enthusiasm, comparing the performance of YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l models. Test results show that YOLOv8n performs the best with an accuracy rate of 95.3% and a fast inference time of 62ms, enabling real-time object detection. Thus, the application of YOLOv8 in this context aims to detect students' enthusiasm in real-time and allows instructors to quickly adjust their approach to meet students' needs. Furthermore, this research contributes to improving the quality of online learning by providing insights into students' emotional engagement and serving as a tool to help instructors better understand and respond appropriately to students' emotions during the online learning process.
APA, Harvard, Vancouver, ISO, and other styles
11

Lou, Haitong, Xuehu Duan, Junmei Guo, Haiying Liu, Jason Gu, Lingyun Bi, and Haonan Chen. "DC-YOLOv8: Small-Size Object Detection Algorithm Based on Camera Sensor." Electronics 12, no. 10 (May 21, 2023): 2323. http://dx.doi.org/10.3390/electronics12102323.

Full text of the source
Abstract:
Traditional camera sensors rely on human eyes for observation. However, human eyes are prone to fatigue when observing objects of different sizes for a long time in complex scenes, and human cognition is limited, which often leads to judgment errors and greatly reduces efficiency. Object recognition technology is an important technology used to judge the object's category on a camera sensor. In order to solve this problem, a small-size object detection algorithm for special scenarios was proposed in this paper. The advantage of this algorithm is that it not only has higher precision for small-size object detection but also can ensure that the detection accuracy for each size is not lower than that of the existing algorithm. There are three main innovations in this paper, as follows: (1) A new downsampling method which could better preserve the context feature information is proposed. (2) The feature fusion network is improved to effectively combine shallow information and deep information. (3) A new network structure is proposed to effectively improve the detection accuracy of the model. From the point of view of detection accuracy, it is better than YOLOX, YOLOR, YOLOv3, scaled YOLOv5, YOLOv7-Tiny, and YOLOv8. Three authoritative public datasets are used in these experiments: (a) On the VisDrone dataset (small-size objects), the mAP, precision, and recall ratios of DC-YOLOv8 are 2.5%, 1.9%, and 2.1% higher than those of YOLOv8s, respectively. (b) On the TinyPerson dataset (minimal-size objects), the mAP, precision, and recall ratios of DC-YOLOv8 are 1%, 0.2%, and 1.2% higher than those of YOLOv8s, respectively. (c) On the PASCAL VOC2007 dataset (normal-size objects), the mAP, precision, and recall ratios of DC-YOLOv8 are 0.5%, 0.3%, and 0.4% higher than those of YOLOv8s, respectively.
APA, Harvard, Vancouver, ISO, and other styles
12

Taufiqurrahman, Taufiqurrahman, Aji Prasetya Hadi, and Rully Emirza Siregar. "Evaluasi Performa Yolov8 Dalam Deteksi Objek Di Depan Kendaraan Dengan Variasi Kondisi Lingkungan." Jurnal Minfo Polgan 13, no. 2 (November 19, 2024): 1755–73. http://dx.doi.org/10.33395/jmp.v13i2.14228.

Full text of the source
Abstract:
Road traffic safety is a global issue that demands serious attention, given the high number of accidents each year. This study aims to evaluate the performance of YOLOv8, a deep learning-based object detection algorithm, in detecting key traffic elements such as vehicles, pedestrians, and traffic signs. The dataset consists of videos of ordinary roads and toll roads, recorded at six different times (08:00, 10:00, 12:00, 18:00, 20:00, and 22:00) to capture variations in lighting and traffic density. Three YOLOv8 variants, namely YOLOv8n, YOLOv8s, and YOLOv8m, were tested to analyze accuracy, detection counts, and performance under various environmental conditions. The results show that YOLOv8m performs best, with the highest average confidence score, particularly under optimal daytime lighting. YOLOv8s offers a balance between efficiency and accuracy, while YOLOv8n shows limitations in detecting objects under low lighting and high environmental complexity. Toll roads, with their more structured environment, yield more consistent detection results than ordinary roads, which present challenges in the form of varied objects and lighting. In conclusion, YOLOv8m is the most effective model for traffic-safety applications, while YOLOv8n is suitable for hardware with limited resources. Future research is expected to optimize the detection of small objects and improve performance in low-light conditions by retraining the models on more complex datasets.
APA, Harvard, Vancouver, ISO, and other styles
13

Gong, Chuang, Wei Jiang, Dehua Zou, Weiwei Weng, and Hongjun Li. "An Insulator Fault Diagnosis Method Based on Multi-Mechanism Optimization YOLOv8." Applied Sciences 14, no. 19 (September 28, 2024): 8770. http://dx.doi.org/10.3390/app14198770.

Full text of the source
Abstract:
Aiming at the problem that insulator image backgrounds are complex and fault types are diverse, which makes it difficult for existing deep learning algorithms to achieve accurate insulator fault diagnosis, an insulator fault diagnosis method based on multi-mechanism optimization YOLOv8-DCP is proposed. Firstly, a feature extraction and fusion module, named CW-DRB, was designed. This module enhances the C2f structure of YOLOv8 by incorporating the dilation-wise residual module and the dilated re-param module. The introduction of this module improves YOLOv8’s capability for multi-scale feature extraction and multi-level feature fusion. Secondly, the CARAFE module, which is feature content-aware, was introduced to replace the up-sampling layer in YOLOv8n, thereby enhancing the model’s feature map reconstruction ability. Finally, an additional small-object detection layer was added to improve the detection accuracy of small defects. Simulation results indicate that YOLOv8-DCP achieves an accuracy of 97.7% and an mAP@0.5 of 93.9%. Compared to YOLOv5, YOLOv7, and YOLOv8n, the accuracy improved by 1.5%, 4.3%, and 4.8%, while the mAP@0.5 increased by 3.0%, 4.3%, and 3.1%. This results in a significant enhancement in the accuracy of insulator fault diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
14

Gong, He, Jingyi Liu, Zhipeng Li, Hang Zhu, Lan Luo, Haoxu Li, Tianli Hu, Ying Guo, and Ye Mu. "GFI-YOLOv8: Sika Deer Posture Recognition Target Detection Method Based on YOLOv8." Animals 14, no. 18 (September 11, 2024): 2640. http://dx.doi.org/10.3390/ani14182640.

Full text of the source
Abstract:
As the sika deer breeding industry flourishes on a large scale, accurately assessing the health of these animals is of paramount importance. Implementing posture recognition through target detection serves as a vital method for monitoring the well-being of sika deer. This approach allows for a more nuanced understanding of their physical condition, ensuring the industry can maintain high standards of animal welfare and productivity. In order to achieve remote monitoring of sika deer without interfering with the natural behavior of the animals, and to enhance animal welfare, this paper proposes a sika deer individual posture recognition detection algorithm GFI-YOLOv8 based on YOLOv8. Firstly, this paper proposes to add the iAFF iterative attention feature fusion module to the C2f of the backbone network module, replace the original SPPF module with AIFI module, and use the attention mechanism to adjust the feature channel adaptively. This aims to enhance granularity, improve the model’s recognition, and enhance understanding of sika deer behavior in complex scenes. Secondly, a novel convolutional neural network module is introduced to improve the efficiency and accuracy of feature extraction, while preserving the model’s depth and diversity. In addition, a new attention mechanism module is proposed to expand the receptive field and simplify the model. Furthermore, a new pyramid network and an optimized detection head module are presented to improve the recognition and interpretation of sika deer postures in intricate environments. The experimental results demonstrate that the model achieves 91.6% accuracy in recognizing the posture of sika deer, with a 6% improvement in accuracy and a 4.6% increase in mAP50 compared to YOLOv8n. Compared to other models in the YOLO series, such as YOLOv5n, YOLOv7-tiny, YOLOv8n, YOLOv8s, YOLOv9, and YOLOv10, this model exhibits higher accuracy, and improved mAP50 and mAP50-95 values. 
The overall performance is commendable, meeting the requirements for accurate and rapid identification of the posture of sika deer. This model proves beneficial for the precise and real-time monitoring of sika deer posture in complex breeding environments and under all-weather conditions.
APA, Harvard, Vancouver, ISO, and other styles
15

Kutyrev, A. I., I. G. Smirnov, and N. A. Andriyanov. "Neural network models of apple fruit identification in tree crowns: comparative analysis." Horticulture and viticulture, no. 5 (November 30, 2023): 56–63. http://dx.doi.org/10.31676/0235-2591-2023-5-56-63.

Full text of the source
Abstract:
The article presents the results of an analysis conducted from 2022 to 2023 to assess the quality of modern neural network models of apple fruit identification in tree crowns shown in images. In order to conduct the studies on identifying the best detector, the following neural networks were used: SSD (Single Shot MultiBox Detector), YOLOv4 (You Only Look Once, Version 4), YOLOv5, YOLOv7, and YOLOv8. The performance of the considered models of apple fruit identification was assessed using such binary classification metrics as precision, recall, accuracy, F-score, and AUC-ROC (area under the curve). To assess the accuracy in predicting apple fruit identification, the mean absolute percentage error (MAPE) of the analyzed neural network models was calculated. The neural network performance analysis used 300 photographs taken at an apple garden. The conducted studies revealed that the SSD model provides lower speed and accuracy, as well as having high requirements for computing resources, which may limit its use in lower performance devices. The YOLOv4 model surpasses the YOLOv5 model in terms of accuracy by 10.2%, yet the processing speed of the YOLOv5 model is over twice that of the YOLOv4 model. This fact makes the YOLOv5 model preferable for tasks related to real-time big data processing. The YOLOv8 model is superior to the YOLOv7 model in terms of speed (by 37.3%); however, the accuracy of the YOLOv7 model is 9.4% higher. The highest area under the Precision-Recall curve amounts to 0.94 when using the YOLOv7 model. This fact suggests a high probability that the classifier can accurately distinguish between the positive and negative values of the apple fruit class. MAPE calculation for the analyzed neural network models showed that the lowest error in apple fruit identification amounted to 5.64% for the YOLOv7 model as compared to the true value determined using the visual method.
The performance analysis of modern neural network models shows that the YOLO family of neural networks provides high speed and accuracy of object detection, which allows them to operate in real time. The use of transfer learning (tuning of only the last layers to solve highly specialized problems) to adjust the performance of models for different apple fruit varieties can further improve the accuracy of apple fruit identification.
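The MAPE figure quoted above compares predicted fruit counts against visually determined ground truth; as a toy sketch with hypothetical counts (not the article's data), the metric is a one-liner:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent, between true and
    predicted counts; assumes every actual value is nonzero."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual) * 100
```

For example, predicting 90 and 55 apples where the true counts are 100 and 50 gives a MAPE of 10%.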
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Yijian, Yong Yin, and Zeyuan Shao. "An Enhanced Target Detection Algorithm for Maritime Search and Rescue Based on Aerial Images." Remote Sensing 15, no. 19 (October 3, 2023): 4818. http://dx.doi.org/10.3390/rs15194818.

Full text of the source
Abstract:
Unmanned aerial vehicles (UAVs), renowned for their rapid deployment, extensive data collection, and high spatial resolution, are crucial in locating distressed individuals during search and rescue (SAR) operations. Challenges in maritime search and rescue include missed detections due to issues including sunlight reflection. In this study, we proposed an enhanced ABT-YOLOv7 algorithm for underwater person detection. This algorithm integrates an asymptotic feature pyramid network (AFPN) to preserve the target feature information. The BiFormer module enhances the model’s perception of small-scale targets, whereas the task-specific context decoupling (TSCODE) mechanism effectively resolves conflicts between localization and classification. Using quantitative experiments on a curated dataset, our model outperformed methods such as YOLOv3, YOLOv4, YOLOv5, YOLOv8, Faster R-CNN, Cascade R-CNN, and FCOS. Compared with YOLOv7, our approach enhances the mean average precision (mAP) from 87.1% to 91.6%. Therefore, our approach reduces the sensitivity of the detection model to low-lighting conditions and sunlight reflection, thus demonstrating enhanced robustness. These innovations have driven advancements in UAV technology within the maritime search and rescue domains.
17

Alayed, Asmaa, Rehab Alidrisi, Ekram Feras, Shahad Aboukozzana, and Alaa Alomayri. "Real-Time Inspection of Fire Safety Equipment using Computer Vision and Deep Learning." Engineering, Technology & Applied Science Research 14, no. 2 (April 2, 2024): 13290–98. http://dx.doi.org/10.48084/etasr.6753.

Abstract:
The number of accidental fires in buildings has increased significantly in recent years in Saudi Arabia. Fire Safety Equipment (FSE) plays a crucial role in reducing fire risks. However, this equipment is prone to defects and requires periodic checks and maintenance. Fire safety inspectors are responsible for visually inspecting safety equipment and reporting defects. As the traditional approach of manually checking each piece of equipment can be time-consuming and inaccurate, this study aims to improve the inspection process for safety equipment. Using computer vision and deep learning techniques, a detection model was trained to visually inspect fire extinguishers and identify defects. Fire extinguisher images were collected, annotated, and augmented to create a dataset of 7,633 images with 16,092 labeled instances. Then, experiments were carried out using YOLOv5, YOLOv7, YOLOv8, and RT-DETR, with pre-trained models used for transfer learning. A comparative analysis evaluated these models in terms of accuracy, speed, and model size. The results of YOLOv5n, YOLOv7, YOLOv8n, YOLOv8m, and RT-DETR indicated satisfactory accuracy, ranging between 83.1% and 87.2%. YOLOv8n was chosen as the most suitable model due to its fastest inference time of 2.7 ms, its highest mAP0.5 of 87.2%, and its compact model size, making it ideal for real-time mobile applications.
18

Sun, Jihong, Zhaowen Li, Fusheng Li, Yingming Shen, Ye Qian, and Tong Li. "EF yolov8s: A Human–Computer Collaborative Sugarcane Disease Detection Model in Complex Environment." Agronomy 14, no. 9 (September 14, 2024): 2099. http://dx.doi.org/10.3390/agronomy14092099.

Abstract:
The precise identification of disease traits in the complex sugarcane planting environment not only effectively prevents the spread and outbreak of common diseases but also allows for the real-time monitoring of nutrient deficiency syndrome at the top of sugarcane, facilitating the supplementation of relevant nutrients to ensure sugarcane quality and yield. This paper proposes a human–machine collaborative sugarcane disease detection method in complex environments. Initially, data on five common sugarcane diseases—brown stripe, rust, ring spot, brown spot, and red rot—as well as two nutrient deficiency conditions—sulfur deficiency and phosphorus deficiency—were collected, totaling 11,364 images and 10 high-definition videos captured by a 4K drone. The data sets were augmented threefold using techniques such as flipping and gamma adjustment to construct a disease data set. Building upon the YOLOv8 framework, the EMA attention mechanism and Focal loss function were added to optimize the model, addressing the complex backgrounds and imbalanced positive and negative samples present in the sugarcane data set. Disease detection models EF-yolov8s, EF-yolov8m, EF-yolov8n, EF-yolov7, and EF-yolov5n were constructed and compared. Subsequently, five basic instance segmentation models of YOLOv8 were used for comparative analysis, validated using nutrient deficiency condition videos, and a human–machine integrated detection model for nutrient deficiency symptoms at the top of sugarcane was constructed. The experimental results demonstrate that our improved EF-yolov8s model outperforms other models, achieving mAP_0.5, precision, recall, and F1 scores of 89.70%, 88.70%, 86.00%, and 88.00%, respectively, highlighting the effectiveness of EF-yolov8s for sugarcane disease detection. 
Additionally, YOLOv8s-seg achieves an average precision of 80.30% with fewer parameters, outperforming the other models by 5.2%, 1.9%, 2.02%, and 0.92% in mAP_0.5, respectively. It effectively detects nutrient deficiency symptoms, addressing the challenges of sugarcane growth monitoring and disease detection in complex environments using computer vision technology.
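The threefold augmentation mentioned above (flipping and gamma adjustment) can be sketched on raw pixel intensities; the standard gamma curve is out = 255·(in/255)^γ. The values below are invented for illustration:

```python
def gamma_adjust(pixels, gamma):
    """Remap intensities in [0, 255]; gamma < 1 brightens, gamma > 1 darkens."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

def horizontal_flip(rows):
    """Flip an image (a list of pixel rows) left to right."""
    return [list(reversed(row)) for row in rows]

row = [0, 64, 128, 255]
print(gamma_adjust(row, 0.5))    # brightened mid-tones
print(gamma_adjust(row, 2.0))    # darkened mid-tones
print(horizontal_flip([row]))    # → [[255, 128, 64, 0]]
```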
19

Jiang, Tao, Jie Zhou, Binbin Xie, Longshen Liu, Chengyue Ji, Yao Liu, Binghan Liu, and Bo Zhang. "Improved YOLOv8 Model for Lightweight Pigeon Egg Detection." Animals 14, no. 8 (April 19, 2024): 1226. http://dx.doi.org/10.3390/ani14081226.

Abstract:
In response to the high breakage rate of pigeon eggs and the significant labor costs associated with egg-producing pigeon farming, this study proposes an improved YOLOv8-PG (real versus fake pigeon egg detection) model based on YOLOv8n. Specifically, the Bottleneck in the C2f module of the YOLOv8n backbone network and neck network are replaced with Fasternet-EMA Block and Fasternet Block, respectively. The Fasternet Block is designed based on PConv (Partial Convolution) to reduce model parameter count and computational load efficiently. Furthermore, the incorporation of the EMA (Efficient Multi-scale Attention) mechanism helps mitigate interference from complex environments on pigeon-egg feature-extraction capabilities. Additionally, Dysample, an ultra-lightweight and effective upsampler, is introduced into the neck network to further enhance performance with lower computational overhead. Finally, the EXPMA (exponential moving average) concept is employed to optimize the SlideLoss and propose the EMASlideLoss classification loss function, addressing the issue of imbalanced data samples and enhancing the model’s robustness. The experimental results showed that the F1-score, mAP50-95, and mAP75 of YOLOv8-PG increased by 0.76%, 1.56%, and 4.45%, respectively, compared with the baseline YOLOv8n model. Moreover, the model’s parameter count and computational load are reduced by 24.69% and 22.89%, respectively. Compared to detection models such as Faster R-CNN, YOLOv5s, YOLOv7, and YOLOv8s, YOLOv8-PG exhibits superior performance. Additionally, the reduction in parameter count and computational load contributes to lowering the model deployment costs and facilitates its implementation on mobile robotic platforms.
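The EMASlideLoss above builds on the EXPMA (exponential moving average) concept; as a generic, minimal sketch of that smoothing (the decay of 0.9 is an arbitrary choice, not taken from the paper):

```python
def ema(values, decay=0.9):
    """s_t = decay * s_{t-1} + (1 - decay) * x_t, seeded with the first value."""
    smoothed, s = [], None
    for x in values:
        s = x if s is None else decay * s + (1 - decay) * x
        smoothed.append(s)
    return smoothed

print(ema([0.0, 10.0, 0.0, 10.0, 0.0, 10.0]))  # oscillations are damped
```

In losses like the one above, such a running average tracks a slowly changing statistic (e.g., a sample-weighting threshold) rather than the raw, noisy per-batch value.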
20

Ramadhani, Zahra Cahya, and Dimas Firmanda Al Riza. "Model Deteksi Mikroalga Spirulina platensis dan Chlorella vulgaris Berbasis Convolutional Neural Network YOLOv8." Jurnal Komputer dan Informatika 12, no. 2 (October 31, 2024): 110–19. https://doi.org/10.35508/jicon.v12i2.15375.

Abstract:
Microalgae are single-celled microscopic organisms that live in various waters. Microalgae such as Spirulina platensis and Chlorella vulgaris are potential bioenergy sources and are therefore increasingly cultivated. Cultivation still generally monitors microalgae cell counts/density manually with a hemocytometer, which is slow and prone to human error. This study aims to develop a detection model for the microalgae Spirulina platensis and Chlorella vulgaris based on microscopic images and a Convolutional Neural Network using YOLOv8. The methodology covers sample preparation (dilution and optical density measurement), determination of the best cell density, image acquisition, image annotation, image dataset construction, YOLOv8 model training, and model performance evaluation. Determining the best density aims to obtain good microscopic images. Image acquisition was performed with a binocular microscope and produced 560 images, which were then annotated. YOLOv8n, YOLOv8s, and YOLOv8m models were trained with default hyperparameters in Google Colaboratory to examine the effect of augmentation on model accuracy. Performance was evaluated on the selected YOLOv8 model by analyzing the mAP50 value. The results show that augmentation (crop, brightness, and blur) produced the highest train and test mAP on the YOLOv8m model, namely 0.945 and 0.913. This YOLOv8m model was retrained with varied hyperparameters, and the best configuration was obtained with the SGD optimizer, 50 epochs, and a learning rate of 0.01, giving train and test mAP of 0.934 and 0.925. However, training for 29 epochs yielded an accuracy of 0.8535 while reducing overfitting and wasted resources. In conclusion, this research can help researchers and industry count microalgae automatically and more efficiently.
21

Ma, Na, Yulong Wu, Yifan Bo, and Hongwen Yan. "Chili Pepper Object Detection Method Based on Improved YOLOv8n." Plants 13, no. 17 (August 28, 2024): 2402. http://dx.doi.org/10.3390/plants13172402.

Abstract:
In response to the low accuracy and slow detection speed of chili recognition in natural environments, this study proposes a chili pepper object detection method based on an improved YOLOv8n. Evaluations were conducted among YOLOv5n, YOLOv6n, YOLOv7-tiny, YOLOv8n, YOLOv9, and YOLOv10 to select the optimal model. YOLOv8n was chosen as the baseline and improved as follows: (1) replacing the YOLOv8 backbone with the improved HGNetV2 model to reduce floating-point operations and computational load during convolution; (2) integrating the SEAM (spatially enhanced attention module) into the YOLOv8 detection head to enhance feature extraction under chili fruit occlusion; (3) optimizing feature fusion using the dilated reparam block module in certain C2f (CSP bottleneck with two convolutions) blocks; (4) substituting the traditional upsample operator with the CARAFE (content-aware reassembly of features) upsampling operator to further enhance network feature fusion and improve detection performance. On a custom-built chili dataset, the F0.5-score, mAP0.5, and mAP0.5:0.95 metrics improved by 1.98, 2, and 5.2 percentage points, respectively, over the original model, reaching 96.47%, 96.3%, and 79.4%. The improved model reduced parameter count and GFLOPs by 29.5% and 28.4%, respectively, with a final model size of 4.6 MB. Thus, this method effectively enhances chili target detection, providing a technical foundation for intelligent chili harvesting.
22

Khalid, Saim, Hadi Mohsen Oqaibi, Muhammad Aqib, and Yaser Hafeez. "Small Pests Detection in Field Crops Using Deep Learning Object Detection." Sustainability 15, no. 8 (April 18, 2023): 6815. http://dx.doi.org/10.3390/su15086815.

Abstract:
Deep learning algorithms, such as convolutional neural networks (CNNs), have been widely studied and applied in various fields, including agriculture. Agriculture is the most important source of food and income in human life, and in most countries it forms the backbone of the economy. Pests are one of the major challenges in crop production worldwide. To reduce the overall production and economic loss from pests, advances in computer vision and artificial intelligence may enable early detection of small pests with greater accuracy and speed. In this paper, an approach for early pest detection using deep learning and convolutional neural networks is presented. Object detection is applied to a dataset with images of thistle caterpillars, red beetles, and citrus psylla. The input dataset contains 9875 images of all the pests under different illumination conditions. State-of-the-art YOLOv3, YOLOv3-Tiny, YOLOv4, YOLOv4-Tiny, YOLOv6, and YOLOv8 models were adopted in this study for detection, all selected based on their performance in object detection. The images were annotated in the YOLO format. YOLOv8 achieved the highest mAP of 84.7% with an average loss of 0.7939, which is better than the results reported in other works on small pest detection. The YOLOv8 model was further integrated into an Android application for real-time pest detection. This paper contributes the implementation of novel deep learning models, an analytical methodology, and a workflow to detect pests in crops for effective pest management.
23

Ma, Shihao, Jiao Wu, Zhijun Zhang, and Yala Tong. "Application of Enhanced YOLOX for Debris Flow Detection in Remote Sensing Images." Applied Sciences 14, no. 5 (March 5, 2024): 2158. http://dx.doi.org/10.3390/app14052158.

Abstract:
Addressing the limitations of current mudslide disaster detection techniques in remote sensing imagery, including low automation, slow recognition speed, and limited universality, this study employs deep learning methods for enhanced mudslide disaster detection. Six object detection models were evaluated: YOLOv3, YOLOv4, YOLOv5, YOLOv7, YOLOv8, and YOLOX, with experiments conducted on remote sensing image data of the study area. Utilizing transfer learning, mudslide remote sensing images were fed into these six models under identical experimental conditions for training. The experimental results demonstrate that YOLOX-Nano’s comprehensive performance surpasses that of the other models. Consequently, this study introduces an enhanced model based on YOLOX-Nano (RS-YOLOX-Nano), aimed at further improving the model’s generalization capability and detection performance in remote sensing imagery. The enhanced model achieves a mean average precision (mAP) of 86.04%, a 3.53% increase over the original model, and a precision rate of 89.61%. Compared to the conventional YOLOX-Nano algorithm, the enhanced model demonstrates superior efficacy in detecting mudflow targets within remote sensing imagery.
24

Chen, Haosong, Fujie Zhang, Chaofan Guo, Junjie Yi, and Xiangkai Ma. "SA-SRYOLOv8: A Research on Star Anise Variety Recognition Based on a Lightweight Cascaded Neural Network and Diversified Fusion Dataset." Agronomy 14, no. 10 (September 25, 2024): 2211. http://dx.doi.org/10.3390/agronomy14102211.

Abstract:
Star anise, a widely popular spice, benefits from classification that enhances its economic value. In response to the low identification efficiency and accuracy of star anise varieties in the market, as well as the scarcity of related research, this study proposes an efficient identification method based on non-similarity augmentation and a lightweight cascaded neural network. Specifically, this approach utilizes a Siamese enhanced data network and a front-end SRGAN network to address sample imbalance and the challenge of identifying blurred images. The YOLOv8 model is further made lightweight to reduce memory usage and increase detection speed, followed by optimization of the weight parameters through an extended training strategy. Additionally, a diversified fusion dataset of star anise, incorporating open data, was constructed to further validate the feasibility and effectiveness of this method. Testing showed that the SA-SRYOLOv8 detection model achieved a mean average precision (mAP) of 96.37%, with a detection speed of 146 FPS. Ablation experiments showed that, compared to the original YOLOv8 and the improved YOLOv8, the cascade model’s mAP increased by 0.09 to 0.81 percentage points. Additionally, when compared to mainstream detection models such as SSD, Fast R-CNN, YOLOv3, YOLOv5, YOLOX, and YOLOv7, the cascade model’s mAP increased by 1.81 to 19.7 percentage points. Furthermore, the model was significantly lighter, at only about 7.4% of the weight of YOLOv3, and operated at twice the speed of YOLOv7. Visualization results demonstrated that the cascade model accurately detected multiple star anise varieties across different scenarios, achieving high-precision detection targets. The model proposed in this study can provide new theoretical frameworks and ideas for constructing real-time star anise detection systems, offering new technological applications for smart agriculture.
25

Yang, Shixiong, Jingfa Yao, and Guifa Teng. "Corn Leaf Spot Disease Recognition Based on Improved YOLOv8." Agriculture 14, no. 5 (April 25, 2024): 666. http://dx.doi.org/10.3390/agriculture14050666.

Abstract:
Leaf spot disease is an extremely common disease in the growth of maize in Northern China, and its degree of harm is quite significant. Therefore, the rapid and accurate identification of maize leaf spot disease is crucial for reducing economic losses in maize. In complex field environments, traditional identification methods are susceptible to subjective interference and cannot quickly and accurately identify leaf spot disease through color or shape features. We present an advanced disease identification method utilizing YOLOv8. This method uses actual field images of diseased corn leaves to construct a dataset and accurately labels the diseased leaves in these images, thereby achieving rapid and accurate identification of target diseases in complex field environments. We improved the model based on YOLOv8 by adding Slim-neck modules and GAM attention modules to enhance its ability to identify maize leaf spot disease. The enhanced YOLOv8 model achieved a precision (P) of 95.18%, a recall (R) of 89.11%, an average recognition accuracy (mAP50) of 94.65%, and an mAP50-95 of 71.62%. Compared to the original YOLOv8 model, the enhanced model showed improvements of 3.79%, 4.65%, 3.56%, and 7.3% in precision, recall, mAP50, and mAP50-95, respectively. The model can effectively identify leaf spot disease and accurately localize it. Under the same experimental conditions, we compared the improved model with the YOLOv3, YOLOv5, YOLOv6, Faster R-CNN, and SSD models. The results show that the improved model not only enhances performance but also reduces parameter complexity, simplifies the network structure, and shortens experimental time.
Hence, the enhanced method proposed in this study, based on YOLOv8, exhibits the capability to identify maize leaf spot disease in intricate field environments, offering robust technical support for agricultural production.
26

Yang, Renxu, Debao Yuan, Maochen Zhao, Zhao Zhao, Liuya Zhang, Yuqing Fan, Guangyu Liang, and Yifei Zhou. "Camellia oleifera Tree Detection and Counting Based on UAV RGB Image and YOLOv8." Agriculture 14, no. 10 (October 12, 2024): 1789. http://dx.doi.org/10.3390/agriculture14101789.

Abstract:
The detection and counting of Camellia oleifera trees are important parts of Camellia oleifera yield estimation, and the ability to identify and count the trees quickly has long been a focus of yield-estimation research. Because of their specific growing environment, identifying and counting Camellia oleifera trees with high efficiency is a difficult task. In this paper, based on UAV RGB images, three different types of datasets were designed: a DOM dataset, an original image dataset, and a cropped original image dataset. Combined with the YOLOv8 model, the detection and counting of Camellia oleifera trees were carried out. The model was also compared with YOLOv9 and YOLOv10 on four evaluation indexes (precision, recall, mAP, and F1 score), and trees in two areas were selected for prediction and compared with the measured values. The experimental results show that the cropped original image dataset was better for the recognition and counting of Camellia oleifera, with mAP values 8% and 11% higher than those of the DOM dataset and the original image dataset, respectively. Compared to YOLOv5, YOLOv7, YOLOv9, and YOLOv10, YOLOv8 performed better in terms of accuracy and recall, and the mAP improved by 3–8%, reaching 0.82. Regression analysis was performed on the predicted and measured values, and the average R2 reached 0.94. This research shows that UAV RGB images combined with YOLOv8 provide an effective solution for the detection and counting of Camellia oleifera trees, which is of great significance for Camellia oleifera yield estimation and orchard management.
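The R2 value reported above is the coefficient of determination between predicted and measured tree counts, R² = 1 − SS_res / SS_tot. A minimal sketch with invented sample counts:

```python
def r_squared(measured, predicted):
    """Coefficient of determination: 1 - residual sum of squares / total sum of squares."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

measured = [120, 95, 143, 80]     # hypothetical ground-truth tree counts
predicted = [118, 99, 140, 84]    # hypothetical model predictions
print(round(r_squared(measured, predicted), 3))  # → 0.981
```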
27

Xiong, Chenqin, Tarek Zayed, Xingyu Jiang, Ghasan Alfalah, and Eslam Mohammed Abelkader. "A Novel Model for Instance Segmentation and Quantification of Bridge Surface Cracks—The YOLOv8-AFPN-MPD-IoU." Sensors 24, no. 13 (July 1, 2024): 4288. http://dx.doi.org/10.3390/s24134288.

Abstract:
Surface cracks are regarded as one of the early signs of potential damage to infrastructure. In the same vein, their detection is imperative to preserving the structural health and safety of bridges. Human-based visual inspection is acknowledged as the most prevalent means of assessing infrastructure performance conditions; nonetheless, it is unreliable, tedious, hazardous, and labor-intensive. This state of affairs calls for the development of a novel YOLOv8-AFPN-MPD-IoU model for instance segmentation and quantification of bridge surface cracks. Firstly, YOLOv8s-Seg is selected as the backbone network to carry out instance segmentation. Secondly, an asymptotic feature pyramid network (AFPN) is incorporated to ameliorate feature fusion and overall performance. Thirdly, the minimum point distance (MPD) is introduced as a loss function to better explore the geometric features of surface cracks. Finally, the middle aisle transformation is amalgamated with Euclidean distance to compute the length and width of segmented cracks. Analytical comparisons reveal that the developed deep learning network surpasses several contemporary models, including YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and Mask-RCNN. The YOLOv8s + AFPN + MPDIoU model attains a precision rate of 90.7%, a recall of 70.4%, an F1-score of 79.27%, an mAP50 of 75.3%, and an mAP75 of 74.80%. In contrast to alternative models, the proposed approach exhibits enhancements across performance metrics, with the F1-score, mAP50, and mAP75 increasing by a minimum of 0.46%, 1.3%, and 1.4%, respectively. The margin of error in the measurement model calculations is maintained at or below 5%. Therefore, the developed model can serve as a useful tool for the accurate characterization and quantification of different types of bridge surface cracks.
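The MPD loss above is an alternative to IoU-family losses; for reference, plain bounding-box IoU (the overlap measure behind thresholds like mAP50 and mAP75) can be sketched as follows, with boxes given as (x1, y1, x2, y2):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))   # → 1/7 ≈ 0.1429
```

Variants such as CIoU or MPD-based losses add penalty terms (center distance, aspect ratio, corner distances) so that non-overlapping boxes still receive a useful gradient.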
28

Hwang, Byeong Hyeon, and Mi Jin Noh. "Comparative Analysis of Toxic Marine Organism Detection Performance Across YOLO Models and Exploration of Applications in Smart Aquaculture Technology." Korean Institute of Smart Media 13, no. 11 (November 29, 2024): 22–29. https://doi.org/10.30693/smj.2024.13.11.22.

Abstract:
The rise in sea temperatures due to global warming has accelerated the migration of marine species, leading to the frequent discovery of toxic marine organisms in domestic waters. The blue-ringed octopus in particular is very dangerous because it contains a deadly poison called tetrodotoxin. Therefore, early detection of these toxic species and minimizing the risk to human life is crucial. This study evaluates the effectiveness of using the latest object detection technology, the YOLO model, to detect toxic marine species, aiming to provide valuable information for the development of a smart fisheries system. The analysis results showed that YOLOv8 achieved the highest precision at 0.989, followed by YOLOv7 at 0.775 and YOLOv5m at 0.318. In terms of recall, YOLOv8 scored 0.969, YOLOv5l scored 0.845, and YOLOv7 scored 0.783. For mAP50 and mAP50-95 metrics, YOLOv8 also performed the best with scores of 0.978 and 0.834, respectively. Overall, YOLOv8 demonstrated the highest performance, indicating its strong suitability for real-time detection of toxic marine organisms. On the other hand, the YOLOv5 series showed lower performance, revealing limitations in detection under complex conditions. These findings suggest that the use of the latest YOLO model is essential for establishing an early warning system for toxic marine species.
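The precision and recall values compared above derive from true-positive, false-positive, and false-negative counts; a minimal sketch (the counts below are invented, not the study's):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=89, fp=1, fn=3)
print(round(p, 3), round(r, 3), round(f1, 3))   # → 0.989 0.967 0.978
```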
29

Do, Van-Dinh, Van-Hung Le, Huu-Son Do, Van-Nam Phan, and Trung-Hieu Te. "TQU-HG dataset and comparative study for hand gesture recognition of RGB-based images using deep learning." Indonesian Journal of Electrical Engineering and Computer Science 34, no. 3 (June 1, 2024): 1603. http://dx.doi.org/10.11591/ijeecs.v34.i3.pp1603-1617.

Abstract:
Hand gesture recognition has great applications in human-computer interaction (HCI), human-robot interaction (HRI), and supporting the deaf and mute. Building a hand gesture recognition model with high accuracy using deep learning (DL) requires training on large amounts of data spanning many different conditions and contexts. In this paper, we publish the TQU-HG dataset of large RGB images with low resolution (640×480 pixels), low light conditions, and fast capture speed (16 fps). The TQU-HG dataset includes 60,000 images collected from 20 people (10 male, 10 female) with 15 gestures of both left and right hands. We present a comparative study with two branches: i) based on MediaPipe TML and ii) based on convolutional neural networks (CNNs) (you only look once (YOLO): YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLO-NAS; single shot multibox detector (SSD) VGG16; residual network (ResNet)18, ResNet152, ResNext50; MobileNet V3 small; and MobileNet V3 large); the architecture and operation of the CNN models are also introduced in detail. We especially fine-tune the models and evaluate them on the TQU-HG and HaGRID datasets. The quantitative results of training and testing are presented (the F1-scores of YOLOv8, YOLO-NAS, MobileNet V3 small, and ResNet50 are 98.99%, 98.98%, 99.27%, and 99.36%, respectively, on the TQU-HG dataset, and 99.21%, 99.37%, 99.36%, 86.4%, and 98.3%, respectively, on the HaGRID dataset). The computation time of YOLOv8 is 6.19 fps on the CPU and 18.28 fps on the GPU.
30

Li, Qinjun, Guoyu Zhang, and Ping Yang. "CL-YOLOv8: Crack Detection Algorithm for Fair-Faced Walls Based on Deep Learning." Applied Sciences 14, no. 20 (October 16, 2024): 9421. http://dx.doi.org/10.3390/app14209421.

Abstract:
Cracks pose a critical challenge in the preservation of historical buildings worldwide, particularly in fair-faced walls, where timely and accurate detection is essential to prevent further degradation. Traditional image processing methods have proven inadequate for effectively detecting building cracks. Despite global advancements in deep learning, crack detection under diverse environmental and lighting conditions remains a significant technical hurdle, as highlighted by recent international studies. To address this challenge, we propose an enhanced crack detection algorithm, CL-YOLOv8 (ConvNeXt V2-LSKA-YOLOv8). By integrating the well-established ConvNeXt V2 model as the backbone network into YOLOv8, the algorithm benefits from advanced feature extraction techniques, leading to a superior detection accuracy. This choice leverages ConvNeXt V2’s recognized strengths, providing a robust foundation for improving the overall model performance. Additionally, by introducing the LSKA (Large Separable Kernel Attention) mechanism into the SPPF structure, the feature receptive field is enlarged and feature correlations are strengthened, further enhancing crack detection accuracy in diverse environments. This study also contributes to the field by significantly expanding the dataset for fair-faced wall crack detection, increasing its size sevenfold through data augmentation and the inclusion of additional data. Our experimental results demonstrate that CL-YOLOv8 outperforms mainstream algorithms such as Faster R-CNN, YOLOv5s, YOLOv7-tiny, SSD, and various YOLOv8n/s/m/l/x models. CL-YOLOv8 achieves an accuracy of 85.3%, a recall rate of 83.2%, and a mean average precision (mAP) of 83.7%. Compared to the YOLOv8n base model, CL-YOLOv8 shows improvements of 0.9%, 2.3%, and 3.9% in accuracy, recall rate, and mAP, respectively. 
These results underscore the effectiveness and superiority of CL-YOLOv8 in crack detection, positioning it as a valuable tool in the global effort to preserve architectural heritage.
31

Özcan, Büşra, and Halit Bakır. "YAPAY ZEKA DESTEKLİ BEYİN GÖRÜNTÜLERİ ÜZERİNDE TÜMÖR TESPİTİ." International Conference on Pioneer and Innovative Studies 1 (June 13, 2023): 297–306. http://dx.doi.org/10.59287/icpis.847.

Abstract:
Today, artificial intelligence applications have gained great momentum in the healthcare sector, being preferred for both speed and accuracy. In this study, the most current object detection algorithms were applied to detect brain tumors, which play a critical role in human life, and the best-performing model was identified. The popular and up-to-date YOLO models YOLOv5, YOLOv7, and YOLOv8 were applied. Before training, color transformations, histogram equalization, color-channel separation, and filtering on those channels were performed on the images to improve brain tumor detection, and the most distinct result was applied to all images. As a result of these operations, the most recent YOLO model, YOLOv8, gave the best result with an mAP value of 87%.
32

Fudholi, Dhomas Hatta, Arrie Kurniawardhani, Gabriel Imam Andaru, Ahmad Azzam Alhanafi, and Nabil Najmudin. "YOLO-based Small-scaled Model for On-Shelf Availability in Retail." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 8, no. 2 (April 25, 2024): 265–71. http://dx.doi.org/10.29207/resti.v8i2.5600.

Abstract:
On-shelf availability (OSA) in the retail industry plays a crucial role in continuous sales. Product unavailability may leave a bad impression on customers and reduce sales, so the retail industry must keep pace with rapidly advancing technology to thrive in an increasingly competitive market. With the technological advances of recent decades, Artificial Intelligence has begun to be applied to support OSA, particularly through object detection. In this research, we develop small-scaled object detection models based on four versions of the You Only Look Once (YOLO) algorithm, namely YOLOv5-nano, YOLOv6-nano, YOLOv7-tiny, and YOLOv8-nano. The developed model can be used to support automatic detection of OSA. The small-scale models are intended for practical deployment in low-cost mobile applications, and we use quantization (INT8 and FP16) to reduce model size; this small-scale implementation also offers deployment flexibility. With a total of 7697 milk-based retail product images and 125 different product classes, the experimental results show that the developed YOLOv8-nano model, with a mAP50 score of 0.933 and an inference time of 13.4 ms, achieved the best performance.
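The INT8 quantization mentioned above maps 32-bit float weights to 8-bit integers with a shared scale, roughly quartering storage at the cost of a small rounding error. A hedged, minimal sketch of symmetric per-tensor quantization (the weight values are invented):

```python
def quantize_int8(weights):
    """Map floats to int8 codes with a per-tensor scale; returns (codes, scale)."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.31, -0.74, 0.05, 1.27]
q, scale = quantize_int8(weights)
print(q)   # → [31, -74, 5, 127]
print(max(abs(w - r) for w, r in zip(weights, dequantize(q, scale))))
# rounding error is at most scale / 2 (0.005 here)
```

Production toolchains (e.g., FP16/INT8 export paths) add calibration and per-channel scales, but the storage/accuracy trade-off is the same idea.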
33

Tsiunyk, B. S., and O. V. Muliarevych. "PERFORMANCE EVALUATION AND OPTIMIZATION OF YOLOV8 NEURAL NETWORK MODELS FOR TARGET RECOGNITION." Computer systems and network 6, no. 2 (December 2024): 239–49. https://doi.org/10.23939/csn2024.02.239.

Abstract:
The objective of this research is to conduct a comprehensive performance analysis of various types of neural network (NN) models for target recognition. Specifically, this study focuses on evaluating the effectiveness and efficiency of yolov8n, yolov8s, yolov8m, and YOLO models in target recognition tasks. Leveraging cutting-edge technologies such as OpenCV, Python, and Roboflow 3.0 FAST, the research aims to develop a robust methodology for assessing the performance of these NN models. The methodology includes the design and implementation of experiments to measure key metrics such as accuracy, speed, and resource utilization. Through meticulous analysis, this study aims to provide insights into the strengths and weaknesses of each model, facilitating informed decision-making for practical applications. This paper presents the process of designing and conducting the performance analysis, highlighting the rationale behind the selection of specific technologies and methodologies. Furthermore, the study discusses the implications of the findings for future developments in target recognition systems. Keywords: yolov8, YOLO, OpenCV, NN model.
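One of the metrics in such a methodology, speed, is typically measured as average wall-clock time per inference and reported as frames per second (FPS). A sketch with a stand-in workload (no real model involved; the function names are illustrative):

```python
import time

def measure_fps(infer, frame, warmup=5, runs=50):
    """Average per-call latency of infer(frame), returned as FPS."""
    for _ in range(warmup):              # warm-up iterations, excluded from timing
        infer(frame)
    start = time.perf_counter()
    for _ in range(runs):
        infer(frame)
    per_frame = (time.perf_counter() - start) / runs
    return 1.0 / per_frame

def dummy_inference(frame):              # stand-in workload, not a real detector
    return sum(x * x for x in frame)

print(f"{measure_fps(dummy_inference, list(range(1000))):.0f} FPS")
```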
34

Wijaya, Ryan Satria, Santonius Santonius, Anugerah Wibisana, Eko Rudiawan Jamzuri, and Mochamad Ari Bagus Nugroho. "Comparative Study of YOLOv5, YOLOv7 and YOLOv8 for Robust Outdoor Detection." Journal of Applied Electrical Engineering 8, no. 1 (June 24, 2024): 37–43. http://dx.doi.org/10.30871/jaee.v8i1.7207.

Abstract:
Object detection is one of the most popular applications among young people, especially millennials and Generation Z. Its use has become widespread in various aspects of daily life, such as face recognition, traffic management, and autonomous vehicles. Performing object detection requires large and complex datasets. Therefore, this research addresses which object detection algorithms are suitable for object detection. In this research, I compare the performance of several popular algorithms, namely the YOLOv5, YOLOv7, and YOLOv8 models. By conducting several experiments, covering detection results, distance-travelled experiments, confusion matrices, and evaluation on a validation dataset, I aim to provide insight into the advantages and disadvantages of these algorithms. This comparison will help young researchers choose the most suitable algorithm for their object detection task.
35

Huang, Yiqi, Hongtao Huang, Feng Qin, Ying Chen, Jianghua Zou, Bo Liu, Zaiyuan Li, et al. "YOLO-IAPs: A Rapid Detection Method for Invasive Alien Plants in the Wild Based on Improved YOLOv9." Agriculture 14, no. 12 (December 2, 2024): 2201. https://doi.org/10.3390/agriculture14122201.

Abstract:
Invasive alien plants (IAPs) present a significant threat to ecosystems and agricultural production, necessitating rigorous monitoring and detection for effective management and control. To realize accurate and rapid detection of invasive alien plants in the wild, we proposed a rapid detection approach grounded in an advanced YOLOv9, referred to as YOLO-IAPs, which incorporated several key enhancements to YOLOv9, including replacing the down-sampling layers in the model’s backbone with a DynamicConv module, integrating a Triplet Attention mechanism into the model, and replacing the original CIoU with the MPDIoU. These targeted enhancements collectively resulted in a substantial improvement in the model’s accuracy and robustness. Extensive training and testing on a self-constructed dataset demonstrated that the proposed model achieved an accuracy of 90.7%, with the corresponding recall, mAP50, and mAP50:95 measured at 84.3%, 91.2%, and 65.1%, and a detection speed of 72 FPS. Compared to the baseline, the proposed model showed increases of 0.2% in precision, 3.5% in recall, and 1.0% in mAP50. Additionally, YOLO-IAPs outperformed other state-of-the-art object detection models, including the YOLOv5, YOLOv6, YOLOv7, YOLOv8, and YOLOv10 series, Faster R-CNN, SSD, CenterNet, and RetinaNet, demonstrating superior detection capabilities. Ablation studies further confirmed that the proposed model was effective, contributing to the overall improvement in performance, which underscored its pre-eminence in the domain of invasive alien plant detection and offered a marked improvement in detection accuracy over traditional methodologies. The findings suggest that the proposed approach has the potential to advance the technological landscape of invasive plant monitoring.
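The MPDIoU loss adopted in the entry above extends plain intersection-over-union between predicted and ground-truth boxes with corner-distance penalty terms. A minimal sketch of the underlying IoU for axis-aligned (x1, y1, x2, y2) boxes (the MPDIoU penalty terms are omitted here):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle: rightmost left edge to leftmost right edge.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap 1, union 7 -> 1/7 ~= 0.143
```

IoU-based losses (CIoU, SIoU, MPDIoU, and variants) all start from this quantity and differ in the penalty terms they add for center distance, aspect ratio, or corner distance.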
36

Mao, Makara, Ahyoung Lee, and Min Hong. "Efficient Fabric Classification and Object Detection Using YOLOv10." Electronics 13, no. 19 (September 28, 2024): 3840. http://dx.doi.org/10.3390/electronics13193840.

Abstract:
The YOLO (You Only Look Once) series is renowned for its real-time object detection capabilities in images and videos. It is highly relevant in industries like textiles, where speed and accuracy are critical. In the textile industry, accurate fabric type detection and classification are essential for improving quality control, optimizing inventory management, and enhancing customer satisfaction. This paper proposes a new approach using the YOLOv10 model, which offers enhanced detection accuracy, processing speed, and detection of the torn path in each type of fabric. We developed and utilized a specialized, annotated dataset featuring diverse textile samples, including cotton, hanbok, cotton yarn-dyed, and cotton blend plain fabrics, to detect the torn path in fabric. The YOLOv10 model was selected for its superior performance, leveraging advancements in deep learning architecture and applying data augmentation techniques to improve adaptability and generalization to the various textile patterns and textures. Through comprehensive experiments, we demonstrate the effectiveness of YOLOv10, which achieved an accuracy of 85.6% and outperformed previous YOLO variants in both precision and processing speed. Specifically, YOLOv10 showed a 2.4% improvement over YOLOv9, 1.8% over YOLOv8, 6.8% over YOLOv7, 5.6% over YOLOv6, and 6.2% over YOLOv5. These results underscore the significant potential of YOLOv10 in automating fabric detection processes, thereby enhancing operational efficiency and productivity in textile manufacturing and retail.
37

Zhang, Du, Kerang Cao, Kai Han, Changsu Kim, and Hoekyung Jung. "PAL-YOLOv8: A Lightweight Algorithm for Insulator Defect Detection." Electronics 13, no. 17 (September 3, 2024): 3500. http://dx.doi.org/10.3390/electronics13173500.

Abstract:
To address the challenges of high model complexity and low accuracy in detecting small targets in insulator defect detection using UAV aerial imagery, we propose a lightweight algorithm, PAL-YOLOv8. Firstly, the baseline model, YOLOv8n, is enhanced by incorporating the PKI Block from PKINet to improve the C2f module, effectively reducing the model complexity and enhancing feature extraction capabilities. Secondly, ADown from YOLOv9 is employed in the backbone and neck for downsampling, which retains more feature information while reducing the feature map size, thus improving the detection accuracy. Additionally, Focaler-SIoU is used as the bounding-box regression loss function to improve model performance by focusing on different regression samples. Finally, pruning is applied to the improved model to further reduce its size. The experimental results show that PAL-YOLOv8 achieves an mAP50 of 95.0%, which represents increases of 5.5% and 2.6% over YOLOv8n and YOLOv9t, respectively. Furthermore, the computational cost is only 3.9 GFLOPs, the model size is just 2.7 MB, and the parameter count is only 1.24 × 10⁶.
38

Wang, Zejun, Shihao Zhang, Lijiao Chen, Wendou Wu, Houqiao Wang, Xiaohui Liu, Zongpei Fan, and Baijuan Wang. "Microscopic Insect Pest Detection in Tea Plantations: Improved YOLOv8 Model Based on Deep Learning." Agriculture 14, no. 10 (October 2, 2024): 1739. http://dx.doi.org/10.3390/agriculture14101739.

Abstract:
Pest infestations in tea gardens are one of the common issues encountered during tea cultivation. This study introduces an improved YOLOv8 network model for the detection of tea pests to facilitate the rapid and accurate identification of early-stage micro-pests, addressing challenges such as small datasets and the difficulty of extracting phenotypic features of target pests in tea pest detection. Based on the original YOLOv8 network framework, this study adopts the SIoU optimized loss function to enhance the model’s learning ability for pest samples. AKConv is introduced to replace certain network structures, enhancing feature extraction capabilities and reducing the number of model parameters. Vision Transformer with Bi-Level Routing Attention is embedded to provide the model with a more flexible computation allocation and improve its ability to capture target position information. Experimental results show that the improved YOLOv8 network achieves a detection accuracy of 98.16% for tea pest detection, which is a 2.62% improvement over the original YOLOv8 network. Compared with the YOLOv10, YOLOv9, YOLOv7, Faster R-CNN, and SSD models, the improved YOLOv8 network has increased the mAP value by 3.12%, 4.34%, 5.44%, 16.54%, and 11.29%, respectively, enabling fast and accurate identification of early-stage micro-pests in tea gardens. This study proposes an improved YOLOv8 network model based on deep learning for the detection of micro-pests in tea, providing a viable research method and significant reference for addressing the identification of micro-pests in tea. It offers an effective pathway for the high-quality development of Yunnan’s ecological tea industry and ensures the healthy growth of the tea industry.
39

He, Jian, Wei Teng, Zeyu Zhao, Binche Liu, Bing Qin, and Jun Jiang. "Research on the Detection of Traffic Flow based on Video Images." Frontiers in Computing and Intelligent Systems 7, no. 2 (March 11, 2024): 75–79. http://dx.doi.org/10.54097/yna4dt18.

Abstract:
Based on the current level of social development, the demand for cars has increased rapidly. At present, the total number of motor vehicles and drivers in China ranks first in the world. With the rapid development of deep learning, video-based vehicle flow statistics can directly use existing traffic monitoring cameras to detect vehicles, but traffic flow detection based on algorithms such as YOLOv1, YOLOv2, YOLOv3, and YOLOv4 suffers from problems such as insufficient accuracy and low efficiency. Therefore, this paper proposes to use YOLOv5 to replace the original algorithm to achieve object detection, tracking, and processing, improving the efficiency of traffic flow statistics.
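Video-based traffic flow statistics of the kind described above typically pair per-frame detections with tracking and a virtual counting line. A minimal counting sketch over already-tracked centroid trajectories (detection and tracking are assumed to happen upstream; the trajectories below are hypothetical):

```python
def count_crossings(tracks, line_y):
    """Count tracks whose centroid crosses a horizontal line top-to-bottom.

    tracks: dict of track_id -> list of (x, y) centroids in frame order.
    """
    count = 0
    for trajectory in tracks.values():
        ys = [y for _, y in trajectory]
        # Counted if some consecutive pair of positions straddles the line downward.
        if any(y0 < line_y <= y1 for y0, y1 in zip(ys, ys[1:])):
            count += 1
    return count

tracks = {
    1: [(10, 5), (11, 9), (12, 14)],   # moves downward across y=10 -> counted
    2: [(40, 20), (41, 15), (42, 8)],  # moves upward -> not counted
}
count_crossings(tracks, line_y=10)  # -> 1
```

A production counter would also handle tracks that jitter around the line (e.g. by requiring a minimum displacement), but the crossing test above is the core of the statistic.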
40

Kılıçkaya, Fatma Nur, Murat Taşyürek, and Celal Öztürk. "Performance evaluation of YOLOv5 and YOLOv8 models in car detection." Imaging and Radiation Research 6, no. 2 (July 1, 2024): 5757. http://dx.doi.org/10.24294/irr.v6i2.5757.

Abstract:
Vehicle detection stands out as a rapidly developing technology today and is further strengthened by deep learning algorithms. This technology is critical in traffic management, automated driving systems, security, urban planning, environmental impact, transportation, and emergency response applications. Vehicle detection, which is used in many application areas such as monitoring traffic flow, assessing density, increasing security, and detecting vehicles in automatic driving systems, makes an effective contribution to a wide range of areas, from urban planning to security measures. Moreover, the integration of this technology represents an important step in the development of smart cities and sustainable urban life. Deep learning models, especially algorithms such as You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8), show effective vehicle detection results on satellite image data. According to the comparisons, the precision and recall values of the YOLOv5 model are 1.63% and 2.49% higher, respectively, than those of the YOLOv8 model. The reason for this difference is that the YOLOv8 model performs more sensitive vehicle detection than YOLOv5. In the comparison based on the F1 score, the F1 score of YOLOv5 was measured as 0.958, while that of YOLOv8 was measured as 0.938. Disregarding the precision amounts, the difference in F1 score between YOLOv8 and YOLOv5 was found to be 0.06%.
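The F1 scores compared above are the harmonic mean of precision and recall, which rewards models that balance the two. A minimal sketch (the input values are illustrative, not the per-model counts from the cited study):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: a detector with P = 0.96 and R = 0.956.
round(f1_score(0.96, 0.956), 3)  # -> 0.958
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model can have higher precision yet a lower F1 if its recall drops more than the precision gain.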
41

Gao, Lijun, Xing Zhao, Xishen Yue, Yawei Yue, Xiaoqiang Wang, Huanhuan Wu, and Xuedong Zhang. "A Lightweight YOLOv8 Model for Apple Leaf Disease Detection." Applied Sciences 14, no. 15 (August 1, 2024): 6710. http://dx.doi.org/10.3390/app14156710.

Abstract:
China holds the top position globally in apple production and consumption. Detecting diseases during the planting process is crucial for increasing yields and promoting the rapid development of the apple industry. This study proposes a lightweight algorithm for apple leaf disease detection in natural environments, which is conducive to application on mobile and embedded devices. Our approach modifies the YOLOv8n framework to improve accuracy and efficiency. Key improvements include replacing conventional Conv layers with GhostConv and parts of the C2f structure with C3Ghost, reducing the model’s parameter count, and enhancing performance. Additionally, we integrate a Global attention mechanism (GAM) to improve lesion detection by more accurately identifying affected areas. An improved Bi-Directional Feature Pyramid Network (BiFPN) is also incorporated for better feature fusion, enabling more effective detection of small lesions in complex environments. Experimental results show a 32.9% reduction in computational complexity and a 39.7% reduction in model size to 3.8 M, with performance metrics improving by 3.4% to a mAP@0.5 of 86.9%. Comparisons with popular models like YOLOv7-Tiny, YOLOv6, YOLOv5s, and YOLOv3-Tiny demonstrate that our YOLOv8n–GGi model offers superior detection accuracy, the smallest size, and the best overall performance for identifying critical apple diseases. It can serve as a guide for implementing real-time crop disease detection on mobile and embedded devices.
42

Shamsuddin, Mohammad Amyruddin, Wan Nural Jawahir Hj Wan Yussof, Muhammad Suzuri Hitam, Ezmahamrul Afreen Awalludin, Muhammad Afiq-Firdaus Aminudin, and Zainudin Bachok. "Comparison of YOLOv7, YOLOv8, and YOLOv9 for Underwater Coral Reef Fish Detection." Asia-Pacific Journal of Information Technology and Multimedia 13, no. 2 (December 1, 2024): 204–20. http://dx.doi.org/10.17576/apjitm-2024-1302-04.

43

Suewongsuwan, Kamphon, Natchanun Angsuseranee, Prasatporn Wongkamchang, and Khongdet Phasinam. "Comparative analysis of UAV detection and tracking performance: Evaluating YOLOv5, YOLOv8, and YOLOv8 DeepSORT for enhancing anti-UAV systems." Edelweiss Applied Science and Technology 8, no. 5 (September 16, 2024): 708–26. http://dx.doi.org/10.55214/25768484.v8i5.1737.

Abstract:
This article presents a comprehensive comparative analysis of the performance of three prominent object detection and tracking models, namely YOLOv5, YOLOv8, and YOLOv8 DeepSORT, in the domain of UAV detection and tracking. The study aims to assess the effectiveness of these models in enhancing anti-UAV systems. A series of experiments were conducted using diverse datasets and evaluation metrics to evaluate the detection and tracking capabilities of each model. The results provide valuable insights into the strengths and limitations of YOLOv5, YOLOv8, and YOLOv8 DeepSORT, shedding light on their potential applications in anti-UAV systems. The findings of this study contribute to the advancement of UAV detection and tracking technologies and serve as a guide for researchers and practitioners in the field of anti-UAV systems.
44

Abdullah, Akram, Gehad Abdullah Amran, S. M. Ahanaf Tahmid, Amerah Alabrah, Ali A. AL-Bakhrani, and Abdulaziz Ali. "A Deep-Learning-Based Model for the Detection of Diseased Tomato Leaves." Agronomy 14, no. 7 (July 22, 2024): 1593. http://dx.doi.org/10.3390/agronomy14071593.

Abstract:
This study introduces a You Only Look Once (YOLO) model for detecting diseases in tomato leaves, utilizing YOLOv8s as the underlying framework. The tomato leaf images, both healthy and diseased, were obtained from the Plant Village dataset. These images were then enhanced, implemented, and trained using YOLOv8s via the Ultralytics Hub. The Ultralytics Hub provides an optimal setting for training YOLOv8 and YOLOv5 models. The YAML file was carefully programmed to identify sick leaves. The results of the detection demonstrate the resilience and efficiency of the YOLOv8s model in accurately recognizing unhealthy tomato leaves, surpassing the performance of both the YOLOv5 and Faster R-CNN models. The results indicate that YOLOv8s attained the highest mean average precision (mAP) of 92.5%, surpassing YOLOv5’s 89.1% and Faster R-CNN’s 77.5%. In addition, the YOLOv8s model is considerably smaller and demonstrates a significantly faster inference speed. The YOLOv8s model has a significantly superior frame rate, reaching 121.5 FPS, in contrast to YOLOv5’s 102.7 FPS and Faster R-CNN’s 11 FPS. This illustrates the lack of real-time detection capability in Faster R-CNN, whereas YOLOv5 is comparatively less efficient than YOLOv8s in meeting these needs. Overall, the results demonstrate that the YOLOv8s model is more efficient than the other models examined in this study for object detection.
45

Bektaş, Jale. "Evaluation of YOLOv8 Model Series with HOP for Object Detection in Complex Agriculture Domains." International Journal of Pure and Applied Sciences 10, no. 1 (June 30, 2024): 162–73. http://dx.doi.org/10.29132/ijpas.1448068.

Abstract:
In recent years, many studies have investigated YOLO models in depth for object detection in the field of agriculture. For this reason, this study focused on four datasets containing different agricultural scenarios, and 20 different trainings were carried out with the objectives of understanding the detection capabilities of YOLOv8 and HPO (hyperparameter optimization). While the Weed/Crop and Pineapple datasets reached the most accurate measurements with YOLOv8n, with mAP scores of 0.8507 and 0.9466 respectively, the prominent model for the Grapes and Pear datasets was YOLOv8l, with mAP scores of 0.6510 and 0.9641. This shows that YOLOv8n stands out when training covers multiple species or a single species at different developmental stages, while YOLOv8l naturally stands out when objects only need to be extracted from the background.
46

Tan, Shao Xian, Jia You Ong, Kah Ong Michael Goh, and Connie Tee. "Boosting Vehicle Classification with Augmentation Techniques across Multiple YOLO Versions." JOIV : International Journal on Informatics Visualization 8, no. 1 (March 31, 2024): 45. http://dx.doi.org/10.62527/joiv.8.1.2313.

Abstract:
In recent years, computer vision has experienced a surge in applications across various domains, including product and quality inspection, automatic surveillance, and robotics. This study proposes techniques to enhance vehicle object detection and classification using augmentation methods based on the YOLO (You Only Look Once) network. The primary objective of the trained model is to generate a local vehicle detection system for Malaysia which have the capacity to detect vehicles manufactured in Malaysia, adapt to the specific environmental factors in Malaysia, and accommodate varying lighting conditions prevalent in Malaysia. The dataset used for this paper to develop and evaluate the proposed system was provided by a highway company, which captured a comprehensive top-down view of the highway using a surveillance camera. Rigorous manual annotation was employed to ensure accurate annotations within the dataset. Various image augmentation techniques were also applied to enhance the dataset's diversity and improve the system's robustness. Experiments were conducted using different versions of the YOLO network, such as YOLOv5, YOLOv6, YOLOv7, and YOLOv8, each with varying hyperparameter settings. These experiments aimed to identify the optimal configuration for the given dataset. The experimental results demonstrated the superiority of YOLOv8 over other YOLO versions, achieving an impressive mean average precision of 97.9% for vehicle detection. Moreover, data augmentation effectively solves the issues of overfitting and data imbalance while providing diverse perspectives in the dataset. Future research can focus on optimizing computational efficiency for real-time applications and large-scale deployments.
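Image augmentation for detection, as applied in the entry above, must transform the box annotations together with the pixels. A minimal sketch of the label transform for a horizontal flip of (x1, y1, x2, y2) boxes (the pixel-side flip is omitted; coordinates and image width below are hypothetical):

```python
def hflip_boxes(boxes, img_w):
    """Mirror axis-aligned boxes (x1, y1, x2, y2) across the vertical axis."""
    flipped = []
    for x1, y1, x2, y2 in boxes:
        # x coordinates are mirrored; the new left edge comes from the old right edge
        # so that x1 < x2 still holds after the flip.
        flipped.append((img_w - x2, y1, img_w - x1, y2))
    return flipped

hflip_boxes([(10, 20, 50, 80)], img_w=640)  # -> [(590, 20, 630, 80)]
```

Flipping twice returns the original boxes, which is a handy sanity check when wiring augmentations into a training pipeline.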
47

Liu, Yang, Haorui Wang, Yinhui Liu, Yuanyin Luo, Haiying Li, Haifei Chen, Kai Liao, and Lijun Li. "A Trunk Detection Method for Camellia oleifera Fruit Harvesting Robot Based on Improved YOLOv7." Forests 14, no. 7 (July 15, 2023): 1453. http://dx.doi.org/10.3390/f14071453.

Abstract:
Trunk recognition is a critical technology for Camellia oleifera fruit harvesting robots, as it enables accurate and efficient detection and localization of vibration or picking points in unstructured natural environments. Traditional trunk detection methods heavily rely on the visual judgment of robot operators, resulting in significant errors and incorrect vibration point identification. In this paper, we propose a new method based on an improved YOLOv7 network for Camellia oleifera trunk detection. Firstly, we integrate an attention mechanism into the backbone and head layers of YOLOv7, enhancing feature extraction for trunks and enabling the network to focus on relevant target objects. Secondly, we design a weighted confidence loss function based on Facol-EIoU to replace the original loss function in the improved YOLOv7 network. This modification aims to enhance the detection performance specifically for Camellia oleifera trunks. Finally, trunk detection experiments and comparative analyses were conducted with YOLOv3, YOLOv4, YOLOv5, YOLOv7 and improved YOLOv7 models. The experimental results demonstrate that our proposed method achieves an mAP of 89.2%, Recall Rate of 0.94, F1 score of 0.87 and Average Detection Speed of 0.018s/pic that surpass those of YOLOv3, YOLOv4, YOLOv5 and YOLOv7 models. The improved YOLOv7 model exhibits excellent trunk detection accuracy, enabling Camellia oleifera fruit harvesting robots to effectively detect trunks in unstructured orchards.
48

Inui, Atsuyuki, Yutaka Mifune, Hanako Nishimoto, Shintaro Mukohara, Sumire Fukuda, Tatsuo Kato, Takahiro Furukawa, et al. "Detection of Elbow OCD in the Ultrasound Image by Artificial Intelligence Using YOLOv8." Applied Sciences 13, no. 13 (June 28, 2023): 7623. http://dx.doi.org/10.3390/app13137623.

Abstract:
Background: Screening for elbow osteochondritis dissecans (OCD) using ultrasound (US) is essential for early detection and successful conservative treatment. The aim of the study is to determine the diagnostic accuracy of YOLOv8, a deep-learning-based artificial intelligence model, for US images of OCD or normal elbow-joint images. Methods: A total of 2430 images were used. Using the YOLOv8 model, image classification and object detection were performed to recognize OCD lesions or standard views of normal elbow joints. Results: In the binary classification of normal and OCD lesions, the values from the confusion matrix were the following: Accuracy = 0.998, Recall = 0.9975, Precision = 1.000, and F-measure = 0.9987. The mean average precision (mAP) comparing the bounding box detected by the trained model with the true-label bounding box was 0.994 in the YOLOv8n model and 0.995 in the YOLOv8m model. Conclusions: The YOLOv8 model was trained for image classification and object detection of standard views of elbow joints and OCD lesions. Both tasks were able to be achieved with high accuracy and may be useful for mass screening at medical check-ups for baseball elbow.
49

Shamta, Ibrahim, and Batıkan Erdem Demir. "Development of a deep learning-based surveillance system for forest fire detection and monitoring using UAV." PLOS ONE 19, no. 3 (March 12, 2024): e0299058. http://dx.doi.org/10.1371/journal.pone.0299058.

Abstract:
This study presents a surveillance system developed for early detection of forest fires. Deep learning is utilized for aerial detection of fires using images obtained from a camera mounted on a designed four-rotor Unmanned Aerial Vehicle (UAV). The object detection performance of YOLOv8 and YOLOv5 was examined for identifying forest fires, and a CNN-RCNN network was constructed to classify images as containing fire or not. Additionally, this classification approach was compared with the YOLOv8 classification. Onboard NVIDIA Jetson Nano, an embedded artificial intelligence computer, is used as hardware for real-time forest fire detection. Also, a ground station interface was developed to receive and display fire-related data. Thus, access to fire images and coordinate information was provided for targeted intervention in case of a fire. The UAV autonomously monitored the designated area and captured images continuously. Embedded deep learning algorithms on the Nano board enable the UAV to detect forest fires within its operational area. The detection methods produced the following results: 96% accuracy for YOLOv8 classification, 89% accuracy for YOLOv8n object detection, 96% accuracy for CNN-RCNN classification, and 89% accuracy for YOLOv5n object detection.
50

Sama, Avinash Kaur, and Akashdeep Sharma. "Simulated uav dataset for object detection." ITM Web of Conferences 54 (2023): 02006. http://dx.doi.org/10.1051/itmconf/20235402006.

Abstract:
Unmanned Aerial Vehicles (UAVs) have become increasingly popular for various applications, including object detection. Novel detector algorithms require large datasets to improve, as they are still evolving. Additionally, in countries with restrictive drone policies, simulated datasets can provide a cost-effective and efficient alternative to real-world datasets for researchers to develop and test their algorithms in a safe and controlled environment. To address this, we propose a simulated dataset for object detection through a Gazebo simulator that covers both indoor and outdoor environments. The dataset consists of 11,103 annotated frames with 27,412 annotations, of persons and cars as the objects of interest. This dataset can be used to evaluate detector proposals for object detection, providing a valuable resource for researchers in the field. The dataset is annotated using the Dark Label software, which is a popular tool for object annotation. Additionally, we assessed the dataset’s performance using advanced object detection systems, with YOLOv3 achieving 86.9 mAP50-95, YOLOv3-tiny achieving 79.5 mAP50-95, YOLOv5 achieving 82.2 mAP50-95, YOLOv7 achieving 61.8 mAP50-95 and YOLOv8 achieving 87.8 mAP50-95. Overall, this simulated dataset is a valuable resource for researchers working in the field of object detection.
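The mAP50-95 figures reported above average AP over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05, COCO-style. A minimal sketch of that final averaging step (the per-threshold AP values are hypothetical, and computing each AP from precision-recall curves is assumed to happen upstream):

```python
def map50_95(ap_at_threshold):
    """Mean AP over IoU thresholds 0.50, 0.55, ..., 0.95 (COCO-style).

    ap_at_threshold: dict keyed by the exact threshold values below.
    """
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at_threshold[t] for t in thresholds) / len(thresholds)

# Hypothetical per-threshold APs, decreasing as the IoU bar tightens.
aps = {0.50 + 0.05 * i: 0.9 - 0.05 * i for i in range(10)}
round(map50_95(aps), 3)  # mean of 0.90, 0.85, ..., 0.45 -> 0.675
```

This is why mAP50-95 is always well below mAP50 for the same model: the stricter thresholds pull the average down.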