Journal articles on the topic "YOLO ALGORITHMS"

To see the other types of publications on this topic, follow the link: YOLO ALGORITHMS.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "YOLO ALGORITHMS".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wan, Chengjuan, Yuxuan Pang, and Shanzhen Lan. "Overview of YOLO Object Detection Algorithm." International Journal of Computing and Information Technology 2, no. 1 (August 25, 2022): 11. http://dx.doi.org/10.56028/ijcit.1.2.11.

Анотація (Abstract):
As an important research direction in the field of computer vision, object detection has developed rapidly and many mature algorithms have emerged. The YOLO (You Only Look Once) family of algorithms performs one-stage detection based on regression, which makes it preeminent in speed and gives it strong generalization across a variety of datasets. This paper gives a brief introduction to the current mainstream deep learning object detection algorithms, then focuses on the principles and optimization process of the YOLO series and summarizes the latest breakthroughs in the YOLO algorithm, in the hope of providing a reference for research on related topics.
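As context for the regression-based, one-stage detection the survey above describes, here is a minimal sketch of how a YOLO-style head turns a raw regression output into a box, following the well-known YOLOv2/v3 parameterisation; the grid cell, anchor and stride values are illustrative only, not taken from the paper.

```python
import numpy as np

def decode_yolo_box(t, cell_xy, anchor_wh, stride):
    """Decode one YOLO-style prediction (tx, ty, tw, th) into an absolute box.

    Assumes the YOLOv2/v3 parameterisation: the centre offset is squashed with a
    sigmoid relative to the grid cell, and width/height scale a prior anchor.
    """
    tx, ty, tw, th = t
    cx, cy = cell_xy                      # integer grid-cell coordinates
    pw, ph = anchor_wh                    # anchor prior, in pixels
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    bx = (sigmoid(tx) + cx) * stride      # centre x in pixels
    by = (sigmoid(ty) + cy) * stride      # centre y in pixels
    bw = pw * np.exp(tw)                  # box width in pixels
    bh = ph * np.exp(th)                  # box height in pixels
    return bx, by, bw, bh

# Example: cell (7, 4) on a stride-32 grid with a 116x90 anchor.
print(decode_yolo_box((0.2, -0.1, 0.3, 0.1), (7, 4), (116, 90), 32))
```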
2

Kadhum, Aseil Nadhum, and Aseel Nadhum Kadhum. "Literature Survey on YOLO Models for Face Recognition in Covid-19 Pandemic." June-July 2023, no. 34 (July 29, 2023): 27–35. http://dx.doi.org/10.55529/jipirs.34.27.35.

Анотація (Abstract):
Artificial intelligence and robotics are fields in which object detection algorithms are essential. In this study, YOLO and the different versions of YOLO are examined to identify the advantages and limitations of each model, as well as the similarities and differences between the versions. Improving YOLO (You Only Look Once) and CNNs (Convolutional Neural Networks) is an ongoing research direction for object detection. In this paper, each YOLO version is discussed in detail with its advantages, limitations and performance. The versions YOLO v1, YOLO v2, YOLO v3, YOLO v4, YOLO v5 and YOLO v7 are studied, and YOLO v7 is shown to outperform the other versions of the YOLO algorithm.
3

Zhou, Xuan, Jianping Yi, Guokun Xie, Yajuan Jia, Genqi Xu, and Min Sun. "Human Detection Algorithm Based on Improved YOLO v4." Information Technology and Control 51, no. 3 (September 23, 2022): 485–98. http://dx.doi.org/10.5755/j01.itc.51.3.30540.

Анотація (Abstract):
The human behavior datasets have the characteristics of complex background, diverse poses, partial occlusion, and diverse sizes. Firstly, this paper adopts YOLO v3 and YOLO v4 algorithms to detect human objects in videos, and qualitatively analyzes and compares detection performance of two algorithms on UTI, UCF101, HMDB51 and CASIA datasets. Then, this paper proposed an improved YOLO v4 algorithm since the vanilla YOLO v4 has incomplete human detection in specific video frames. Specifically, the improved YOLO v4 introduces the Ghost module in the CBM module to further reduce the number of parameters. Lateral connection is added in the CSP module to improve the feature representation capability of the network. Furthermore, we also substitute MaxPool with SoftPool in the primary SPP module, which not only avoids the feature loss, but also provides a regularization effect for the network, thus improving the generalization ability of the network. Finally, this paper qualitatively compares the detection effects of the improved YOLO v4 and primary YOLO v4 algorithm on specific datasets. The experimental results show that the improved YOLO v4 can solve the problem of complex targets in human detection tasks effectively, and further improve the detection speed.
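The abstract above replaces MaxPool with SoftPool inside the SPP module. The paper's exact layer is not reproduced here; the snippet below is only a minimal PyTorch sketch of the standard SoftPool operation (exponentially weighted averaging over each pooling window), which is the property that avoids the feature loss mentioned above.

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x, kernel_size, stride=None):
    """SoftPool: exponentially weighted average over each pooling window.

    Unlike max pooling, every activation in the window contributes with a
    softmax-style weight, so less information is discarded.
    """
    stride = stride or kernel_size
    w = torch.exp(x)                                   # exponential weights
    num = F.avg_pool2d(x * w, kernel_size, stride)     # mean of x * e^x
    den = F.avg_pool2d(w, kernel_size, stride)         # mean of e^x
    return num / den                                   # the 1/n factors cancel

x = torch.randn(1, 3, 8, 8)
print(soft_pool2d(x, kernel_size=2).shape)             # torch.Size([1, 3, 4, 4])
```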
4

Liu, Tao, Bo Pang, Lei Zhang, Wei Yang, and Xiaoqiang Sun. "Sea Surface Object Detection Algorithm Based on YOLO v4 Fused with Reverse Depthwise Separable Convolution (RDSC) for USV." Journal of Marine Science and Engineering 9, no. 7 (July 7, 2021): 753. http://dx.doi.org/10.3390/jmse9070753.

Анотація (Abstract):
Unmanned surface vehicles (USVs) have been extensively used in various dangerous maritime tasks. Vision-based sea surface object detection algorithms can improve the environment perception abilities of USVs. In recent years, the object detection algorithms based on neural networks have greatly enhanced the accuracy and speed of object detection. However, the balance between speed and accuracy is a difficulty in the application of object detection algorithms for USVs. Most of the existing object detection algorithms have limited performance when they are applied in the object detection technology for USVs. Therefore, a sea surface object detection algorithm based on You Only Look Once v4 (YOLO v4) was proposed. Reverse Depthwise Separable Convolution (RDSC) was developed and applied to the backbone network and feature fusion network of YOLO v4. The number of weights of the improved YOLO v4 is reduced by more than 40% compared with the original number. A large number of ablation experiments were conducted on the improved YOLO v4 in the sea ship dataset SeaShips and a buoy dataset SeaBuoys. The experimental results showed that the detection speed of the improved YOLO v4 increased by more than 20%, and mAP increased by 1.78% and 0.95%, respectively, in the two datasets. The improved YOLO v4 effectively improved the speed and accuracy in the sea surface object detection task. The improved YOLO v4 algorithm fused with RDSC has a smaller network size and better real-time performance. It can be easily applied in the hardware platforms with weak computing power and has shown great application potential in the sea surface object detection.
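For readers unfamiliar with the building block behind RDSC: the sketch below shows a plain depthwise-separable convolution in PyTorch, which is where the large parameter savings quoted above come from. The specific "reverse" arrangement used in the paper is not reproduced; this is only the generic block, with illustrative channel counts.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise-separable convolution: a per-channel (depthwise) 3x3 convolution
    followed by a 1x1 pointwise convolution, instead of one dense 3x3 convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
# Far fewer weights than a dense 3x3 convolution with the same channel counts.
print(sum(p.numel() for p in block.parameters()))
```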
5

Chen, Xin, Peng Shi, and Yi Hu. "A Precise Semantic Segmentation Model for Seabed Sediment Detection Using YOLO-C." Journal of Marine Science and Engineering 11, no. 7 (July 24, 2023): 1475. http://dx.doi.org/10.3390/jmse11071475.

Анотація (Abstract):
Semantic segmentation methods have been successfully applied in seabed sediment detection. However, fast models like YOLO only produce rough segmentation boundaries (rectangles), while precise models like U-Net require too much time. In order to achieve fast and precise semantic segmentation results, this paper introduces a novel model called YOLO-C. It utilizes the full-resolution classification features of the semantic segmentation algorithm to generate more accurate regions of interest, enabling rapid separation of potential targets and achieving region-based partitioning and precise object boundaries. YOLO-C surpasses existing methods in terms of accuracy and detection scope. Compared to U-Net, it achieves an impressive 15.17% improvement in mean pixel accuracy (mPA). With a processing speed of 98 frames per second, YOLO-C meets the requirements of real-time detection and provides accurate size estimation through segmentation. Furthermore, it achieves a mean average precision (mAP) of 58.94% and a mean intersection over union (mIoU) of 70.36%, outperforming industry-standard algorithms such as YOLOX. Because of the good performance in both rapid processing and high precision, YOLO-C can be effectively utilized in real-time seabed exploration tasks.
6

Cong, Xiaohan, Shixin Li, Fankai Chen, Chen Liu, and Yue Meng. "A Review of YOLO Object Detection Algorithms based on Deep Learning." Frontiers in Computing and Intelligent Systems 4, no. 2 (June 25, 2023): 17–20. http://dx.doi.org/10.54097/fcis.v4i2.9730.

Анотація (Abstract):
Object detection is a research hotspot in the field of computer vision, and YOLO series shows good performance in object detection, and has been widely used in robot vision, unmanned driving and other fields in recent years. This paper first introduces the YOLO series algorithm, including the principle, innovation points, advantages and disadvantages of various algorithms, then introduces the application field of YOLO series, and finally analyzes its future development trend to provide reference for the topic research.
7

Karmakar, Malay. "Face Recognition Technique using YOLO V5 Algorithm." International Research Journal of Computer Science 10, no. 03 (March 31, 2023): 04–12. http://dx.doi.org/10.26562/irjcs.2023.v1002.01.

Анотація (Abstract):
Face recognition has become very important in real-world applications such as human-machine interaction and security surveillance. The steps to be followed for face recognition are data collection, preprocessing, feature extraction, training, evaluation and finally testing. One of the most widely used algorithms for face detection is the Viola-Jones algorithm, which is highly accepted because of its fast processing time and high detection rate. Other detection algorithms that can be used are the HOG (Histogram of Oriented Gradients) algorithm, deep learning CNN (Convolutional Neural Network) algorithms, the Haar cascade algorithm and the MTCNN algorithm. In this paper we discuss all of these algorithms and compare the detection rate of facial points among the various methods on the datasets. The YOLO V5 algorithm is another algorithm that can be used for face recognition. Face recognition can be achieved by combining the YOLO V5 algorithm with additional techniques such as deep face recognition models like FaceNet, which can recognize and compare facial features to identify individuals. YOLO V5 can identify facial features such as eyes, mouth, and nose, making it useful for a variety of face recognition applications.
8

Gao, Ruizhen, Shuai Zhang, Haoqian Wang, Jingjun Zhang, Hui Li, and Zhongqi Zhang. "The Aeroplane and Undercarriage Detection Based on Attention Mechanism and Multi-Scale Features Processing." Mobile Information Systems 2022 (September 19, 2022): 1–12. http://dx.doi.org/10.1155/2022/2582288.

Анотація (Abstract):
Undercarriage device is one of the essential parts of an aeroplane, and accurate detection of whether the aeroplane undercarriage is operating normally can effectively avoid aeroplane accidents. To address the problems of low automation and low accuracy of small target detection in existing aeroplane undercarriage detection methods, an improved algorithm for aeroplane undercarriage detection YOLO V4 is proposed. Firstly, the convolutional network structure of Inception-ResNet is integrated into the CSPDarkNet53 framework to improve the algorithm’s ability to extract semantic information of target features; then an attention mechanism is added to the path aggregation network algorithm structure to improve the importance and relevance of different features after conceptual operations. In addition, aeroplane and undercarriage datasets were constructed, and finally, the generated partitioned test sets were tested to evaluate the test performance of Faster R-CNN, YOLO V3, and YOLO V4 target detection algorithms. The experimental results show that the improved algorithm has significantly improved the recall rate and the mean accuracy of detection for small targets in our dataset compared with the YOLO V4 algorithm. The reasonableness and advancedness of the improved algorithm in this paper are effectively verified.
9

Liu, Jiayi, Xingfei Zhu, Xingyu Zhou, Shanhua Qian, and Jinghu Yu. "Defect Detection for Metal Base of TO-Can Packaged Laser Diode Based on Improved YOLO Algorithm." Electronics 11, no. 10 (May 13, 2022): 1561. http://dx.doi.org/10.3390/electronics11101561.

Анотація (Abstract):
Defect detection is an important part of the manufacturing process of mechanical products. In order to detect the appearance defects quickly and accurately, a method of defect detection for the metal base of TO-can packaged laser diode (metal TO-base) based on the improved You Only Look Once (YOLO) algorithm named YOLO-SO is proposed in this study. Firstly, convolutional block attention mechanism (CBAM) module was added to the convolutional layer of the backbone network. Then, a random-paste-mosaic (RPM) small object data augmentation module was proposed on the basis of Mosaic algorithm in YOLO-V5. Finally, the K-means++ clustering algorithm was applied to reduce the sensitivity to the initial clustering center, making the positioning more accurate and reducing the network loss. The proposed YOLO-SO model was compared with other object detection algorithms such as YOLO-V3, YOLO-V4, and Faster R-CNN. Experimental results demonstrated that the YOLO-SO model reaches 84.0% mAP, 5.5% higher than the original YOLO-V5 algorithm. Moreover, the YOLO-SO model had clear advantages in terms of the smallest weight size and detection speed of 25 FPS. These advantages make the YOLO-SO model more suitable for the real-time detection of metal TO-base appearance defects.
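The K-means++ step mentioned above is a common way to derive anchor priors for YOLO-style detectors. Below is a minimal sketch using plain k-means++ on box width/height with scikit-learn; the paper's own clustering setup (and possibly an IoU-based distance) may differ, and the box sizes are random toy data.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(box_wh, n_anchors=9, seed=0):
    """Cluster ground-truth (width, height) pairs into anchor priors.

    Plain k-means++ on (w, h); YOLO papers often use 1 - IoU as the distance
    instead, which this sketch does not implement.
    """
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=seed)
    km.fit(box_wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]   # sort by area

# Toy example with random box sizes (in pixels).
rng = np.random.default_rng(0)
wh = rng.uniform(10, 300, size=(500, 2))
print(cluster_anchors(wh, n_anchors=6))
```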
10

Li, Zhuang, Jianhui Yuan, Guixiang Li, Hao Wang, Xingcan Li, Dan Li, and Xinhua Wang. "RSI-YOLO: Object Detection Method for Remote Sensing Images Based on Improved YOLO." Sensors 23, no. 14 (July 14, 2023): 6414. http://dx.doi.org/10.3390/s23146414.

Анотація (Abstract):
With the continuous development of deep learning technology, object detection has received extensive attention across various computer fields as a fundamental task of computational vision. Effective detection of objects in remote sensing images is a key challenge, owing to their small size and low resolution. In this study, a remote sensing image detection (RSI-YOLO) approach based on the YOLOv5 target detection algorithm is proposed, which has been proven to be one of the most representative and effective algorithms for this task. The channel attention and spatial attention mechanisms are used to strengthen the features fused by the neural network. The multi-scale feature fusion structure of the original network based on a PANet structure is improved to a weighted bidirectional feature pyramid structure to achieve more efficient and richer feature fusion. In addition, a small object detection layer is added, and the loss function is modified to optimise the network model. The experimental results from four remote sensing image datasets, such as DOTA and NWPU-VHR 10, indicate that RSI-YOLO outperforms the original YOLO in terms of detection performance. The proposed RSI-YOLO algorithm demonstrated superior detection performance compared to other classical object detection algorithms, thus validating the effectiveness of the improvements introduced into the YOLOv5 algorithm.
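The "weighted bidirectional feature pyramid" mentioned above fuses feature maps with learnable, normalised weights, in the style popularised by BiFPN. The snippet below is a minimal sketch of one such fusion node, not the authors' exact layer; it assumes the inputs have already been resized to a common shape.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fast normalised weighted fusion of same-shape feature maps (BiFPN-style).

    Each input gets a learnable non-negative weight, so the network can learn
    how much each pyramid level contributes to the fused feature.
    """
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):               # feats: list of tensors, same shape
        w = F.relu(self.w)                  # keep weights non-negative
        w = w / (w.sum() + self.eps)        # normalise to sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))

fuse = WeightedFusion(2)
a, b = torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40)
print(fuse([a, b]).shape)                   # torch.Size([1, 64, 40, 40])
```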
11

Wu, Wentong, Han Liu, Lingling Li, Yilin Long, Xiaodong Wang, Zhuohua Wang, Jinglun Li, and Yi Chang. "Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image." PLOS ONE 16, no. 10 (October 29, 2021): e0259283. http://dx.doi.org/10.1371/journal.pone.0259283.

Анотація (Abstract):
This exploration primarily aims to jointly apply the local FCN (fully convolutional neural network) and YOLO-v5 (You Only Look Once-v5) to the detection of small targets in remote sensing images. Firstly, the application effects of R-CNN (Region-Convolutional Neural Network), FRCN (Fast Region-Convolutional Neural Network), and R-FCN (Region-Based Fully Convolutional Network) in image feature extraction are analyzed after introducing the relevant region proposal network. Secondly, the YOLO-v5 algorithm is established on the basis of the YOLO algorithm. Besides, the multi-scale anchor mechanism of Faster R-CNN is utilized to improve the detection ability of the YOLO-v5 algorithm for small targets in the image in the process of image detection, and to realize the high adaptability of the YOLO-v5 algorithm to different sizes of images. Finally, the proposed detection method, the YOLO-v5 algorithm + R-FCN, is compared with other algorithms on the NWPU VHR-10 data set and the Vaihingen data set. The experimental results show that the YOLO-v5 + R-FCN detection method has the optimal detection ability among many algorithms, especially for small targets in remote sensing images such as tennis courts, vehicles, and storage tanks. Moreover, the YOLO-v5 + R-FCN detection method can achieve high recall rates for different types of small targets. Furthermore, due to the deeper network architecture, the YOLO-v5 + R-FCN detection method has a stronger ability to extract the characteristics of image targets in the detection of remote sensing images. Meanwhile, it can achieve more accurate feature recognition and detection performance for the densely arranged target images in remote sensing images. This research can provide a reference for the application of remote sensing technology in China, and promote the application of satellites for target detection tasks in related fields.
12

Wei, Jian, Qinzhao Wang, and Zixu Zhao. "YOLO-G: Improved YOLO for cross-domain object detection." PLOS ONE 18, no. 9 (September 11, 2023): e0291241. http://dx.doi.org/10.1371/journal.pone.0291241.

Анотація (Abstract):
Cross-domain object detection is a key problem in the research of intelligent detection models. Unlike many improved algorithms based on two-stage detection models, we try another way: a simple and efficient one-stage model is introduced in this paper, comprehensively considering inference efficiency and detection precision and expanding the scope of cross-domain object detection problems that can be addressed. We name this gradient-reverse-layer-based model YOLO-G; it greatly improves object detection precision in cross-domain scenarios. Specifically, we add a feature alignment branch following the backbone, to which a gradient reverse layer and a classifier are attached. With only a small increase in computation, the performance is considerably enhanced. Experiments such as Cityscapes→Foggy Cityscapes, SIM10k→Cityscapes, PASCAL VOC→Clipart, and so on, indicate that, compared with most state-of-the-art (SOTA) algorithms, the proposed model achieves much better mean Average Precision (mAP). Furthermore, ablation experiments were performed on the 4 components to confirm the reliability of the model. The project is available at https://github.com/airy975924806/yolo-G.
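The core trick mentioned above, the gradient reverse layer, is simple enough to sketch on its own: it is the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the backbone is pushed toward domain-invariant features while a small classifier tries to tell domains apart. A minimal PyTorch version follows; YOLO-G's surrounding branch architecture is not reproduced, and the feature tensor is a placeholder.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; gradient multiplied by -lambda in backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # no gradient for lamb itself

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

feat = torch.randn(4, 256, requires_grad=True)
loss = grad_reverse(feat, lamb=0.5).sum()      # stand-in for a domain classifier loss
loss.backward()
print(feat.grad[0, :3])                        # gradients are sign-flipped and scaled
```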
13

Li, Yanyi, Jian Wang, Jin Huang, and Yuping Li. "Research on Deep Learning Automatic Vehicle Recognition Algorithm Based on RES-YOLO Model." Sensors 22, no. 10 (May 16, 2022): 3783. http://dx.doi.org/10.3390/s22103783.

Анотація (Abstract):
With the introduction of concepts such as ubiquitous mapping, mapping-related technologies are gradually applied in autonomous driving and target recognition. There are many problems in vision measurement and remote sensing, such as difficulty in automatic vehicle discrimination, high missing rates under multiple vehicle targets, and sensitivity to the external environment. This paper proposes an improved RES-YOLO detection algorithm to solve these problems and applies it to the automatic detection of vehicle targets. Specifically, this paper improves the detection effect of the traditional YOLO algorithm by selecting optimized feature networks and constructing adaptive loss functions. The BDD100K data set was used for training and verification. Additionally, the optimized YOLO deep learning vehicle detection model is obtained and compared with recent advanced target recognition algorithms. Experimental results show that the proposed algorithm can automatically identify multiple vehicle targets effectively and can significantly reduce missing and false rates, with the local optimal accuracy of up to 95% and the average accuracy above 86% under large data volume detection. The average accuracy of our algorithm is higher than all five other algorithms including the latest SSD and Faster-RCNN. In average accuracy, the RES-YOLO algorithm for small data volume and large data volume is 1.0% and 1.7% higher than the original YOLO. In addition, the training time is shortened by 7.3% compared with the original algorithm. The network is then tested with five types of local measured vehicle data sets and shows satisfactory recognition accuracy under different interference backgrounds. In short, the method in this paper can complete the task of vehicle target detection under different environmental interferences.
14

Yang, Zhonglai. "Intelligent Recognition of Traffic Signs Based on Improved YOLO v3 Algorithm." Mobile Information Systems 2022 (September 20, 2022): 1–11. http://dx.doi.org/10.1155/2022/7877032.

Анотація (Abstract):
In recent years, assisted driving and autonomous driving technology have received growing public attention. Road sign recognition is of great practical significance for the realization of autonomous driving technology. In the actual traffic environment, traffic signs are small, have low resolution and unclear characteristics, and are easily disturbed by the environment. In order to better realize road traffic sign recognition, this paper improves and optimizes the YOLO v3 network, augments the traffic sign data using color enhancement and other techniques, and improves the original FPN structure of the YOLO v3 network to 52 × 52; a 108 × 108 secondary-sampling output feature map of the YOLO v3 network is then used to overcome the difficulties of picture size and image distortion. Fixed-size pooling kernels of 5, 9 and 13 are placed in front of the detection head, and their outputs are combined with the original image features so that inputs of different sizes can obtain the same output. Finally, a clustering algorithm is used to group the TT100K traffic sign data set, the original network parameters are re-estimated, and the YOLO v3 network model and the improved YOLO v3 network model are compared as small-target detection algorithms on the TT100K data set. The results show that, compared with the traditional YOLO v3 algorithm, the optimized YOLO v3 road sign recognition algorithm achieves significant improvements in sign recognition accuracy, sign recognition speed, and learning cost. While the change in FPS is very small, the recall rate and accuracy are greatly improved. At the same time, compared with other small-target detection algorithms, the improved YOLO v3 algorithm offers more accurate and faster detection.
15

He, Guowen, Wenlong Wang, Bowen Shi, Shijie Liu, Hui Xiang, and Xiaoyuan Wang. "An Improved YOLO v4 Algorithm-based Object Detection Method for Maritime Vessels." International Journal of Science and Engineering Applications 11, no. 04 (April 2022): 50–55. http://dx.doi.org/10.7753/ijsea1104.1001.

Анотація (Abstract):
Ship object detection is the core part of the maritime intelligent ship safety assistance technology, which plays a crucial role in ship safety. The object detection algorithm based on the convolutional neural network has greatly improved the accuracy and speed of object detection, which YOLO algorithm stands out among the object detection algorithms with more excellent robustness, detection accuracy, and real-time performance. Based on the YOLO v4 algorithm, this study uses the k-means algorithm to improve clustering at the input side of image data and introduces relevant berth data in the self-organized dataset to achieve detection of ships and berths for the lack of detection of berths in the existing ship detection algorithm. The experimental results show that the mAP and F1-score of the improved YOLO v4 are increased by 2.79% and 0.80%, respectively. The improved YOLO v4 algorithm effectively improves the accuracy of ship object detection, and the in-port berth also achieves better detection results and improves the ship environment perception, which is important in assisting berthing and unberthing.
16

Hu, Xiao, Shenfu Pan, Dongdong Li, Long Feng, and Yuan Zhao. "An airborne object detection and location system based on deep inference." Journal of Physics: Conference Series 2632, no. 1 (November 1, 2023): 012019. http://dx.doi.org/10.1088/1742-6596/2632/1/012019.

Анотація (Abstract):
In recent years, with the development of sensors, communication networks, and deep learning, drones have been widely used in the fields of object detection, tracking, and positioning. However, task execution is often inefficient, and some complex algorithms still need to rely on large servers, which is intolerable in rescue and traffic-scheduling tasks. Designing fast algorithms that can run on the airborne computer can effectively solve this problem. In this paper, an object detection and location system for drones is proposed. We combine the improved object detection algorithm ST-YOLO, based on YOLOX and the Swin Transformer, with a visual positioning algorithm and deploy it on the airborne end by using TensorRT to realize the detection and location of objects during the flight of the drone. Field experiments show that the established system and algorithm are effective.
17

Asif, Muhammad, Tabarka Rajab, Samreen Hussain, Munaf Rashid, Sarwar Wasi, Areeb Ahmed, and Kehkashan Kanwal. "Performance Evaluation of Deep Learning Algorithm Using High-End Media Processing Board in Real-Time Environment." Journal of Sensors 2022 (December 7, 2022): 1–13. http://dx.doi.org/10.1155/2022/6335118.

Анотація (Abstract):
Implementing an image-processing-based artificial intelligence algorithm is a critical task, and it requires careful examination when selecting the algorithm and the processing unit. With the advancement of technology, researchers have developed many algorithms to achieve high accuracy at minimum processing requirements. On the other hand, cost-effective high-end graphical processing units (GPUs) are now available to handle complex processing tasks. However, the optimum configurations of the various deep learning algorithms implemented on GPUs are yet to be investigated. In this work, we tested Convolutional Neural Network (CNN) based You Only Look Once (YOLO) variants on the NVIDIA Jetson Xavier to identify the compatibility between the GPU and the YOLO models. Furthermore, the performance of the YOLOv3, YOLOv3-tiny, YOLOv4, and YOLOv5s models is evaluated during training using our Dell PowerEdge R740 server. We have successfully demonstrated that YOLOv5s is a good benchmark for object detection, classification, and traffic congestion analysis using the Jetson Xavier GPU board. YOLOv5s achieved an average precision of 95.9% among all YOLO variants, and the highest success rate achieved was 98.89%.
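FPS figures of the kind reported above are usually obtained with a warm-up phase followed by a synchronised timing loop. The snippet below is a generic sketch of such a measurement, not the authors' benchmarking code; the model, input size, and iteration counts are placeholders.

```python
import time
import torch

def measure_fps(model, input_shape=(1, 3, 640, 640), n_iters=100, device="cpu"):
    """Rough throughput measurement: warm-up, then a synchronised timing loop."""
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(10):                       # warm-up iterations
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - start)

# Stand-in model; any detector that is a torch.nn.Module could be timed the same way.
dev = "cuda" if torch.cuda.is_available() else "cpu"
print(f"{measure_fps(torch.nn.Conv2d(3, 16, 3, padding=1), device=dev):.1f} FPS")
```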
18

Dewi, Christine, and Henoch Juli Christanto. "Combination of Deep Cross-Stage Partial Network and Spatial Pyramid Pooling for Automatic Hand Detection." Big Data and Cognitive Computing 6, no. 3 (August 9, 2022): 85. http://dx.doi.org/10.3390/bdcc6030085.

Анотація (Abstract):
The human hand is involved in many computer vision tasks, such as hand posture estimation, hand movement identification, human activity analysis, and other similar tasks, in which hand detection is an important preprocessing step. It is still difficult to correctly recognize some hands in a cluttered environment because of the complex display variations of agile human hands and the fact that they have a wide range of motion. In this study, we provide a brief assessment of CNN-based object identification algorithms, specifically Densenet Yolo V2, Densenet Yolo V2 CSP, Densenet Yolo V2 CSP SPP, Resnet 50 Yolo V2, Resnet 50 CSP, Resnet 50 CSP SPP, Yolo V4 SPP, Yolo V4 CSP SPP, and Yolo V5. The advantages of CSP and SPP are thoroughly examined and described in detail in each algorithm. We show in our experiments that Yolo V4 CSP SPP provides the best level of precision available. The experimental results show that the CSP and SPP layers help improve the accuracy of CNN model testing performance. Our model leverages the advantages of CSP and SPP. Our proposed method Yolo V4 CSP SPP outperformed previous research results by an average of 8.88%, with an improvement from 87.6% to 96.48%.
19

Guo, Hao, and Jiahua Yang. "An improved object detection algorithm based on YOLO*." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012070. http://dx.doi.org/10.1088/1742-6596/2216/1/012070.

Анотація (Abstract):
Accuracy and speed have always been the measures of the performance of object detection algorithms. Because of their complex structures, current algorithms gain a certain amount of accuracy at the cost of detection speed. In response to this problem, this paper uses the RepVGG network to improve the original YOLOv3 structure: a diversified branch structure enhances the network's feature extraction ability during training, and the trained model is transformed into an equivalent VGG-like topology during inference. In addition, we use ASFF to deal with the problem of mutually restricted scales. Experiments show that, compared with the original algorithm, the improved algorithm increases mAP by 0.51 on the VOC data set while the speed increases by 10%.
20

Nori, Ruba R., Rabah N. Farhan, and Safaa Hussein Abed. "Indoor and Outdoor Fire Localization Using YOLO Algorithm." Journal of Physics: Conference Series 2114, no. 1 (December 1, 2021): 012067. http://dx.doi.org/10.1088/1742-6596/2114/1/012067.

Анотація (Abstract):
A novel algorithm for fire detection is introduced: a CNN-based system for fire localization in real-time applications. Deep learning algorithms show excellent results, reaching very high accuracy on the fire image dataset. YOLO is a superior deep learning algorithm that is capable of detecting and localizing fires in real time. The lack of image data forced us to limit the system to a binary classification test, and the proposed model was tested on a dataset gathered from the internet. In this article, we build an automated alert system integrating multiple sensors and state-of-the-art deep learning algorithms, which produces a limited number of false positives and provides our prototype robot with reasonable accuracy on real-time data, so that fire events can be tracked and recorded as soon as possible.
21

Liu, Ting-Na, Zhong-Jie Zhu, Yong-Qiang Bai, Guang-Long Liao, and Yin-Xue Chen. "YOLO-Based Efficient Vehicle Object Detection." 電腦學刊 33, no. 4 (August 2022): 069–79. http://dx.doi.org/10.53106/199115992022083304006.

Анотація (Abstract):
Vehicle detection is one of the key techniques of intelligent transportation systems, with high requirements for accuracy and real-time performance. However, the existing algorithms suffer from the contradiction between detection speed and detection accuracy, and from weak generalization ability. To address these issues, an improved vehicle detection algorithm is presented based on You Only Look Once (YOLO). On the one hand, an efficient feature extraction network is restructured to speed up the feature transfer of the object and re-use the feature information extracted from the input image. On the other hand, considering that fewer pixels are occupied by smaller objects, a novel feature fusion network is designed to fuse the semantic information and representation information extracted by feature extraction layers of different depths, and ultimately improve the detection accuracy of small and medium objects. Experimental results indicate that the mean Average Precision (mAP) of the proposed algorithm is up to 93.87%, which is 11.51%, 18.56% and 20.42% higher than that of YOLOv3, CornerNet, and Faster R-CNN, respectively. Furthermore, at 49.45 frames per second, its detection speed basically meets the real-time requirements of practical applications.
22

Mou, Chao, Tengfei Liu, Chengcheng Zhu, and Xiaohui Cui. "WAID: A Large-Scale Dataset for Wildlife Detection with Drones." Applied Sciences 13, no. 18 (September 17, 2023): 10397. http://dx.doi.org/10.3390/app131810397.

Анотація (Abstract):
Drones are widely used for wildlife monitoring. Deep learning algorithms are key to the success of monitoring wildlife with drones, although they face the problem of detecting small targets. To solve this problem, we have introduced the SE-YOLO model, which incorporates a channel self-attention mechanism into the advanced real-time object detection algorithm YOLOv7, enabling the model to perform effectively on small targets. However, there is another barrier; the lack of publicly available UAV wildlife aerial datasets hampers research on UAV wildlife monitoring algorithms. To fill this gap, we present a large-scale, multi-class, high-quality dataset called WAID (Wildlife Aerial Images from Drone), which contains 14,375 UAV aerial images from different environmental conditions, covering six wildlife species and multiple habitat types. We conducted a statistical analysis experiment, an algorithm detection comparison experiment, and a dataset generalization experiment. The statistical analysis experiment demonstrated the dataset characteristics both quantitatively and intuitively. The comparison and generalization experiments compared different types of advanced algorithms as well as the SE-YOLO method from the perspective of the practical application of UAVs for wildlife monitoring. The experimental results show that WAID is suitable for the study of wildlife monitoring algorithms for UAVs, and SE-YOLO is the most effective in this scenario, with a mAP of up to 0.983. This study brings new methods, data, and inspiration to the field of wildlife monitoring by UAVs.
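The "channel self-attention mechanism" that SE-YOLO adds to YOLOv7 is, in its generic form, a Squeeze-and-Excitation block. Below is a minimal PyTorch sketch of such a block; where exactly it is inserted inside YOLOv7 is specific to the paper and not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention.

    Global average pooling "squeezes" each channel to a scalar, a small MLP
    produces per-channel weights, and the input is rescaled channel-wise.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: (b, c)
        return x * w.view(b, c, 1, 1)          # excite: rescale channels

print(SEBlock(64)(torch.randn(2, 64, 20, 20)).shape)   # torch.Size([2, 64, 20, 20])
```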
23

Liu, Yu, Tong Zhou, Jingye Xu, Yu Hong, Qianhui Pu, and Xuguang Wen. "Rotating Target Detection Method of Concrete Bridge Crack Based on YOLO v5." Applied Sciences 13, no. 20 (October 10, 2023): 11118. http://dx.doi.org/10.3390/app132011118.

Анотація (Abstract):
Crack detection is a critical and essential aspect of concrete bridge maintenance and management. Manual inspection often falls short in meeting the demands of large-scale crack detection in terms of cost, efficiency, accuracy, and data management. To address the challenges faced by existing generic object detection algorithms in achieving high accuracy or efficiency when detecting cracks with large aspect ratios, overlapping structures, and clear directional characteristics, this paper presents improvements to the YOLO v5 model. These enhancements include the introduction of angle regression variables, the definition of a new loss function, the integration of PSA-Neck and ECA-Layer attention mechanism modules into the network architecture, consideration of the contribution of each node’s features to the network, and the addition of skip connections within the same feature scale. This results in a novel crack image rotation object detection algorithm named “R-YOLO v5”. After training the R-YOLO v5 model for 300 iterations on a dataset comprising 1628 crack images, the model achieved an mAP@0.5 of 94.03% on the test set, which is significantly higher than other rotation object detection algorithms such as SASM, S2A Net, Re Det, as well as the horizontal-box YOLO v5 model. Furthermore, R-YOLO v5 demonstrates clear advantages in terms of model size (4.17 MB) and detection speed (0.01 s per image). These results demonstrate that the designed model effectively detects cracks in concrete bridges and exhibits robustness, minimal memory usage, making it suitable for real-time crack detection on small devices like smartphones or drones. Additionally, the rotation object detection improvement strategy discussed in this study holds potential applicability for enhancing other object detection algorithms.
24

Xu, Danqing, and Yiquan Wu. "MRFF-YOLO: A Multi-Receptive Fields Fusion Network for Remote Sensing Target Detection." Remote Sensing 12, no. 19 (September 23, 2020): 3118. http://dx.doi.org/10.3390/rs12193118.

Анотація (Abstract):
High-altitude remote sensing target detection has problems related to its low precision and low detection rate. In order to enhance the performance of detecting remote sensing targets, a new YOLO (You Only Look Once)-V3-based algorithm was proposed. In our improved YOLO-V3, we introduced the concept of multi-receptive fields to enhance the performance of feature extraction. Therefore, the proposed model was termed Multi-Receptive Fields Fusion YOLO (MRFF-YOLO). In addition, to address the flaws of YOLO-V3 in detecting small targets, we increased the detection layers from three to four. Moreover, in order to avoid gradient fading, the structure of improved DenseNet was chosen in the detection layers. We compared our approach (MRFF-YOLO) with YOLO-V3 and other state-of-the-art target detection algorithms on an Remote Sensing Object Detection (RSOD) dataset and a dataset of Object Detection in Aerial Images (UCS-AOD). With a series of improvements, the mAP (mean average precision) of MRFF-YOLO increased from 77.10% to 88.33% in the RSOD dataset and increased from 75.67% to 90.76% in the UCS-AOD dataset. The leaking detection rates are also greatly reduced, especially for small targets. The experimental results showed that our approach achieved better performance than traditional YOLO-V3 and other state-of-the-art models for remote sensing target detection.
25

Zhao, Jing. "A Real-Time Detection Algorithm of Flame Target Image." Computational Intelligence and Neuroscience 2022 (October 11, 2022): 1–8. http://dx.doi.org/10.1155/2022/5277805.

Анотація (Abstract):
In many research tasks, the speed and accuracy of flame detection using supply chain have always been a challenging task for many researchers, especially for flame detection of small objects in supply chain. In view of this, we propose a new real-time target detection algorithm. The first step is to enhance the flame recognition of small objects by strengthening the feature extraction ability of multi-scale fusion. The second step is to introduce the K-means clustering method into the prior bounding box of the algorithm to improve the accuracy of the algorithm. The third step is to use the flame characteristics in YOLO+ algorithm to reject the wrong detection results and increase the detection effect of the algorithm. Compared with the YOLO series algorithms, the accuracy of YOLO+ algorithm is 99.5%, the omission rate is 1.3%, and the detection speed is 72 frames/SEC. It has good performance and is suitable for flame detection tasks.
26

Kadhum, Aseil Nahum, and Aseel Nahum Kadhum. "Comparison Between the Yolov4 and Yolov5 Models in Detecting Faces while Wearing a Mask." International Academic Journal of Science and Engineering 11, no. 1 (January 6, 2024): 01–08. http://dx.doi.org/10.9756/iajse/v11i1/iajse1101.

Анотація (Abstract):
Object detection based on deep learning has shown good results ever since the Coronavirus (Covid-19) started sweeping the entire world, affecting and killing many people. One of the easiest and simplest ways to protect oneself from this virus is to wear a mask. Face recognition has been difficult, but with the development of deep learning it has become possible to detect objects, especially in public places, and in particular to diagnose accurately whether a person is wearing a mask, which is important for protecting people from Covid-19. In order to detect whether a person is wearing a mask or not, this research proposes face mask detection models based on two deep learning algorithms, YOLOv5 and YOLOv4, which belong to the YOLO family of models characterized by accuracy and speed. The two deep learning algorithms are compared to find the differences between them in performance and accuracy. The CNN algorithm is also important for object detection and has achieved satisfactory results, but we use the YOLO model: YOLO is an advanced algorithm with fast, real-time detection, and when the results are reviewed against previous variants of YOLO and CNN, the YOLO model proves to be the best model for face detection. Face detection is of great importance in various fields, especially in public places, and requires accurate and reliable detection; judging from images whether a mask is worn is not an easy task. Therefore, training and evaluation of the YOLOv5 and YOLOv4 algorithms on the dataset available on Google Colab are conducted in this paper.
27

Abhyankar, Vaishnavi, and Rashmi Kene. "Generation of RBC, WBC Subtype, and Platelet Count Report Using YOLO." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (April 30, 2023): 1623–30. http://dx.doi.org/10.22214/ijraset.2023.50459.

Анотація (Abstract):
Abstract: Artificial intelligence introduced a way to combine machines’ computing ability with human intelligence. Machine learning is a sub-branch of AI that consists of different algorithms to implement concepts of AI in practical terms. But when a machine has to process numerous amounts of data, deep learning algorithms come into the picture. It is observed that when a computing system has to deal with image data then neural network algorithms give efficient ways to process them and draw unique patterns from them. Object detection is the task of identifying required objects from an image. This type of technology plays a crucial role in medical image processing. Some algorithms can efficiently identify and classify objects from an image. It is observed that Fast R-CNN and Faster R-CNN, mask R-CNN, have given pretty good accuracy while performing such tasks. But when time is a concern for a system, such methods put an obstacle of training time and architectural complexity. The You Only Look Once (YOLO) object detection method has introduced a new way of processing images in a single pass. This algorithm is famous because of its speed and correctness. There are numerous models of YOLO developed now which include YOLOv1 to YOLOV8. This paper gives the performance of the latest version of YOLO on the blood cell dataset.
28

Zhang, Meiyan, Dongyang Zhao, Cailiang Sheng, Ziqiang Liu, and Wenyu Cai. "Long-Strip Target Detection and Tracking with Autonomous Surface Vehicle." Journal of Marine Science and Engineering 11, no. 1 (January 5, 2023): 106. http://dx.doi.org/10.3390/jmse11010106.

Анотація (Abstract):
As we all know, target detection and tracking are of great significance for marine exploration and protection. In this paper, we propose one Convolutional-Neural-Network-based target detection method named YOLO-Softer NMS for long-strip target detection on the water, which combines You Only Look Once (YOLO) and Softer NMS algorithms to improve detection accuracy. The traditional YOLO network structure is improved, the prediction scale is increased from three to four, and a softer NMS strategy is used to select the original output of the original YOLO method. The performance improvement is compared to the Faster-RCNN algorithm and traditional YOLO method in both mAP and speed, and the proposed YOLO-Softer NMS's mAP reaches 97.09% while still maintaining the same speed as YOLOv3. In addition, the camera imaging model is used to obtain accurate target coordinate information for target tracking. Finally, using the dicyclic loop PID control diagram, the Autonomous Surface Vehicle is controlled to approach the long-strip target with near-optimal path design. The actual test results verify that our long-strip target detection and tracking method can achieve gratifying long-strip target detection and tracking results.
29

Chen, Wen, Chengwei Ju, Yanzhou Li, Shanshan Hu, and Xi Qiao. "Sugarcane Stem Node Recognition in Field by Deep Learning Combining Data Expansion." Applied Sciences 11, no. 18 (September 17, 2021): 8663. http://dx.doi.org/10.3390/app11188663.

Анотація (Abstract):
The rapid and accurate identification of sugarcane stem nodes in the complex natural environment is essential for the development of intelligent sugarcane harvesters. However, traditional sugarcane stem node recognition has been mainly based on image processing and recognition technology, where the recognition accuracy is low in a complex natural environment. In this paper, an object detection algorithm based on deep learning was proposed for sugarcane stem node recognition in a complex natural environment, and the robustness and generalisation ability of the algorithm were improved by the dataset expansion method to simulate different illumination conditions. The impact of the data expansion and lighting condition in different time periods on the results of sugarcane stem nodes detection was discussed, and the superiority of YOLO v4, which performed best in the experiment, was verified by comparing it with four different deep learning algorithms, namely Faster R-CNN, SSD300, RetinaNet and YOLO v3. The comparison results showed that the AP (average precision) of the sugarcane stem nodes detected by YOLO v4 was 95.17%, which was higher than that of the other four algorithms (78.87%, 88.98%, 90.88% and 92.69%, respectively). Meanwhile, the detection speed of the YOLO v4 method was 69 f/s and exceeded the requirement of a real-time detection speed of 30 f/s. The research shows that it is a feasible method for real-time detection of sugarcane stem nodes in a complex natural environment. This research provides visual technical support for the development of intelligent sugarcane harvesters.
30

Park, Jungsu, Jiwon Baek, Jongrack Kim, Kwangtae You, and Keugtae Kim. "Deep Learning-Based Algal Detection Model Development Considering Field Application." Water 14, no. 8 (April 14, 2022): 1275. http://dx.doi.org/10.3390/w14081275.

Анотація (Abstract):
Algal blooms have various effects on drinking water supply systems; thus, proper monitoring is essential. Traditional visual identification using a microscope is a time-consuming method and requires extensive labor. Recently, advanced machine learning algorithms have been increasingly applied for the development of object detection models. The You-Only-Look-Once (YOLO) model is a novel machine learning algorithm used for object detection; it has been continuously improved in newer versions, and a tiny version of each standard model presented. The tiny versions applied a less complicated architecture using a smaller number of convolutional layers to enable faster object detection than the standard version. This study compared the applicability of the YOLO models for algal image detection from a practical aspect in terms of classification accuracy and inference time. Therefore, automated algal cell detection models were developed using YOLO v3 and YOLO v4, in which a tiny version of each model was also applied. The cell images of 30 algal genera were used for training and testing the models. The model performances were compared using the mean average precision (mAP). The mAP values of the four models were 40.9, 88.8, 84.4, and 89.8 for YOLO v3, YOLO v3-tiny, YOLO v4, and YOLO v4-tiny, respectively, demonstrating that YOLO v4 is more precise than YOLO v3. The tiny version models presented noticeably higher model accuracy than the standard models, allowing up to ten times faster object detection time. These results demonstrate the practical advantage of tiny version models for the application of object detection with a limited number of object classes.
31

Wu, Yonglin, Dongxu Gao, Yinfeng Fang, Xue Xu, Hongwei Gao, and Zhaojie Ju. "SDE-YOLO: A Novel Method for Blood Cell Detection." Biomimetics 8, no. 5 (September 1, 2023): 404. http://dx.doi.org/10.3390/biomimetics8050404.

Анотація (Abstract):
This paper proposes an improved target detection algorithm, SDE-YOLO, based on the YOLOv5s framework, to address the low detection accuracy, misdetection, and leakage in blood cell detection caused by existing single-stage and two-stage detection algorithms. Initially, the Swin Transformer is integrated into the back-end of the backbone to extract the features in a better way. Then, the 32 × 32 network layer in the path-aggregation network (PANet) is removed to decrease the number of parameters in the network while increasing its accuracy in detecting small targets. Moreover, PANet substitutes traditional convolution with depth-separable convolution to accurately recognize small targets while maintaining a fast speed. Finally, replacing the complete intersection over union (CIOU) loss function with the Euclidean intersection over union (EIOU) loss function can help address the imbalance of positive and negative samples and speed up the convergence rate. The SDE-YOLO algorithm achieves a mAP of 99.5%, 95.3%, and 93.3% on the BCCD blood cell dataset for white blood cells, red blood cells, and platelets, respectively, which is an improvement over other single-stage and two-stage algorithms such as SSD, YOLOv4, and YOLOv5s. The experiment yields excellent results, and the algorithm detects blood cells very well. The SDE-YOLO algorithm also has advantages in accuracy and real-time blood cell detection performance compared to the YOLOv7 and YOLOv8 technologies.
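The EIOU loss that SDE-YOLO substitutes for CIOU adds separate width and height penalty terms to the IoU and centre-distance terms. The sketch below follows the commonly cited EIoU definition and is not necessarily the authors' exact implementation; boxes are assumed to be given as (x1, y1, x2, y2) tensors.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """EIoU loss for axis-aligned boxes of shape (N, 4) in (x1, y1, x2, y2) form."""
    # Intersection and union
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (normalises the penalty terms)
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    cw, ch = ex2 - ex1, ey2 - ey1

    # Centre distance plus separate width and height differences
    pcx, pcy = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    tcx, tcy = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    dw2 = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    dh2 = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2

    return 1 - iou + rho2 / (cw**2 + ch**2 + eps) + dw2 / (cw**2 + eps) + dh2 / (ch**2 + eps)

p = torch.tensor([[10., 10., 50., 60.]]); t = torch.tensor([[12., 8., 48., 62.]])
print(eiou_loss(p, t))
```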
32

S, Mohanapriya, Mohana Saranya S, Kumaravel T, and Sumithra P. "Image Detection and Segmentation using YOLO v5 for surveillance." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 160–65. http://dx.doi.org/10.54254/2755-2721/8/20230109.

Анотація (Abstract):
Segmentation is an advancement of object detection: in object detection, bounding boxes are placed around objects, whereas segmentation classifies every pixel in the given image. In deep learning, the YOLOv5 algorithm can be used to perform segmentation on the given data. Using the YOLOv5 algorithm, objects are detected and classified by surrounding them with bounding boxes. Compared to the existing algorithms for segmentation, the YOLOv5 algorithm has improved time complexity and accuracy. In this paper the YOLOv5 algorithm is compared with an existing CNN algorithm.
33

Chen, Wei, Jingfeng Zhang, Biyu Guo, Qingyu Wei, and Zhiyu Zhu. "An Apple Detection Method Based on Des-YOLO v4 Algorithm for Harvesting Robots in Complex Environment." Mathematical Problems in Engineering 2021 (October 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/7351470.

Анотація (Abstract):
Real-time detection of apples in natural environment is a necessary condition for robots to pick apples automatically, and it is also a key technique for orchard yield prediction and fine management. To make the harvesting robots detect apples quickly and accurately in complex environment, a Des-YOLO v4 algorithm and a detection method of apples are proposed. Compared with the current mainstream detection algorithms, YOLO v4 has better detection performance. However, the complex network structure of YOLO v4 will reduce the picking efficiency of the robot. Therefore, a Des-YOLO structure is proposed, which reduces network parameters and improves the detection speed of the algorithm. In the training phase, the imbalance of positive and negative samples will cause false detection of apples. To solve the above problem, a class loss function based on AP-Loss (Average Precision Loss) is proposed to improve the accuracy of apple recognition. Traditional YOLO algorithm uses NMS (Nonmaximum Suppression) method to filter the prediction boxes, but NMS cannot detect the adjacent apples when they overlap each other. Therefore, Soft-NMS is used instead of NMS to solve the problem of missing detection, so as to improve the generalization of the algorithm. The proposed algorithm is tested on the self-made apple image data set. The results show that Des-YOLO v4 network has ideal features with a mAP (mean Average Precision) of apple detection of 97.13%, a recall rate of 90%, and a detection speed of 51 f/s. Compared with traditional network models such as YOLO v4 and Faster R-CNN, the Des-YOLO v4 can meet the accuracy and speed requirements of apple detection at the same time. Finally, the self-designed apple-harvesting robot is used to carry out the harvesting experiment. The experiment shows that the harvesting time is 8.7 seconds and the successful harvesting rate of the robot is 92.9%. Therefore, the proposed apple detection method has the advantages of higher recognition accuracy and faster recognition speed. It can provide new solutions for apple-harvesting robots and new ideas for smart agriculture.
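The Soft-NMS step that Des-YOLO v4 uses instead of hard NMS keeps overlapping candidates but decays their scores, which is what recovers adjacent, overlapping apples. Below is a small NumPy sketch of generic Gaussian Soft-NMS, not the paper's code; the boxes and scores are toy values.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores by exp(-IoU^2 / sigma)
    instead of discarding boxes that overlap the current best detection.
    Boxes are (x1, y1, x2, y2)."""
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            xx1 = max(boxes[best, 0], boxes[i, 0]); yy1 = max(boxes[best, 1], boxes[i, 1])
            xx2 = min(boxes[best, 2], boxes[i, 2]); yy2 = min(boxes[best, 3], boxes[i, 3])
            inter = max(0.0, xx2 - xx1) * max(0.0, yy2 - yy1)
            area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            iou = inter / (area_b + area_i - inter + 1e-9)
            scores[i] *= np.exp(-(iou ** 2) / sigma)     # Gaussian score decay
    return keep

b = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
s = np.array([0.9, 0.8, 0.7])
print(soft_nms(b, s))     # the overlapping box is down-weighted, not dropped
```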
34

Xu, Wenliang, Wei Wang, Jianhua Ren, Chaozhi Cai, and Yingfang Xue. "A Novel Object Detection Method of Pointer Meter Based on Improved YOLOv4-Tiny." Applied Sciences 13, no. 6 (March 16, 2023): 3822. http://dx.doi.org/10.3390/app13063822.

Анотація (Abstract):
Pointer meters have been widely used in industrial field due to their strong stability; it is an important issue to be able to accurately read the meter. At present, patrol robots with computer vision function are often used to detect and read meters in some situations that are not suitable for manual reading of the meter. However, existing object detection algorithms are often misread and miss detection due to factors such as lighting, shooting angles, and complex background environments. To address these problems, this paper designs a YOLOv4-Tiny-based pointer meter detection model named pointer meter detection-YOLO (PMD-YOLO) for the goal of practical applications. Firstly, to reduce weight of the model and ensure the accuracy of object detection, a feature extraction network named GhostNet with a channel attention mechanism is implemented in YOLOv4-Tiny. Then, to enhance feature extraction ability of small- and medium-sized targets, an improved receptive field block (RFB) module is added after the backbone network, and a convolutional block attention module (CBAM) is introduced into the feature pyramid network (FPN). Finally, the FPN is optimized to improve the feature utilization, which further improves the detection accuracy. In order to verify the effectiveness and superiority of the PMD-YOLO proposed in this paper, the PMD-YOLO is used for experimental research on the constructed dataset of the pointer meter, and the target detection algorithms such as Faster region convolutional neural network (RCNN), YOLOv4, YOLOv4-Tiny, and YOLOv5-s are compared under the same conditions. The experimental results show that the mean average precision of the PMD-YOLO is 97.82%, which is significantly higher than the above algorithms. The weight of the PMD-YOLO is 9.38 M, which is significantly lower than the above algorithms. Therefore, the PMD-YOLO not only has high detection accuracy, but can also reduce the weight of the model and can meet the requirements of practical applications.
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Zhang, Ying, Xuyang Hou, and Xuhang Hou. "Combining Self-Supervised Learning and Yolo v4 Network for Construction Vehicle Detection." Mobile Information Systems 2022 (September 20, 2022): 1–10. http://dx.doi.org/10.1155/2022/9056415.

Повний текст джерела
Анотація:
At present, target detection has many application fields, but intelligent traffic target detection on construction sites is very difficult because of the complex environment and the many kinds of engineering vehicles. A method based on self-supervised learning combined with the Yolo (you only look once) v4 network, defined as "SSL-Yolo v4" (self-supervised learning-Yolo v4), is proposed for the detection of construction vehicles. Based on the combination of a self-supervised learning network and the Yolo v4 network, a self-supervised learning method based on context rotation is introduced. This method addresses the large amount of manual data annotation needed to train existing deep learning algorithms. Furthermore, the trained self-supervised learning network is combined with the Yolo v4 network to improve the prediction ability, robustness, and detection accuracy of the model. The performance of the proposed model is optimized by performing five-fold cross validation on a self-built dataset, and the effectiveness of the algorithm is verified. The simulation results show that the SSL-Yolo v4 method achieves an average detection accuracy of 92.91%, with the detection speed improved by 4.83% (7–8 fps) and the recall rate improved by 8–9%. The results show that the method has higher precision and speed and improves target prediction ability and the robustness of engineering vehicle detection.
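The context-rotation pretext task described above can be sketched as follows: unlabeled images are rotated by multiples of 90 degrees, the rotation index serves as a free label, and a classifier head on top of the backbone is trained to predict it, so the backbone learns useful features without manual annotation. Below is a minimal, hypothetical PyTorch sketch, not the SSL-Yolo v4 code.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Create a self-labelled batch for the rotation pretext task.

    images: (B, C, H, W) unlabeled tensor. Each image is rotated by
    0/90/180/270 degrees; the rotation index becomes the free label.
    """
    rotated, labels = [], []
    for k in range(4):                                   # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

class RotationHead(nn.Module):
    """4-way rotation classifier on top of any backbone (e.g. the future detector backbone)."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone                         # assumed to output (B, feat_dim)
        self.fc = nn.Linear(feat_dim, 4)

    def forward(self, x):
        return self.fc(self.backbone(x))

# Pretraining step (sketch): the learned backbone weights would later initialize
# the detector, replacing manual annotation for this stage.
# x_rot, y_rot = make_rotation_batch(unlabeled_images)
# loss = nn.CrossEntropyLoss()(RotationHead(backbone, 1024)(x_rot), y_rot)
```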
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Li, Chao, Rui Xu, Yong Lv, Yonghui Zhao, and Weipeng Jing. "Edge Real-Time Object Detection and DPU-Based Hardware Implementation for Optical Remote Sensing Images." Remote Sensing 15, no. 16 (August 10, 2023): 3975. http://dx.doi.org/10.3390/rs15163975.

Повний текст джерела
Анотація:
The accuracy of current deep learning algorithms has certainly increased. However, deploying deep learning networks on edge devices with limited resources is challenging due to their inherent depth and high parameter count. Here, we propose an improved YOLO model based on an attention mechanism and receptive field modules (RFA-YOLO), applying the MobileNeXt network as the backbone to reduce parameters and complexity, and adopting the Receptive Field Block (RFB) and Efficient Channel Attention (ECA) modules to improve the detection accuracy of multi-scale and small objects. Meanwhile, an FPGA-based model deployment solution is proposed to implement parallel acceleration and low-power deployment of the detection algorithm, achieving real-time object detection for optical remote sensing images. We implement the proposed object detection algorithm with a DPU and Vitis AI-based FPGA deployment to meet low power consumption and real-time performance requirements. Experimental results on the DIOR dataset demonstrate the effectiveness and superiority of our RFA-YOLO model. Moreover, to evaluate the performance of the proposed hardware implementation, it was implemented on a Xilinx ZCU104 board. Results of the hardware and software experiments show that our DPU-based hardware implementation is more power-efficient than central processing units (CPUs) and graphics processing units (GPUs) and has the potential to be applied to onboard processing systems with limited resources and power consumption.
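As a point of reference for the Efficient Channel Attention (ECA) module named above, a minimal PyTorch sketch is given below: channel descriptors from global average pooling are mixed by a small 1-D convolution and used to reweight the feature map, avoiding the fully connected layers of heavier attention blocks. This is an illustrative sketch, not the RFA-YOLO source.

```python
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a 1-D convolution over the pooled channel
    descriptor, so the extra parameter cost is only k_size weights."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                     # x: (B, C, H, W)
        y = self.pool(x)                      # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)     # (B, 1, C)
        y = self.conv(y)                      # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)  # back to (B, C, 1, 1)
        return x * y                          # reweight channels
```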
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Jang, Goeun, Wooseong Yeo, Meeyoung Park, and Yong-Gyun Park. "RT-CLAD: Artificial Intelligence-Based Real-Time Chironomid Larva Detection in Drinking Water Treatment Plants." Sensors 24, no. 1 (December 28, 2023): 177. http://dx.doi.org/10.3390/s24010177.

Повний текст джерела
Анотація:
The presence of chironomid larvae in tap water has sparked public concern regarding the water supply system in South Korea. Despite ongoing efforts to establish a safe water supply system, entirely preventing larval occurrences remains a significant challenge. Therefore, we developed a real-time chironomid larva detection system (RT-CLAD) based on deep learning technology, which was implemented in drinking water treatment plants. The acquisition of larval images was facilitated by a multi-spectral camera with a wide spectral range, enabling the capture of unique wavelet bands associated with larvae. Three state-of-the-art deep learning algorithms, namely the convolutional neural network (CNN), you only look once (YOLO), and residual neural network (ResNet), renowned for their exceptional performance in object detection tasks, were employed. Following a comparative analysis of these algorithms, the most accurate and rapid model was selected for RT-CLAD. To achieve the efficient and accurate detection of larvae, the original images were transformed into a specific wavelet format, followed by preprocessing to minimize data size. Consequently, the CNN, YOLO, and ResNet algorithms successfully detected larvae with 100% accuracy. In comparison to YOLO and ResNet, the CNN algorithm demonstrated greater efficiency because of its faster processing and simpler architecture. We anticipate that our RT-CLAD will address larva detection challenges in water treatment plants, thereby enhancing water supply security.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Rahma, Lusiana, Hadi Syaputra, A. Haidar Mirza, and Susan Dian Purnamasari. "Objek Deteksi Makanan Khas Palembang Menggunakan Algoritma YOLO (You Only Look Once)." Jurnal Nasional Ilmu Komputer 2, no. 3 (November 20, 2021): 213–32. http://dx.doi.org/10.47747/jurnalnik.v2i3.534.

Повний текст джерела
Анотація:
Deep learning is a part of machine learning that uses artificial neural networks (ANNs). Learning in deep learning can be supervised, semi-supervised, or unsupervised. CNN and RNN (supervised) and RBM and Autoencoder (unsupervised) are deep learning algorithms; all of them have uses in their respective fields, depending on what we want to use them for. One of the most frequent use cases for deep learning is object detection and classification. The Convolutional Neural Network (CNN) algorithm is the most widely used algorithm for object detection, partly because it is supported by Google's TensorFlow framework. However, there is an object detection algorithm with a higher level of accuracy and processing speed, namely You Only Look Once (YOLO), which can run on two frameworks (Darknet and Darkflow) and is GPU-accelerated. That is why the authors chose to perform object detection with the You Only Look Once (YOLO) method. The research data for "Palembang Food Detection Object Using the YOLO (You Only Look Once) Algorithm" consist of sample food photos from Google Images. There are 31 types of Palembang specialties, each with approximately 50 to 70 images, giving a total of 1955 JPEG images across the 31 food types for training data and 31 JPEG images of Palembang foods for test data.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Adibhatla, Venkat Anil, Huan-Chuang Chih, Chi-Chang Hsu, Joseph Cheng, Maysam F. Abbod, and Jiann-Shing Shieh. "Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks." Electronics 9, no. 9 (September 22, 2020): 1547. http://dx.doi.org/10.3390/electronics9091547.

Повний текст джерела
Анотація:
In this study, a deep learning algorithm based on the you-only-look-once (YOLO) approach is proposed for the quality inspection of printed circuit boards (PCBs). The high accuracy and efficiency of deep learning algorithms have resulted in their increased adoption in every field. Similarly, accurate detection of defects in PCBs using deep learning algorithms, such as convolutional neural networks (CNNs), has garnered considerable attention. In the proposed method, highly skilled quality inspection engineers first use an interface to record and label defective PCBs. The data are then used to train a YOLO/CNN model to detect defects in PCBs. In this study, 11,000 images and a network of 24 convolutional layers and 2 fully connected layers were used. The proposed model achieved a defect detection accuracy of 98.79% on PCBs with a batch size of 32.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Chen, Henghuai, and Jiansheng Guan. "Teacher–Student Behavior Recognition in Classroom Teaching Based on Improved YOLO-v4 and Internet of Things Technology." Electronics 11, no. 23 (December 2, 2022): 3998. http://dx.doi.org/10.3390/electronics11233998.

Повний текст джерела
Анотація:
Based on classroom teaching scenarios, an improved YOLO-v4 behavior detection algorithm is proposed to recognize the behaviors of teachers and students. With the development of CNN (Convolutional Neural Network) and IoT (Internet of Things) technologies, target detection algorithms based on deep learning have become mainstream, and typical algorithms such as SSD (Single Shot Detection) and the YOLO series have emerged. Based on the videos or images collected in the perception layer of the IoT paradigm, deep learning models are used in the processing layer to implement various intelligent applications. However, none of these deep learning-based algorithms are perfect, and there is room for improvement in terms of detection accuracy, computing speed, and multi-target detection capabilities. In this paper, by introducing the concept of the cross-stage local network, embedded connection (EC) components are constructed and embedded at the end of the YOLO-v4 network to obtain an improved YOLO-v4 network. To address the difficulty of quickly and effectively identifying students' actions when they are occluded, a Repulsion loss function is added to the original YOLO-v4 loss function. The newly added loss function consists of two parts: RepGT loss and RepBox loss. The RepGT loss is used to calculate the loss values between the target prediction box and adjacent ground truth boxes to reduce false positive detection results; the RepBox loss is used to calculate the loss value between the target prediction box and other adjacent target prediction boxes to reduce false negative detection results. Training and testing are carried out on classroom behavior datasets of teachers and students, respectively. The experimental results show that the average precision of identifying various classroom behaviors of different targets exceeds 90%, which verifies the effectiveness of the proposed method. The model performs well in sustainable classroom behavior recognition in an educational context; accurate recognition of classroom behaviors can help teachers and students better understand classroom learning and promote the development of the intelligent classroom model.
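To make the RepGT idea more concrete, the sketch below shows one common formulation: a smoothed -ln penalty on the intersection-over-ground-truth (IoG) ratio between a prediction and its most-overlapping non-target ground truth; the RepBox term would analogously penalize IoU between prediction boxes assigned to different targets. This is a generic illustration under assumed pre-matched box pairs, not the paper's implementation.

```python
import math
import torch

def smooth_ln(x, sigma=0.5):
    """Smoothed -ln(1 - x) used by Repulsion-style losses; linear above sigma
    so gradients stay bounded as the overlap approaches 1."""
    return torch.where(
        x <= sigma,
        -torch.log((1.0 - x).clamp(min=1e-6)),
        (x - sigma) / (1.0 - sigma) - math.log(1.0 - sigma),
    )

def iog(pred, gt):
    """Intersection over ground-truth area for pre-matched box pairs, both (N, 4)."""
    x1 = torch.max(pred[:, 0], gt[:, 0]); y1 = torch.max(pred[:, 1], gt[:, 1])
    x2 = torch.min(pred[:, 2], gt[:, 2]); y2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    gt_area = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / gt_area.clamp(min=1e-6)

def rep_gt_loss(pred_boxes, repulsion_gt):
    """RepGT term: push each prediction away from its most-overlapping
    *non-target* ground truth (pairs assumed matched beforehand)."""
    return smooth_ln(iog(pred_boxes, repulsion_gt)).mean()
```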
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Kong, Yueping, and Zhiyuan Shen. "Microorganism Detection in Activated Sludge Microscopic Images Using Improved YOLO." Applied Sciences 13, no. 22 (November 16, 2023): 12406. http://dx.doi.org/10.3390/app132212406.

Повний текст джерела
Анотація:
Wastewater has detrimental effects on the natural environment. The activated sludge method, a widely adopted approach for wastewater treatment, has proven highly effective. Within this process, microorganisms play a pivotal role, necessitating continuous monitoring of their quantity and diversity. Conventional methods, such as microscopic observation, are time-consuming. With the widespread integration of computer vision technologies into object detection, deep learning-based object detection algorithms, notably the You Only Look Once (YOLO) model, have garnered substantial interest for their speed and precision in detection tasks. In this research, we applied the YOLO model to detect microorganisms in microscopic images of activated sludge. Furthermore, addressing the irregular shapes of microorganisms, we developed an improved YOLO model by incorporating deformable convolutional networks and an attention mechanism to enhance its detection capabilities. We conducted training and testing using a custom dataset comprising five distinct objects. The performance evaluations in this study used metrics such as the mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5), with the improved YOLO model achieving a mAP@0.5 value of 93.7%, signifying a 4.3% improvement over the YOLOv5 model. Comparative analysis of the improved YOLO model and other object detection algorithms on the same dataset revealed a higher accuracy for the improved YOLO model. These results demonstrate the superior performance of the improved YOLO model in the task of detecting microorganisms in activated sludge, providing an effective auxiliary method for wastewater treatment monitoring.
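The deformable convolution mentioned above can be assembled from standard library pieces: a small convolution predicts per-location sampling offsets that let the kernel follow irregular object outlines. The sketch below uses torchvision's DeformConv2d and is only an illustration of the idea, not the improved YOLO model itself.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """3x3 deformable convolution: a companion conv predicts per-location
    sampling offsets so the kernel can adapt to irregular microorganism shapes."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Two offsets (dx, dy) per kernel element at every output location.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)
        return self.deform(x, offsets)

# Example: x = torch.randn(1, 64, 80, 80); DeformableBlock(64, 128)(x).shape -> (1, 128, 80, 80)
```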
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Wu, Xuemei, and Wenhua Li. "Algorithm Improvements based on the Attention Mechanism of yolo-v5." Frontiers in Science and Engineering 2, no. 6 (June 22, 2022): 8–12. http://dx.doi.org/10.54691/fse.v2i6.965.

Повний текст джерела
Анотація:
In order to further improve the accuracy of mouse species recognition, a yolo-v5 algorithm based on an attention mechanism is proposed. The algorithm consists of two parts: the first is the yolo-v5 backbone network, and the second adds attention mechanisms to the backbone network. The backbone is used to extract global features, and the attention mechanism then assigns different weights to the extracted features so that the features we need are obtained. Finally, a softmax classifier is used for classification. Experiments were validated on a homemade mouse dataset, achieving a classification accuracy of 85% and above. Compared with other algorithms, this algorithm has a good recognition effect and robustness, and mouse categories can be identified more accurately.
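A minimal sketch of the "weight the extracted features, then classify with softmax" pipeline described above might look as follows; the module and parameter names are hypothetical, and this is not the paper's network.

```python
import torch
import torch.nn as nn

class AttentionClassifierHead(nn.Module):
    """Weights backbone feature channels with a learned attention vector and
    feeds the pooled result to a softmax classifier (illustrative only)."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),                       # per-channel weights in (0, 1)
        )
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, feats):                   # feats: (B, C, H, W) from the backbone
        weighted = feats * self.attn(feats)     # emphasize informative channels
        pooled = weighted.mean(dim=(2, 3))      # global average pooling
        return torch.softmax(self.classifier(pooled), dim=1)
```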
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Pagare, Shreyas, and Rakesh Kumar. "Object Detection Algorithms Compression CNN, YOLO and SSD." International Journal of Computer Applications 185, no. 7 (May 18, 2023): 34–38. http://dx.doi.org/10.5120/ijca2023922726.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Niu, Chengwen, Yunsheng Song, and Xinyue Zhao. "SE-Lightweight YOLO: Higher Accuracy in YOLO Detection for Vehicle Inspection." Applied Sciences 13, no. 24 (December 7, 2023): 13052. http://dx.doi.org/10.3390/app132413052.

Повний текст джерела
Анотація:
Against the backdrop of ongoing urbanization, issues such as traffic congestion and accidents are assuming heightened prominence, necessitating urgent and practical interventions to enhance the efficiency and safety of transportation systems. A paramount challenge lies in realizing real-time vehicle monitoring, flow management, and traffic safety control within the transportation infrastructure to mitigate congestion, optimize road utilization, and curb traffic accidents. In response to this challenge, the present study leverages advanced computer vision technology for vehicle detection and tracking, employing deep learning algorithms. The resultant recognition outcomes provide the traffic management domain with actionable insights for optimizing traffic flow management and signal light control through real-time data analysis. The study demonstrates the applicability of the SE-Lightweight YOLO algorithm presented herein, showcasing a noteworthy 95.7% accuracy in vehicle recognition. This research can therefore serve as a reference for urban traffic management, laying the groundwork for a more efficient, secure, and streamlined transportation system. To solve the existing problems in vehicle type recognition, recognition and detection accuracy need to be improved and the issue of slow detection speed resolved, among others. In this paper, we made innovative changes based on the YOLOv7 framework: we added the SE attention mechanism to the backbone module, and the model achieved better results, with a 1.2% improvement compared with the original YOLOv7. Meanwhile, we replaced the SPPCSPC module with the SPPFCSPC module, which enhanced the feature extraction of the model. After that, we applied the SE-Lightweight YOLO to the field of traffic monitoring. This can assist transportation-related personnel in traffic monitoring and aid in creating big data on transportation. Therefore, this research has a good application prospect.
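For context, the SE (squeeze-and-excitation) attention referred to above is typically a small block like the following PyTorch sketch: global pooling squeezes each channel to a scalar, two small fully connected layers produce per-channel weights, and the feature map is rescaled. The reduction ratio and placement in the backbone are assumptions here, not details taken from the paper.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: squeeze spatial information into a channel
    descriptor, then excite (reweight) channels with two small FC layers."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                            # channel-wise recalibration
```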
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Li, Jinrui, Libin Chen, Jian Shen, Xiongwu Xiao, Xiaosong Liu, Xin Sun, Xiao Wang, and Deren Li. "Improved Neural Network with Spatial Pyramid Pooling and Online Datasets Preprocessing for Underwater Target Detection Based on Side Scan Sonar Imagery." Remote Sensing 15, no. 2 (January 11, 2023): 440. http://dx.doi.org/10.3390/rs15020440.

Повний текст джерела
Анотація:
Fast and high-accuracy detection of underwater targets based on side scan sonar images has great potential for marine fisheries, underwater security, marine mapping, underwater engineering and other applications. The following problems, however, must be addressed when using low-resolution side scan sonar images for underwater target detection: (1) the detection performance is limited due to the restriction on the input of multi-scale images; (2) the widely used deep learning algorithms have a low detection effect due to their complex convolution layer structures; (3) the detection performance is limited due to insufficient model complexity; and (4) the number of samples is not enough because of the dataset preprocessing methods. To solve these problems, an improved neural network for underwater target detection—based on side scan sonar images and fully utilizing spatial pyramid pooling and online dataset preprocessing with the You Only Look Once version three (YOLO V3) algorithm—is proposed. The methodology of the proposed approach is as follows: (1) the AlexNet, GoogleNet, VGGNet and ResNet networks and an adopted YOLO V3 algorithm were the backbone networks; the YOLO V3 model is more mature and compact and has higher target detection accuracy and better detection efficiency than the other models; (2) spatial pyramid pooling was added at the end of the convolution layer to improve detection performance. Spatial pyramid pooling breaks the scale restrictions on input images and improves feature extraction because it enables the backbone network to learn faster at high accuracy; and (3) online dataset preprocessing based on YOLO V3 with spatial pyramid pooling increases the number of samples and improves the complexity of the model to further improve detection performance. Three side scan imagery datasets were used for training and testing in the experiments. The quantitative evaluation using Accuracy, Recall, Precision, mAP and F1-Score metrics indicates that, for the AlexNet, GoogleNet, VGGNet and ResNet algorithms, when spatial pyramid pooling is added to their backbone networks, the average detection accuracy on the three sets of data is improved by 2%, 4%, 2% and 2%, respectively, compared to their original formulations. Compared with the original YOLO V3 model, the proposed ODP+YOLO V3+SPP underwater target detection model has improved detection performance: the mAP evaluation index has increased by 6%, the Precision evaluation index has increased by 13%, and the detection efficiency has increased by 9.34%. These results demonstrate that adding spatial pyramid pooling and online dataset preprocessing can improve the target detection accuracy of these commonly used algorithms. The proposed improved neural network with spatial pyramid pooling and online dataset preprocessing based on the YOLO V3 method achieves the highest scores for underwater target detection of sunken ships, fish flocks and seafloor topography, with mAP scores of 98%, 91% and 96% for the above three kinds of datasets, respectively.
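A minimal sketch of the spatial pyramid pooling block added "at the end of the convolution layer" is shown below: parallel max-pooling branches with different kernel sizes are concatenated with the input, fusing several receptive fields without constraining the input scale. The kernel sizes are illustrative, not the values used in the paper.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling as commonly used after a detection backbone:
    stride-1 max-pooling at several kernel sizes, concatenated with the input."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                       # x: (B, C, H, W)
        # Output keeps the spatial size and stacks pooled views: (B, 4C, H, W).
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)
```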
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Majumder, Mishuk, and Chester Wilmot. "Automated Vehicle Counting from Pre-Recorded Video Using You Only Look Once (YOLO) Object Detection Model." Journal of Imaging 9, no. 7 (June 27, 2023): 131. http://dx.doi.org/10.3390/jimaging9070131.

Повний текст джерела
Анотація:
Different techniques are being applied for automated vehicle counting from video footage, which is a subject of significant interest to many researchers. In this context, the recently developed You Only Look Once (YOLO) object detection model has emerged as a promising tool. However, existing research on employing the model for vehicle counting from video footage is likely insufficient in terms of accuracy and flexible interval counting. The present study endeavors to develop computer algorithms for automated traffic counting from pre-recorded videos using the YOLO model with flexible interval counting. The study involves the development of algorithms aimed at detecting, tracking, and counting vehicles from pre-recorded videos. The YOLO model was applied in the TensorFlow API with the assistance of OpenCV. The developed algorithms implement the YOLO model for counting vehicles in two-way directions in an efficient way. The accuracy of the automated counting was evaluated against manual counts and was found to be about 90 percent. The accuracy comparison also shows that the error of automated counting consistently occurs due to undercounting from unsuitable videos. In addition, a benefit–cost (B/C) analysis shows that implementing the automated counting method returns 1.76 times the investment.
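One common way to implement the two-way counting described above is a virtual counting line: a vehicle is counted when its tracked centroid crosses the line, with the crossing direction and a time bin recorded so that counts can be aggregated over flexible intervals. The sketch below is a hypothetical illustration of that logic, not the authors' algorithm.

```python
from collections import defaultdict

def count_line_crossings(tracks, line_y, interval_s=900, fps=30.0):
    """Count tracked vehicles crossing a horizontal virtual line, per direction
    and per time interval (e.g. 15-minute bins).

    tracks: {track_id: [(frame_idx, cx, cy), ...]} centroid history produced by
            a detector/tracker; all names here are illustrative.
    """
    counts = defaultdict(lambda: {"up": 0, "down": 0})
    for _, history in tracks.items():
        for (f0, _, y0), (f1, _, y1) in zip(history, history[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:          # centroid crossed the line
                bin_idx = int((f1 / fps) // interval_s)    # flexible counting interval
                direction = "down" if y1 > y0 else "up"
                counts[bin_idx][direction] += 1
                break                                      # count each track at most once
    return dict(counts)
```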
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Oruganti, Rakesh, and Namratha P. "Cascading Deep Learning Approach for Identifying Facial Expression YOLO Method." ECS Transactions 107, no. 1 (April 24, 2022): 16649–58. http://dx.doi.org/10.1149/10701.16649ecst.

Повний текст джерела
Анотація:
Face detection is one of the most important object detection tasks and is usually the first stage of facial recognition and identity verification. In recent years, deep learning algorithms have changed object detection dramatically. These algorithms can generally be divided into two groups: two-stage detectors such as Faster R-CNN and single-stage detectors such as YOLO. While YOLO and its variants are less accurate than two-stage detection systems, they outperform them in speed and generalize widely. YOLO works well on standard-sized objects but struggles to detect smaller objects. A face recognition system that uses AI (Artificial Intelligence) identifies or verifies a person's identity by analyzing their face. In this project, a single neural network predicts bounding boxes and class probabilities directly from full images in a single evaluation.
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Liu, Yuqing, Huiyong Chu, Liming Song, Zhonglin Zhang, Xing Wei, Ming Chen, and Jieran Shen. "An Improved Tuna-YOLO Model Based on YOLO v3 for Real-Time Tuna Detection Considering Lightweight Deployment." Journal of Marine Science and Engineering 11, no. 3 (March 2, 2023): 542. http://dx.doi.org/10.3390/jmse11030542.

Повний текст джерела
Анотація:
A real-time tuna detection network on mobile devices is a common tool for accurate tuna catch statistics. However, most object detection models have a large number of parameters, and ordinary mobile devices have difficulty satisfying real-time detection. Based on YOLOv3, this paper proposes Tuna-YOLO, a lightweight object detection network for mobile devices. Firstly, following a comparison of the performance of various lightweight backbone networks, MobileNet v3 was used as the backbone structure to reduce the number of parameters and calculations. Secondly, the SENET module was replaced with a CBAM attention module to further improve the feature extraction ability for tuna. Then, knowledge distillation was used to make Tuna-YOLO detect more accurately. We created a small dataset by extracting frames from electronic surveillance video of fishing boats and labeled the data. After data annotation, the K-means algorithm was used to obtain nine better anchor boxes on the basis of the label information, which were used to improve detection precision. In addition, we compared the detection performance of Tuna-YOLO and three versions of YOLO v5-6.1 (s/m/l) after image enhancement. The results show that Tuna-YOLO reduces the parameters of YOLOv3 from 234.74 MB to 88.45 MB, increases detection precision from 93.33% to 95.83%, and increases the calculation speed from 10.12 fps to 15.23 fps. The performance of Tuna-YOLO is better than that of the three YOLO v5-6.1 s/m/l versions. Tuna-YOLO provides a basis for subsequent deployment of algorithms to mobile devices and for real-time catch statistics.
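The K-means anchor step mentioned above usually clusters the labelled box sizes with 1 - IoU as the distance and takes the per-cluster medians as anchors. A small NumPy sketch of that recipe (not the paper's code) follows; the iteration count and seed are arbitrary.

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchors using 1 - IoU as the
    distance metric (the usual YOLO recipe).

    wh: (N, 2) array of box widths and heights taken from the label files.
    """
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # IoU between every box and every anchor, assuming a shared top-left corner.
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)          # nearest anchor = highest IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(wh[assign == j], axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sorted by area
```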
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Patel, Krishna, Chintan Bhatt, and Pier Luigi Mazzeo. "Deep Learning-Based Automatic Detection of Ships: An Experimental Study Using Satellite Images." Journal of Imaging 8, no. 7 (June 28, 2022): 182. http://dx.doi.org/10.3390/jimaging8070182.

Повний текст джерела
Анотація:
The remote sensing surveillance of maritime areas represents an essential task for both security and environmental reasons. Recently, learning strategies belonging to the field of machine learning (ML) have become a niche of interest for the remote sensing community. Specifically, a major challenge is the automatic classification of ships from satellite imagery, which is needed for traffic surveillance systems, protection against illegal fishing, control systems for oil discharge, and the monitoring of sea pollution. Deep learning (DL) is a branch of ML that has emerged in the last few years as a result of advancements in digital technology and data availability. DL has shown capacity and efficacy in tackling difficult learning tasks that were previously intractable. Specifically, DL methods, such as convolutional neural networks (CNNs), have been reported to be efficient in image detection and recognition applications. In this paper, we focused on the development of an automatic ship detection (ASD) approach using DL methods to assess the Airbus ship dataset (composed of about 40 K satellite images). The paper explores and analyzes distinct variations of the YOLO algorithm for the detection of ships from satellite images. A comparison of different versions of the YOLO algorithm for ship detection, namely YOLOv3, YOLOv4, and YOLOv5, is presented after training them on a personal computer with a large dataset of satellite images from the Airbus Ship Challenge and Shipsnet. The differences between the algorithms could be observed on the personal computer. We have confirmed that these algorithms can be used for effective ship detection from satellite images. The conclusion drawn from the conducted research is that the YOLOv5 object detection algorithm outperforms the other versions of the YOLO algorithm, i.e., YOLOv4 and YOLOv3, in terms of accuracy, achieving 99% for YOLOv5 compared to 98% and 97% for YOLOv4 and YOLOv3, respectively.
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Chen, Yuntao, Bin Wu, Guangzhi Luo, Xiaoyan Chen, and Junlin Liu. "Multi-target tracking algorithm based on YOLO+DeepSORT." Journal of Physics: Conference Series 2414, no. 1 (December 1, 2022): 012018. http://dx.doi.org/10.1088/1742-6596/2414/1/012018.

Повний текст джерела
Анотація:
After several years of development, multi-target tracking algorithms have largely transitioned from research into practical production and everyday life. The application field of human detection and tracking technology is closely related to our daily life. To address the problems of background complexity, the diversity of object shapes, the mutual occlusion of multiple tracking targets, and lost targets in multi-target tracking applications, this paper improves the DeepSORT target tracking algorithm: an improved YOLO network is used to detect pedestrians, the detection boxes are fed to a Kalman filter for prediction, and the Hungarian algorithm is then used to match the predicted tracking boxes with the detection boxes. The experimental results show that target tracking accuracy is increased by 4.3%, the running time is the shortest, and the number of successfully tracked targets is relatively high.
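The Kalman-prediction-plus-Hungarian-matching step described above reduces to solving an assignment problem on an IoU cost matrix; a minimal sketch using scipy's linear_sum_assignment is given below, with the IoU threshold chosen arbitrarily for illustration and no claim that this matches the paper's exact pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(tracks, detections):
    """IoU between Kalman-predicted track boxes and detector boxes,
    both given as lists/arrays of [x1, y1, x2, y2]."""
    ious = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            x1, y1 = max(t[0], d[0]), max(t[1], d[1])
            x2, y2 = min(t[2], d[2]), min(t[3], d[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = ((t[2] - t[0]) * (t[3] - t[1]) +
                     (d[2] - d[0]) * (d[3] - d[1]) - inter)
            ious[i, j] = inter / union if union > 0 else 0.0
    return ious

def match(tracks, detections, iou_thresh=0.3):
    """Hungarian assignment between predicted track boxes and current YOLO
    detections; low-IoU pairs are rejected as unmatched."""
    cost = 1.0 - iou_matrix(tracks, detections)            # minimize 1 - IoU
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
```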
Стилі APA, Harvard, Vancouver, ISO та ін.
