Journal articles on the topic "YOLO method"

To view other types of publications on this topic, follow the link: YOLO method.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "YOLO method".

Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when such data are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Wu, Wentong, Han Liu, Lingling Li, Yilin Long, Xiaodong Wang, Zhuohua Wang, Jinglun Li, and Yi Chang. "Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image." PLOS ONE 16, no. 10 (October 29, 2021): e0259283. http://dx.doi.org/10.1371/journal.pone.0259283.

Abstract:
This exploration primarily aims to jointly apply the local FCN (fully convolutional neural network) and YOLO-v5 (You Only Look Once v5) to the detection of small targets in remote sensing images. First, the application effects of R-CNN (Region-Convolutional Neural Network), FRCN (Fast Region-Convolutional Neural Network), and R-FCN (Region-Based Fully Convolutional Network) in image feature extraction are analyzed after introducing the relevant region proposal network. Second, the YOLO-v5 algorithm is built on the basis of the original YOLO algorithm, and the multi-scale anchor mechanism of Faster R-CNN is utilized to improve the detection ability of YOLO-v5 for small targets and to make it highly adaptable to images of different sizes. Finally, the proposed detection method, YOLO-v5 + R-FCN, is compared with other algorithms on the NWPU VHR-10 and Vaihingen data sets. The experimental results show that the YOLO-v5 + R-FCN detection method has the best detection ability among the compared algorithms, especially for small targets in remote sensing images such as tennis courts, vehicles, and storage tanks, and it achieves high recall rates for different types of small targets. Furthermore, due to its deeper network architecture, the YOLO-v5 + R-FCN detection method has a stronger ability to extract the characteristics of image targets and achieves more accurate feature recognition and detection for densely arranged targets in remote sensing images. This research can provide a reference for the application of remote sensing technology in China and promote the use of satellites for target detection tasks in related fields.
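The multi-scale anchor mechanism borrowed from Faster R-CNN can be sketched in a few lines: each feature-map cell is given anchor shapes at several scales and aspect ratios, so that small remote-sensing targets match at least one prior. This is a generic illustration of the idea, not the authors' code; the base size, scales, and ratios below are assumed values.

```python
# Generate (w, h) anchor shapes per feature-map cell, Faster R-CNN style:
# one anchor for every combination of scale and aspect ratio.
def make_anchors(base_size, scales, ratios):
    """Return (w, h) anchor shapes for every scale/ratio combination."""
    anchors = []
    for s in scales:
        area = (base_size * s) ** 2
        for r in ratios:                    # r = height / width
            w = (area / r) ** 0.5
            h = w * r
            anchors.append((round(w, 2), round(h, 2)))
    return anchors

anchors = make_anchors(16, scales=(1, 2, 4), ratios=(0.5, 1.0, 2.0))
print(len(anchors))  # 9 anchors per feature-map cell
```

Small targets are then matched against the small-scale anchors, while the larger scales cover big objects in the same image.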
2

Liu, Chunsheng, Yu Guo, Shuang Li, and Faliang Chang. "ACF Based Region Proposal Extraction for YOLOv3 Network Towards High-Performance Cyclist Detection in High Resolution Images." Sensors 19, no. 12 (June 13, 2019): 2671. http://dx.doi.org/10.3390/s19122671.

Abstract:
The You Only Look Once (YOLO) deep network can detect objects quickly with high precision and has been successfully applied to many detection problems. Its main shortcoming is that it usually cannot achieve high precision for small-size object detection in high-resolution images. To overcome this problem, we propose an effective region proposal extraction method for the YOLO network, forming a complete detection structure named ACF-PR-YOLO, and use the cyclist detection problem to demonstrate our method. Instead of directly using the generated region proposals for classification or regression, as most region proposal methods do, we generate large-size potential regions containing objects for the following deep network. The proposed ACF-PR-YOLO structure includes three main parts. First, a region proposal extraction method based on the aggregated channel feature (ACF) is proposed, called the ACF-based region proposal (ACF-PR) method. In ACF-PR, ACF is first utilized to quickly extract candidates, and then a bounding-box merging and extending method merges the bounding boxes into correct region proposals for the following YOLO net. Second, we design a suitable YOLO net for fine detection within the region proposals generated by ACF-PR. Last, we design a post-processing step in which the results of the YOLO net are mapped back into the original image, outputting the detection and localization results. Experiments performed on the Tsinghua-Daimler Cyclist Benchmark, with high-resolution images and complex scenes, show that the proposed method outperforms the other tested representative detection methods in average precision, exceeding YOLOv3 by 13.69% and SSD by 25.27% average precision.
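The "bounding boxes merging and extending" step of ACF-PR can be illustrated with a minimal sketch: overlapping ACF candidates are greedily merged into their union and then padded into a large-size region proposal for the following YOLO net. The greedy strategy and the fixed padding amount are assumptions for illustration, not the paper's exact procedure.

```python
def overlap(a, b):
    """True if axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_and_extend(boxes, pad=16):
    """Greedily merge overlapping boxes into unions, then pad each union."""
    merged = []
    for box in boxes:
        box = list(box)
        i = 0
        while i < len(merged):
            if overlap(box, merged[i]):
                other = merged.pop(i)
                box = [min(box[0], other[0]), min(box[1], other[1]),
                       max(box[2], other[2]), max(box[3], other[3])]
                i = 0            # restart: the union may now touch earlier boxes
            else:
                i += 1
        merged.append(box)
    return [[x1 - pad, y1 - pad, x2 + pad, y2 + pad]
            for x1, y1, x2, y2 in merged]

proposals = merge_and_extend([(10, 10, 50, 50), (40, 40, 90, 90), (200, 200, 220, 220)])
print(proposals)  # two padded regions: one merged union, one isolated box
```

The two overlapping candidates collapse into a single padded region; a real implementation would additionally clip the padded regions to the image bounds.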
3

Huang, Shan, Ye He, and Xiao-an Chen. "M-YOLO: A Nighttime Vehicle Detection Method Combining Mobilenet v2 and YOLO v3." Journal of Physics: Conference Series 1883, no. 1 (April 1, 2021): 012094. http://dx.doi.org/10.1088/1742-6596/1883/1/012094.

4

Li, Xun, Yao Liu, Zhengfan Zhao, Yue Zhang, and Li He. "A Deep Learning Approach of Vehicle Multitarget Detection from Traffic Video." Journal of Advanced Transportation 2018 (November 4, 2018): 1–11. http://dx.doi.org/10.1155/2018/7075814.

Abstract:
Vehicle detection is expected to be robust and efficient in various scenes. We propose a multivehicle detection method based on YOLO under the Darknet framework, improving the YOLO-voc structure according to the target scene and the traffic flow. The classification training model is obtained on ImageNet, and the parameters are fine-tuned according to the training results and the vehicle characteristics, yielding an effective YOLO-vocRV network for road vehicle detection. To verify the performance of our method, experiments are carried out on different vehicle flow states and compared with the classical YOLO-voc, YOLO9000, and YOLO v3. The experimental results show that our method achieves a detection rate of 98.6% in the free-flow state, 97.8% in the synchronous-flow state, and 96.3% in the blocking-flow state. In addition, our proposed method has a lower false detection rate than previous works and shows good robustness.
5

Daniels, Steve, Nanik Suciati, and Chastine Fathichah. "Indonesian Sign Language Recognition using YOLO Method." IOP Conference Series: Materials Science and Engineering 1077, no. 1 (February 1, 2021): 012029. http://dx.doi.org/10.1088/1757-899x/1077/1/012029.

6

Chen, Wei, Jingfeng Zhang, Biyu Guo, Qingyu Wei, and Zhiyu Zhu. "An Apple Detection Method Based on Des-YOLO v4 Algorithm for Harvesting Robots in Complex Environment." Mathematical Problems in Engineering 2021 (October 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/7351470.

Abstract:
Real-time detection of apples in the natural environment is a necessary condition for robots to pick apples automatically, and it is also a key technique for orchard yield prediction and fine management. To make harvesting robots detect apples quickly and accurately in complex environments, a Des-YOLO v4 algorithm and an apple detection method are proposed. Compared with the current mainstream detection algorithms, YOLO v4 has better detection performance; however, its complex network structure reduces the picking efficiency of the robot. Therefore, a Des-YOLO structure is proposed, which reduces network parameters and improves the detection speed of the algorithm. In the training phase, the imbalance of positive and negative samples causes false detection of apples; to solve this problem, a class loss function based on AP-Loss (Average Precision Loss) is proposed to improve the accuracy of apple recognition. The traditional YOLO algorithm uses NMS (Non-Maximum Suppression) to filter the prediction boxes, but NMS cannot detect adjacent apples when they overlap each other. Therefore, Soft-NMS is used instead of NMS to solve this missed-detection problem and improve the generalization of the algorithm. The proposed algorithm is tested on a self-made apple image data set. The results show that the Des-YOLO v4 network achieves a mAP (mean Average Precision) of 97.13% for apple detection, a recall rate of 90%, and a detection speed of 51 f/s. Compared with traditional network models such as YOLO v4 and Faster R-CNN, Des-YOLO v4 meets the accuracy and speed requirements of apple detection at the same time. Finally, a self-designed apple-harvesting robot is used to carry out a harvesting experiment, which shows a harvesting time of 8.7 seconds and a successful harvesting rate of 92.9%. Therefore, the proposed apple detection method offers higher recognition accuracy and faster recognition speed, providing new solutions for apple-harvesting robots and new ideas for smart agriculture.
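The Soft-NMS substitution described above can be sketched in pure Python: instead of discarding a box that overlaps the current best detection, its score is decayed in proportion to the overlap, so adjacent overlapping apples can survive. This is the standard linear-decay variant of Soft-NMS with illustrative thresholds, not the paper's exact configuration.

```python
def iou(a, b):
    """Intersection over union of boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(dets, iou_thresh=0.3, score_thresh=0.1):
    """dets: list of (box, score). Returns surviving (box, score) pairs."""
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    keep = []
    while dets:
        best = dets.pop(0)
        keep.append(best)
        rescored = []
        for box, score in dets:
            o = iou(best[0], box)
            if o > iou_thresh:
                score *= (1.0 - o)        # decay instead of hard removal
            if score >= score_thresh:
                rescored.append((box, score))
        dets = sorted(rescored, key=lambda d: d[1], reverse=True)
    return keep
```

With hard NMS, a second apple overlapping the first by more than the IoU threshold would be deleted outright; here it merely loses score and is still reported unless the decayed score falls below `score_thresh`.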
7

Oruganti, Rakesh, and Namratha P. "Cascading Deep Learning Approach for Identifying Facial Expression YOLO Method." ECS Transactions 107, no. 1 (April 24, 2022): 16649–58. http://dx.doi.org/10.1149/10701.16649ecst.

Abstract:
Face detection is one of the fundamental object detection tasks, and detection is usually the first stage of facial recognition and identity verification. In recent years, deep learning algorithms have changed object detection dramatically. These algorithms can usually be divided into two groups: two-stage detectors such as Faster R-CNN and single-stage detectors such as YOLO. While YOLO and its variants are less accurate than two-stage detection systems, they are considerably faster. YOLO works well on standard-sized objects but struggles to detect smaller ones. A face recognition system that uses AI (Artificial Intelligence) identifies or verifies a person by analyzing their face. In this project, a single neural network predicts bounding boxes and class probabilities directly from full images in a single evaluation.
8

He, Guowen, Wenlong Wang, Bowen Shi, Shijie Liu, Hui Xiang, and Xiaoyuan Wang. "An Improved YOLO v4 Algorithm-based Object Detection Method for Maritime Vessels." International Journal of Science and Engineering Applications 11, no. 04 (April 2022): 50–55. http://dx.doi.org/10.7753/ijsea1104.1001.

Abstract:
Ship object detection is the core part of maritime intelligent-ship safety assistance technology and plays a crucial role in ship safety. Object detection algorithms based on convolutional neural networks have greatly improved the accuracy and speed of object detection, and among them the YOLO algorithm stands out for its robustness, detection accuracy, and real-time performance. Based on the YOLO v4 algorithm, this study uses the k-means algorithm to improve clustering at the input side of the image data and introduces relevant berth data into a self-organized dataset, so that both ships and berths are detected, addressing the lack of berth detection in existing ship detection algorithms. The experimental results show that the mAP and F1-score of the improved YOLO v4 are increased by 2.79% and 0.80%, respectively. The improved YOLO v4 algorithm effectively improves the accuracy of ship object detection; in-port berths are also detected well, improving the ship's environment perception, which is important in assisting berthing and unberthing.
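The input-side k-means clustering mentioned above is, in the generic YOLO recipe, run on ground-truth box shapes with a 1 − IoU distance so that the learned anchors match the data. A minimal sketch under that assumption (deterministic initialization and plain averaging; the authors' exact variant may differ):

```python
def shape_iou(a, b):
    """IoU of two boxes (w, h) aligned at a shared top-left corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(shapes, k, iters=50):
    """Cluster (w, h) box shapes; distance = 1 - IoU, so we maximize IoU."""
    centers = list(shapes[:k])                   # simple deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in shapes:
            j = max(range(k), key=lambda i: shape_iou(s, centers[i]))
            clusters[j].append(s)
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

# Toy data: two small ship-like shapes and two wide berth-like shapes.
print(kmeans_anchors([(10, 10), (12, 12), (100, 40), (110, 50)], 2))
```

Using 1 − IoU instead of Euclidean distance keeps large boxes from dominating the clustering, which is why the YOLO family adopted it for anchor selection.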
9

Wang, Ying, Jianbo Wu, Hui Deng, and Xianghui Zeng. "Food Image Recognition and Food Safety Detection Method Based on Deep Learning." Computational Intelligence and Neuroscience 2021 (December 16, 2021): 1–13. http://dx.doi.org/10.1155/2021/1268453.

Abstract:
With the development of machine learning, deep learning, as one of its branches, has been applied in many fields such as image recognition, image segmentation, and video segmentation. In recent years, deep learning has also been gradually applied to food recognition. However, food recognition scenes are highly complex, and the accuracy and speed of recognition remain a concern. This paper tries to solve these problems and proposes a food image recognition method based on neural networks. Combining Tiny-YOLO with a Siamese (twin) network, the method introduces a two-stage learning mode called YOLO-SIMM and designs two versions, YOLO-SiamV1 and YOLO-SiamV2. In experiments, the method achieves moderate recognition accuracy, but it requires no manual labeling and therefore has good prospects for practical popularization and application. In addition, a method for foreign-body detection and recognition in food is proposed, which can effectively separate foreign bodies from food by threshold segmentation. Experimental results show that this method can effectively distinguish desiccant from other foreign matter and achieves the desired effect.
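The foreign-body step above reduces to threshold segmentation: pixels whose intensity differs enough from the food background are flagged as foreground. A minimal fixed-threshold sketch follows; the paper's actual threshold choice is not specified here, and a production system might pick the threshold automatically (e.g. Otsu's method).

```python
def threshold_segment(image, thresh):
    """image: 2D list of grayscale values -> binary foreground mask."""
    return [[1 if px > thresh else 0 for px in row] for row in image]

def foreground_pixels(mask):
    """Count pixels flagged as foreground (suspected foreign body)."""
    return sum(sum(row) for row in mask)

image = [
    [20, 22, 21, 200],    # bright pixels = suspected foreign body
    [19, 23, 20, 198],
    [21, 20, 22, 24],
]
mask = threshold_segment(image, 128)
print(foreground_pixels(mask))  # 2
```

Connected regions of the mask would then be measured and classified (desiccant vs. other foreign matter) in a later stage.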
10

Huang, Zhijian, Fangmin Li, Xidao Luan, and Zuowei Cai. "A Weakly Supervised Method for Mud Detection in Ores Based on Deep Active Learning." Mathematical Problems in Engineering 2020 (May 30, 2020): 1–10. http://dx.doi.org/10.1155/2020/3510313.

Abstract:
Automatically detecting mud in bauxite ores is important and valuable: with it we can improve productivity and reduce pollution. However, distinguishing mud from ore in a real scene is challenging because of their similarity in shape, color, and texture. Moreover, training a deep learning model needs a large amount of exactly labeled samples, which is expensive and time-consuming. To address this challenging problem, this paper proposes a novel weakly supervised method based on deep active learning (AL), named YOLO-AL. The method uses the YOLO-v3 model as the basic detector, initialized with weights pretrained on the MS COCO dataset, and embeds it in an AL framework. In the AL process, the last few layers of the YOLO-v3 model are iteratively fine-tuned with the most valuable samples, which are selected by a Less Confident (LC) strategy. Experimental results show that the proposed method can effectively detect mud in ores and, more importantly, can markedly reduce the number of labeled samples without decreasing detection accuracy.
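The Less Confident selection step can be sketched directly: from a pool of unlabeled images, pick for annotation those whose top detection confidence is lowest, since those are the samples the detector is least sure about. The data layout (image id mapped to a list of confidences) is an assumption for illustration.

```python
def least_confident(pool, budget):
    """pool: dict image_id -> list of detection confidences.
    Returns the `budget` image ids the model is least sure about."""
    def top_conf(item):
        _, confs = item
        return max(confs) if confs else 0.0   # no detection = maximally unsure
    ranked = sorted(pool.items(), key=top_conf)
    return [img_id for img_id, _ in ranked[:budget]]

pool = {"a.jpg": [0.95, 0.40], "b.jpg": [0.55], "c.jpg": [0.99], "d.jpg": []}
print(least_confident(pool, 2))  # ['d.jpg', 'b.jpg']
```

Each AL iteration would label the selected images, fine-tune the detector's last layers on them, and re-score the remaining pool.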
11

Kang, Seongju, Jaegi Hwang, and Kwangsue Chung. "Domain-Specific On-Device Object Detection Method." Entropy 24, no. 1 (January 1, 2022): 77. http://dx.doi.org/10.3390/e24010077.

Abstract:
Object detection is a significant activity in computer vision, and various approaches have been proposed to detect varied objects using deep neural networks (DNNs). However, because DNNs are computation-intensive, it is difficult to apply them to resource-constrained devices. Here, we propose an on-device object detection method using domain-specific models. In the proposed method, we define object-of-interest (OOI) groups that contain objects with a high frequency of appearance in specific domains. Compared with the existing DNN model, the layers of the domain-specific models are shallower and narrower, reducing the number of trainable parameters and thus speeding up object detection. To ensure a lightweight network design, we combine various network structures to obtain the best-performing lightweight detection model. The experimental results reveal that the size of the proposed lightweight model is 21.7 MB, which is 91.35% and 36.98% smaller than those of YOLOv3-SPP and Tiny-YOLO, respectively. The f-measure achieved on the MS COCO 2017 dataset was 18.3%, 11.9%, and 20.3% higher than those of YOLOv3-SPP, Tiny-YOLO, and YOLO-Nano, respectively. The results demonstrate that the lightweight model achieves higher efficiency and better performance on non-GPU devices, such as mobile devices and embedded boards, than conventional models.
12

K., Gayathri, and Thangavelu S. "Novel deep learning model for vehicle and pothole detection." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1576. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1576-1582.

Abstract:
The most important aspects of automatic driving and traffic surveillance include vehicle detection. In addition, poor road conditions caused by potholes lead to traffic accidents and vehicle damage. The proposed work uses deep learning models to detect vehicles and potholes in images, implemented with the faster region-based convolutional neural network (Faster R-CNN) and the Inception V2 model. The work compares the performance, accuracy, detection time, and advantages and disadvantages of Faster R-CNN against the single shot detector (SSD) and you only look once (YOLO) algorithms. With accuracy as the performance measure, the proposed method performs better than the existing SSD and YOLO methods, showing an improvement of 5% over them.
13

Cao Chengshuo, 曹城硕, and 袁杰 Yuan Jie. "Mask-Wearing Detection Method Based on YOLO-Mask." Laser & Optoelectronics Progress 58, no. 8 (2021): 0810019. http://dx.doi.org/10.3788/lop202158.0810019.

14

Sun, Zhongzhen, Xiangguang Leng, Yu Lei, Boli Xiong, Kefeng Ji, and Gangyao Kuang. "BiFA-YOLO: A Novel YOLO-Based Method for Arbitrary-Oriented Ship Detection in High-Resolution SAR Images." Remote Sensing 13, no. 21 (October 20, 2021): 4209. http://dx.doi.org/10.3390/rs13214209.

Abstract:
Due to its great application value in the military and civilian fields, ship detection in synthetic aperture radar (SAR) images has always attracted much attention. However, ship targets in High-Resolution (HR) SAR images show the significant characteristics of multi-scale, arbitrary directions and dense arrangement, posing enormous challenges to detect ships quickly and accurately. To address these issues, a novel YOLO-based arbitrary-oriented SAR ship detector using bi-directional feature fusion and angular classification (BiFA-YOLO) is proposed in this article. First of all, a novel bi-directional feature fusion module (Bi-DFFM) tailored to SAR ship detection is applied to the YOLO framework. This module can efficiently aggregate multi-scale features through bi-directional (top-down and bottom-up) information interaction, which is helpful for detecting multi-scale ships. Secondly, to effectively detect arbitrary-oriented and densely arranged ships in HR SAR images, we add an angular classification structure to the head network. This structure is conducive to accurately obtaining ships' angle information without the problem of boundary discontinuity and complicated parameter regression. Meanwhile, in BiFA-YOLO, a random rotation mosaic data augmentation method is employed to suppress the impact of angle imbalance. Compared with other conventional data augmentation methods, the proposed method can better improve the detection performance for arbitrary-oriented ships. Finally, we conduct extensive experiments on the SAR ship detection dataset (SSDD) and large-scene HR SAR images from the GF-3 satellite to verify our method. The proposed method reaches precision = 94.85%, recall = 93.97%, average precision = 93.90%, and F1-score = 0.9441 on SSDD, with a detection speed of approximately 13.3 ms per 512 × 512 image. In addition, comparison experiments with other deep learning-based methods and verification experiments on large-scene HR SAR images demonstrate that our method shows strong robustness and adaptability.
15

Hu, Jianming, Xiyang Zhi, Tianjun Shi, Wei Zhang, Yang Cui, and Shenggang Zhao. "PAG-YOLO: A Portable Attention-Guided YOLO Network for Small Ship Detection." Remote Sensing 13, no. 16 (August 4, 2021): 3059. http://dx.doi.org/10.3390/rs13163059.

Abstract:
The YOLO network has been extensively employed in the field of ship detection in optical images. However, the YOLO model rarely considers the global and local relationships in the input image, which limits the final target prediction performance to a certain extent, especially for small ship targets. To address this problem, we propose a novel small ship detection method, which improves the detection accuracy compared with the YOLO-based network architecture and does not increase the amount of computation significantly. Specifically, attention mechanisms in spatial and channel dimensions are proposed to adaptively assign the importance of features in different scales. Moreover, in order to improve the training efficiency and detection accuracy, a new loss function is employed to constrain the detection step, which enables the detector to learn the shape of the ship target more efficiently. The experimental results on a public and high-quality ship dataset indicate that our method realizes state-of-the-art performance in comparison with several widely used advanced approaches.
16

Lee, Tae-hee, Young-seok Park, Young-mo Kim, and Doo-hyun Choi. "A Method of Counting Vehicle with High Accuracy Using YOLO v3." Transaction of the Korean Society of Automotive Engineers 29, no. 3 (March 1, 2021): 283–88. http://dx.doi.org/10.7467/ksae.2021.29.3.283.

17

Liu, Jiayi, Xingfei Zhu, Xingyu Zhou, Shanhua Qian, and Jinghu Yu. "Defect Detection for Metal Base of TO-Can Packaged Laser Diode Based on Improved YOLO Algorithm." Electronics 11, no. 10 (May 13, 2022): 1561. http://dx.doi.org/10.3390/electronics11101561.

Abstract:
Defect detection is an important part of the manufacturing process of mechanical products. In order to detect appearance defects quickly and accurately, a defect detection method for the metal base of TO-can packaged laser diodes (metal TO-base), based on an improved You Only Look Once (YOLO) algorithm named YOLO-SO, is proposed in this study. First, a convolutional block attention module (CBAM) was added to the convolutional layers of the backbone network. Then, a random-paste-mosaic (RPM) small-object data augmentation module was proposed on the basis of the Mosaic algorithm in YOLO-V5. Finally, the K-means++ clustering algorithm was applied to reduce the sensitivity to the initial cluster centers, making localization more accurate and reducing the network loss. The proposed YOLO-SO model was compared with other object detection algorithms such as YOLO-V3, YOLO-V4, and Faster R-CNN. Experimental results demonstrate that the YOLO-SO model reaches 84.0% mAP, 5.5% higher than the original YOLO-V5 algorithm. Moreover, the YOLO-SO model has clear advantages in its small weight size and its detection speed of 25 FPS. These advantages make the YOLO-SO model more suitable for real-time detection of metal TO-base appearance defects.
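The random-paste idea behind an RPM-style module can be sketched geometrically: crops of small defects are pasted at random free locations to raise the small-object count per training image. Only the box geometry is shown here; actual pixel blending is omitted, and the interface is an assumption for illustration rather than the paper's implementation.

```python
import random

def overlaps(a, b):
    """True if axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def random_paste(image_wh, boxes, crop_whs, tries=50, seed=0):
    """Return `boxes` plus one pasted box per crop that found a free spot."""
    rng = random.Random(seed)
    out = list(boxes)
    W, H = image_wh
    for cw, ch in crop_whs:
        for _ in range(tries):
            x = rng.randrange(0, W - cw)
            y = rng.randrange(0, H - ch)
            cand = (x, y, x + cw, y + ch)
            if not any(overlaps(cand, b) for b in out):
                out.append(cand)        # free spot found: keep the paste
                break
    return out

boxes = random_paste((640, 640), [(0, 0, 100, 100)], [(20, 20), (30, 30)])
print(len(boxes))  # up to 3 boxes: the original plus the pasted defects
```

The pasted boxes are added to the image's label file like any other ground-truth object, which is what makes the augmentation effective for small-object training.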
18

Du, Luyao, Xiongjie Chen, Zhonghui Pei, Donghua Zhang, Bo Liu, and Wei Chen. "Improved Real-Time Traffic Obstacle Detection and Classification Method Applied in Intelligent and Connected Vehicles in Mixed Traffic Environment." Journal of Advanced Transportation 2022 (April 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/2259113.

Abstract:
Mixed traffic is a common phenomenon in urban environments. In mixed traffic, the detection of traffic obstacles, including motor vehicles, non-motor vehicles, and pedestrians, is an essential task for intelligent and connected vehicles (ICVs). In this paper, an improved YOLO model is proposed for traffic obstacle detection and classification. The YOLO network is used to accurately detect traffic obstacles, while a Wasserstein distance-based loss is used to reduce the misclassifications that may cause serious consequences. A newly established dataset containing four types of traffic obstacles (vehicles, bikes, riders, and pedestrians) was collected during different time periods and weather conditions in the urban environment of Wuhan, China. Experiments are performed on this dataset on a Windows PC and an NVIDIA TX2, respectively. The experimental results show that the improved YOLO model has higher mean average precision than the original YOLO model and can effectively reduce intolerable misclassifications. In addition, the improved YOLOv4-tiny model reaches a detection speed of 22.5928 fps on the NVIDIA TX2, which basically realizes real-time detection of traffic obstacles.
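A Wasserstein-distance-based class loss in the spirit described above can be sketched as follows: classes are placed on an ordered axis (e.g. by misclassification cost) and the 1-D Wasserstein distance between the predicted and target class distributions is the sum of absolute CDF differences, so confusing a pedestrian with a nearby class costs less than confusing it with a distant one. The class ordering and this exact formulation are illustrative assumptions, not the paper's loss.

```python
def wasserstein_1d(p, q):
    """W1 distance between two discrete distributions on ordered bins."""
    assert len(p) == len(q)
    cdf_diff, cp, cq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        cdf_diff += abs(cp - cq)
    return cdf_diff

# Classes ordered on the axis: [pedestrian, rider, bike, vehicle].
target = [1.0, 0.0, 0.0, 0.0]          # true class: pedestrian
near   = [0.1, 0.9, 0.0, 0.0]          # confused with the adjacent 'rider'
far    = [0.1, 0.0, 0.0, 0.9]          # confused with the distant 'vehicle'
print(wasserstein_1d(near, target) < wasserstein_1d(far, target))  # True
```

Unlike cross-entropy, which would score both errors identically, this distance penalizes the more dangerous far-away confusion more heavily.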
19

Kim, Munhyeong, Jongmin Jeong, and Sungho Kim. "ECAP-YOLO: Efficient Channel Attention Pyramid YOLO for Small Object Detection in Aerial Image." Remote Sensing 13, no. 23 (November 29, 2021): 4851. http://dx.doi.org/10.3390/rs13234851.

Abstract:
Detection of small targets in aerial images is still a difficult problem due to low resolution and background-like targets. With the recent development of object detection technology, efficient and high-performance detector techniques have been developed; among them, the YOLO series is a representative method that is light and performs well. In this paper, we propose a method to improve small-target detection performance in aerial images by modifying YOLOv5. First, the backbone was modified by applying an efficient channel attention module, and a channel attention pyramid method was proposed, yielding the efficient channel attention pyramid YOLO (ECAP-YOLO). Second, to optimize the detection of small objects, we eliminated the module for detecting large objects and added a detection layer for finding smaller objects, reducing the computing power spent on large targets and improving the small-target detection rate. Finally, we use transposed convolution instead of upsampling. Compared with the original YOLOv5, the proposed method improves the mAP by 6.9% on the VEDAI dataset, by 5.4% when detecting small cars in the xView dataset, by 2.7% when detecting the small-vehicle and small-ship classes of the DOTA dataset, and by approximately 2.4% when finding small cars in the Arirang dataset.
20

Yin, Jun, Huadong Pan, Hui Su, Zhonggeng Liu, and Zhirong Peng. "A Fast Orientation Invariant Detector Based on the One-stage Method." MATEC Web of Conferences 232 (2018): 04036. http://dx.doi.org/10.1051/matecconf/201823204036.

Abstract:
We propose an object detection method that predicts oriented bounding boxes (OBB) to estimate object locations, scales, and orientations, based on YOLO (You Only Look Once), one of the top detection algorithms performing well in both accuracy and speed. Existing object detection methods use horizontal bounding boxes (HBB), which are not robust to orientation variances. The proposed orientation-invariant YOLO (OIYOLO) detector can effectively deal with bird's-eye-view images where the orientation angles of the objects are arbitrary. To estimate the rotated angle of objects, we design a new angle loss function; training therefore forces the network to learn the annotated orientation angle of objects, making OIYOLO orientation invariant. The proposed approach of predicting OBB can be applied in other detection frameworks. In addition, to evaluate the proposed OIYOLO detector, we create a UAV-DAHUA dataset accurately annotated with object locations, scales, and orientation angles. Extensive experiments conducted on the UAV-DAHUA and DOTA datasets demonstrate that OIYOLO achieves state-of-the-art detection performance with high efficiency compared with the baseline YOLO algorithms.
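Why an angle loss needs care can be shown in a few lines: comparing raw angles breaks at the period boundary (359° vs. 1° are almost the same orientation), so a common remedy is to compare (sin, cos) encodings instead. This particular formulation is a generic illustration of the boundary problem, not the paper's exact loss.

```python
import math

def naive_angle_loss(pred_deg, true_deg):
    """Direct angle difference: blows up across the 0/360 boundary."""
    return abs(pred_deg - true_deg)

def sincos_angle_loss(pred_deg, true_deg):
    """Compare periodic (sin, cos) encodings instead of raw angles."""
    p, t = math.radians(pred_deg), math.radians(true_deg)
    return (math.sin(p) - math.sin(t)) ** 2 + (math.cos(p) - math.cos(t)) ** 2

# Near the boundary the naive loss explodes; the encoded loss stays small.
print(naive_angle_loss(359, 1))     # 358
print(sincos_angle_loss(359, 1))
```

A loss that is continuous across the period boundary is what lets gradient descent learn arbitrary orientations without penalizing near-identical angles.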
21

Chen, Ziwen, Lijie Cao, and Qihua Wang. "YOLOv5-Based Vehicle Detection Method for High-Resolution UAV Images." Mobile Information Systems 2022 (May 2, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828848.

Abstract:
To address the feature loss caused by compressing high-resolution images during the normalization stage, an adaptive clipping algorithm based on the You Only Look Once (YOLO) object detection algorithm is proposed for the data preprocessing and detection stages. First, a high-resolution training dataset is augmented with the adaptive clipping algorithm, generating a new training set that retains the detailed features the object detection network needs to learn. During detection, the image is processed in chunks via the adaptive clipping algorithm, and the coordinates of the detection results are merged by position mapping. Finally, the chunked detection results are combined with the global detection results and output. The improved YOLO algorithm is compared with the original algorithm on test-set vehicle detection. The experimental results show that, compared with the original YOLO object detection algorithm, the precision of our algorithm increases from 79.5% to 91.9%, the recall from 44.2% to 82.5%, and the mAP@0.5 from 47.9% to 89.6%. Applying the adaptive clipping algorithm in the vehicle detection process effectively improves the performance of the traditional object detection algorithm.
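The chunked-detection-plus-position-mapping idea can be sketched as follows: the high-resolution image is split into overlapping tiles, each tile is detected at full detail, and the tile-local box coordinates are mapped back by adding the tile offset. Tile size, overlap, and the detector stub are illustrative assumptions; a real implementation would also clamp the last tile to the image edge and de-duplicate boxes in the overlap zones.

```python
def make_tiles(width, height, tile, overlap):
    """Return (x0, y0) offsets of tiles covering the image."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    return [(x, y) for y in ys for x in xs]

def detect_tiled(width, height, detect_fn, tile=640, overlap=64):
    """detect_fn(x0, y0, tile) -> tile-local boxes (x1, y1, x2, y2)."""
    global_boxes = []
    for x0, y0 in make_tiles(width, height, tile, overlap):
        for x1, y1, x2, y2 in detect_fn(x0, y0, tile):
            # Position mapping: shift tile-local coords by the tile offset.
            global_boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0))
    return global_boxes

def fake_detector(x0, y0, tile):
    """Stub: pretend one vehicle is found in the second tile only."""
    return [(10, 10, 50, 30)] if (x0, y0) == (576, 0) else []

boxes = detect_tiled(1280, 640, fake_detector)
print(boxes)  # [(586, 10, 626, 30)]
```

Because each tile is fed to the network at (or near) native resolution, small vehicles keep the pixel detail that whole-image downscaling would destroy.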
22

Cho, Sooyoung, Sang Geun Choi, Daeyeol Kim, Gyunghak Lee, and Chae BongSohn. "How to Generate Image Dataset based on 3D Model and Deep Learning Method." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 221. http://dx.doi.org/10.14419/ijet.v7i3.34.18969.

Abstract:
The performance of computer vision tasks has improved drastically with deep learning: object recognition, object segmentation, object tracking, and other tasks have approached super-human levels. Most of these algorithms are trained with supervised learning, and in general the performance of computer vision improves as the size of the data grows. The collected data were labeled and used as a data set for the YOLO algorithm. In this paper, we propose a data set generation method using Unity, one of the 3D engines; the proposed method makes it easy to obtain the data necessary for learning. We classify 2D polymorphic objects and test them against various data using deep learning models. In classification using a CNN and VGG-16, 90% accuracy was achieved, and using Tiny-YOLO of the YOLO family for object recognition we achieved 78% accuracy. Finally, comparing virtual and real environments, accuracies of 97 to 99 percent were obtained for each.
23

Guo Jinxiang, 郭进祥, 刘立波 Liu Libo, 徐峰 Xu Feng, and 郑斌 Zheng Bin. "Airport Scene Aircraft Detection Method Based on YOLO v3." Laser & Optoelectronics Progress 56, no. 19 (2019): 191003. http://dx.doi.org/10.3788/lop56.191003.

24

Zhang, Jingfeng, Yuanwei Hu, and Shujun Ji. "Angle steel tower bolt defect detection based on YOLO-V3." ITM Web of Conferences 45 (2022): 01013. http://dx.doi.org/10.1051/itmconf/20224501013.

Abstract:
Bolts in angle steel towers are seriously affected by corrosion and loss. This paper proposes a novel detection system based on YOLO-V3 that avoids the dangers of traditional manual inspection for bolt fault detection on angle steel towers. A multi-scale convolution module replaces the ordinary convolution of the original YOLO-V3 to capture spatial feature information at different scales in the image and enhance detection accuracy. The experimental results show that the mAP of the proposed YOLO-SKIP network is 0.91. Our YOLO-SKIP model achieves the best detection performance on the defective angle steel tower bolt data.
25

Ye, Jing, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li. "CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection." Sensors 22, no. 10 (May 16, 2022): 3782. http://dx.doi.org/10.3390/s22103782.

Abstract:
Infrared ocean ships detection still faces great challenges due to the low signal-to-noise ratio and low spatial resolution resulting in a severe lack of texture details for small infrared targets, as well as the distribution of the extremely multiscale ships. In this paper, we propose a CAA-YOLO to alleviate the problems. In this study, to highlight and preserve features of small targets, we apply a high-resolution feature layer (P2) to better use shallow details and the location information. In order to suppress the shallow noise of the P2 layer and further enhance the feature extraction capability, we introduce a TA module into the backbone. Moreover, we design a new feature fusion method to capture the long-range contextual information of small targets and propose a combined attention mechanism to enhance the ability of the feature fusion while suppressing the noise interference caused by the shallow feature layers. We conduct a detailed study of the algorithm based on a marine infrared dataset to verify the effectiveness of our algorithm, in which the AP and AR of small targets increase by 5.63% and 9.01%, respectively, and the mAP increases by 3.4% compared to that of YOLOv5.
26

Park, Jungsu, Jiwon Baek, Jongrack Kim, Kwangtae You, and Keugtae Kim. "Deep Learning-Based Algal Detection Model Development Considering Field Application." Water 14, no. 8 (April 14, 2022): 1275. http://dx.doi.org/10.3390/w14081275.

Abstract:
Algal blooms have various effects on drinking water supply systems; thus, proper monitoring is essential. Traditional visual identification using a microscope is a time-consuming method and requires extensive labor. Recently, advanced machine learning algorithms have been increasingly applied for the development of object detection models. The You-Only-Look-Once (YOLO) model is a novel machine learning algorithm used for object detection; it has been continuously improved in newer versions, with a tiny version presented for each standard model. The tiny versions apply a less complicated architecture with a smaller number of convolutional layers to enable faster object detection than the standard versions. This study compared the applicability of the YOLO models for algal image detection from a practical aspect, in terms of classification accuracy and inference time. Automated algal cell detection models were therefore developed using YOLO v3 and YOLO v4, with the tiny version of each model also applied. The cell images of 30 algal genera were used for training and testing the models. The model performances were compared using the mean average precision (mAP). The mAP values of the four models were 40.9, 88.8, 84.4, and 89.8 for YOLO v3, YOLO v3-tiny, YOLO v4, and YOLO v4-tiny, respectively, demonstrating that YOLO v4 is more precise than YOLO v3. The tiny version models presented noticeably higher model accuracy than the standard models, allowing up to ten times faster object detection. These results demonstrate the practical advantage of tiny version models for object detection with a limited number of object classes.
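The mAP metric used to rank the four models is the mean, over classes, of each class's average precision (AP). A minimal sketch of how per-class AP can be computed from confidence-ranked detections (this is a generic illustration, not the authors' evaluation code):

```python
def average_precision(flags, num_gt):
    """AP from detections sorted by descending confidence.

    flags[i] is True for a true positive, False for a false positive;
    num_gt is the number of ground-truth objects of this class."""
    precisions, recalls = [], []
    tp = 0
    for i, is_tp in enumerate(flags, start=1):
        tp += is_tp
        precisions.append(tp / i)
        recalls.append(tp / num_gt)
    # precision envelope: replace each precision by the max to its right
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # area under the interpolated precision-recall curve
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Averaging this value over all 30 algal genera would give the mAP figures the abstract reports.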
27

Lin, Shinfeng D., Tingyu Chang, and Wensheng Chen. "Multiple Object Tracking using YOLO-based Detector." Journal of Imaging Science and Technology 65, no. 4 (July 1, 2021): 40401–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2021.65.4.040401.

Abstract:
In computer vision, multiple object tracking (MOT) plays a crucial role in solving many important issues. A common approach to MOT is tracking by detection, which involves occlusions, motion prediction, and object re-identification. From the video frames, a set of detections is extracted to lead the tracking process. These detections are usually associated together to assign the same identification to bounding boxes holding the same target. In this article, MOT using a YOLO-based detector is proposed. Our method includes object detection, bounding box regression, and bounding box association. First, YOLOv3 is exploited as the object detector. Bounding box regression and association are then utilized to forecast the object's position. To justify our method, two open object tracking benchmarks, 2D MOT2015 and MOT16, were used. Experimental results demonstrate that our method is comparable to several state-of-the-art tracking methods, especially in the impressive results of MOT accuracy and correctly identified detections.
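The bounding-box association step, assigning the same identity to boxes covering the same target across frames, can be illustrated with a greedy highest-IoU matcher. This is a simplified sketch of the general technique (the paper's regression step and re-identification are omitted):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def associate(prev_boxes, curr_boxes, iou_thresh=0.3):
    """Greedily match current detections to previous ones by highest IoU.

    Returns {current_index: previous_index}; unmatched current detections
    would start new tracks."""
    pairs = sorted(((iou(p, c), pi, ci)
                    for pi, p in enumerate(prev_boxes)
                    for ci, c in enumerate(curr_boxes)), reverse=True)
    used_prev, matches = set(), {}
    for score, pi, ci in pairs:
        if score < iou_thresh:
            break
        if pi not in used_prev and ci not in matches:
            used_prev.add(pi)
            matches[ci] = pi
    return matches
```

Production trackers typically replace the greedy loop with Hungarian assignment and add motion prediction before matching.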
28

Li Chengyue, 李成跃, 姚剑敏 Yao Jianmin, 林志贤 Lin Zhixian, 严群 Yan Qun, and 范保青 Fan Baoqing. "Object Detection Method Based on Improved YOLO Lightweight Network." Laser & Optoelectronics Progress 57, no. 14 (2020): 141003. http://dx.doi.org/10.3788/lop57.141003.

29

Yin, Yunhua, Huifang Li, and Wei Fu. "Faster-YOLO: An accurate and faster object detection method." Digital Signal Processing 102 (July 2020): 102756. http://dx.doi.org/10.1016/j.dsp.2020.102756.

30

Sari, Filiz, and Ali Burak Ulas. "Deep Learning Application in Detecting Glass Defects with Color Space Conversion and Adaptive Histogram Equalization." Traitement du Signal 39, no. 2 (April 30, 2022): 731–36. http://dx.doi.org/10.18280/ts.390238.

Abstract:
Manually detecting defects on the surfaces of glass products is a slow and time-consuming part of the quality control process, so computer-aided systems including image processing and machine learning techniques are used to overcome this problem. In this study, scratch and bubble defects of jars, photographed in a studio with a white matte background and a -60° peak angle, are investigated with the Yolo-V3 deep learning technique. The performance obtained on the raw data is 94.65%. Color space conversion (CSC) techniques, HSV and CIE-Lab Luv, are applied to the resulting images, and the V channels are selected for preprocessing. While the HSV method decreases performance, an increase is observed with the CIE-Lab Luv method. With the CIE-Lab Luv method to which adaptive histogram equalization is applied, the maximum recall, precision, and F1-score all reach above 97%. Also, when Yolo-V3 is compared with Faster R-CNN, Yolo-V3 gives better results in all analyses, and the highest overall accuracy is achieved in both methods when adaptive histogram equalization is applied to CIE-Lab Luv.
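Histogram equalization, the building block of the adaptive variant used above, spreads a channel's grey levels so its cumulative distribution becomes approximately uniform. A minimal sketch of the global form on a single 8-bit channel (adaptive variants such as CLAHE apply the same idea per image tile, with a clip limit):

```python
def equalize_channel(pixels, levels=256):
    """Global histogram equalization of a flat list of 8-bit pixel values."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    # cumulative distribution function over grey levels
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    span = max(cdf[-1] - cdf_min, 1)
    # look-up table mapping each grey level to a spread-out value
    lut = [round((c - cdf_min) / span * (levels - 1)) for c in cdf]
    return [lut[v] for v in pixels]
```

In practice one would run an optimized library routine (e.g. a CLAHE implementation) on the chosen V channel rather than this pure-Python loop.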
31

Liu, Tao, Bo Pang, Shangmao Ai, and Xiaoqiang Sun. "Study on Visual Detection Algorithm of Sea Surface Targets Based on Improved YOLOv3." Sensors 20, no. 24 (December 18, 2020): 7263. http://dx.doi.org/10.3390/s20247263.

Abstract:
Countries around the world have paid increasing attention to marine security, and sea target detection is a key task in ensuring marine safety. It is therefore of great significance to propose an efficient and accurate sea-surface target detection algorithm. The anchor-setting method of the traditional YOLO v3 uses only the degree of overlap between the anchor and the ground-truth box as the standard. As a result, the information of some feature maps cannot be used, and the required accuracy of target detection is hard to achieve in a complex sea environment. Therefore, two new anchor-setting methods for the visual detection of sea targets are proposed in this paper: the average method and the select-all method. In addition, cross PANet, a feature fusion structure for cross-feature maps, was developed and used to obtain a better baseline, cross YOLO v3, where different anchor-setting methods were combined with a focal loss for experimental comparison on datasets of sea buoys and existing sea ships, SeaBuoys and SeaShips, respectively. The results showed that the method proposed in this paper significantly improves the accuracy of YOLO v3 in detecting sea-surface targets, with the highest mAP values on the two datasets being 98.37% and 90.58%, respectively.
32

Yu, Liya, Zheng Wang, and Zhongjing Duan. "Detecting Gear Surface Defects Using Background-Weakening Method and Convolutional Neural Network." Journal of Sensors 2019 (November 19, 2019): 1–13. http://dx.doi.org/10.1155/2019/3140980.

Abstract:
A novel, efficient, and accurate method to detect gear defects under a complex background during industrial gear production is proposed in this study. Firstly, we analyzed image filtering and smoothing techniques, which we used as a basis to develop a complex background-weakening algorithm for detecting the microdefects of gears. Subsequently, we discussed the types and characteristics of gear manufacturing defects. For the complex backgrounds encountered during image acquisition, a new model, S-YOLO, is proposed for online detection of gear defects, and it was validated on our experimental platform for online gear defect detection under a complex background. Results show that S-YOLO recognizes microdefects under a complex background better than the YOLOv3 target recognition network. The proposed algorithm also has good robustness. Code and data have been made available.
33

Fang, Yiming, Xianxin Guo, Kun Chen, Zhu Zhou, and Qing Ye. "Accurate and automated detection of surface knots on sawn timbers using YOLO-V5 model." BioResources 16, no. 3 (June 10, 2021): 5390–406. http://dx.doi.org/10.15376/biores.16.3.5390-5406.

Abstract:
Knot detection is a challenging problem for the wood industry. Traditional methodologies depend heavily on the features selected manually and therefore were not always accurate due to the variety of knot appearances. This paper proposes an automated framework for addressing the aforementioned problem by using the state-of-the-art YOLO-v5 (the fifth version of You Only Look Once) detector. The features of surface knots were learned and extracted adaptively, and then the knot defects were identified accurately even though the knots vary in terms of color and texture. The proposed method was compared with YOLO-v3 SPP and Faster R-CNN on two datasets. Experimental results demonstrated that YOLO-v5 model achieved the best performance for detecting surface knot defects. F-Score on Dataset 1 was 91.7% and that of Dataset 2 was up to 97.7%. Moreover, YOLO-v5 has clear advantages in terms of training speed and the size of the weight file. These advantages made YOLO-v5 more suitable for the detection of surface knots on sawn timbers and potential for timber grading.
34

Li, Yuanhong, Zuoxi Zhao, Yangfan Luo, and Zhi Qiu. "Real-Time Pattern-Recognition of GPR Images with YOLO v3 Implemented by Tensorflow." Sensors 20, no. 22 (November 12, 2020): 6476. http://dx.doi.org/10.3390/s20226476.

Abstract:
Artificial intelligence (AI) is widely used in pattern recognition and positioning. Most geological exploration applications need to locate and identify underground objects according to electromagnetic wave characteristics in ground-penetrating radar (GPR) images. Currently, few robust AI approaches can detect targets in GPR images in real time with high precision or automation. This paper proposes an approach based on you only look once (YOLO) v3 that can identify parabolic targets of different sizes as well as voids in underground soil or concrete structures. With TensorFlow 1.13.0, developed by Google, we construct a YOLO v3 neural network to realize real-time pattern recognition of GPR images. We propose a specific coding method for the GPR image samples in YOLO v3 to improve the prediction accuracy of bounding boxes. At the same time, the K-means algorithm is applied to select anchor boxes to improve the accuracy of positioning the hyperbolic vertex. In some instances, electromagnetically vacillated signals may occur, i.e., multiple parabolic electromagnetic waves formed by strongly conductive objects in the soil, or overlapping waveforms. To handle these, this paper introduces a vacillating-signal-similarity intersection over union (V-IoU) method. Experimental results show that V-IoU combined with non-maximum suppression (NMS) can accurately frame targets in GPR images and reduce misidentified boxes as well. Compared with the single shot multi-box detector (SSD), YOLO v2, and Faster-RCNN, the V-IoU YOLO v3 shows superior performance even when implemented on a CPU. It meets real-time output requirements with an average detected speed of 12 fps. In summary, this paper proposes a simple and high-precision real-time pattern recognition method for GPR imagery and promotes the application of artificial intelligence and deep learning in the geophysical sciences.
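K-means anchor selection, as mentioned above, clusters the (width, height) shapes of training boxes with the distance d = 1 - IoU, where boxes are compared as if aligned at a common corner. A minimal sketch of this general YOLO-style technique (the naive initialization and the toy data are assumptions, not the paper's setup):

```python
def iou_wh(a, b):
    """IoU of two boxes given as (w, h) pairs, aligned at a common corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iters=100):
    """YOLO-style anchor selection: k-means on box shapes with d = 1 - IoU."""
    anchors = [list(b) for b in boxes[:k]]  # naive init; k-means++ is more robust
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            # nearest anchor = smallest (1 - IoU), i.e. largest IoU
            best = max(range(k), key=lambda idx: iou_wh(box, anchors[idx]))
            clusters[best].append(box)
        new = [[sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c)]
               if c else anchors[j] for j, c in enumerate(clusters)]
        if new == anchors:
            break
        anchors = new
    return anchors
```

The resulting cluster centers become the detector's anchor boxes, so priors match the shapes that actually occur in the data.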
35

Lu, Junyan, Chi Ma, Li Li, Xiaoyan Xing, Yong Zhang, Zhigang Wang, and Jiuwei Xu. "A Vehicle Detection Method for Aerial Image Based on YOLO." Journal of Computer and Communications 06, no. 11 (2018): 98–107. http://dx.doi.org/10.4236/jcc.2018.611009.

36

Xianbao, Cheng, Qiu Guihua, Jiang Yu, and Zhu Zhaomin. "An improved small object detection method based on Yolo V3." Pattern Analysis and Applications 24, no. 3 (May 9, 2021): 1347–55. http://dx.doi.org/10.1007/s10044-021-00989-7.

37

Wang, Xiqi, Shunyi Zheng, Ce Zhang, Rui Li, and Li Gui. "R-YOLO: A Real-Time Text Detector for Natural Scenes with Arbitrary Rotation." Sensors 21, no. 3 (January 28, 2021): 888. http://dx.doi.org/10.3390/s21030888.

Abstract:
Accurate and efficient text detection in natural scenes is a fundamental yet challenging task in computer vision, especially when dealing with arbitrarily-oriented texts. Most contemporary text detection methods are designed to identify horizontal or approximately horizontal text, which cannot satisfy practical detection requirements for various real-world images such as image streams or videos. To address this lacuna, we propose a novel method called Rotational You Only Look Once (R-YOLO), a robust real-time convolutional neural network (CNN) model to detect arbitrarily-oriented texts in natural image scenes. First, a rotated anchor box with angle information is used as the text bounding box over various orientations. Second, features of various scales are extracted from the input image to determine the probability, confidence, and inclined bounding boxes of the text. Finally, Rotational Distance Intersection over Union Non-Maximum Suppression is used to eliminate redundancy and acquire detection results with the highest accuracy. Experiments on benchmark comparison are conducted upon four popular datasets, i.e., ICDAR2015, ICDAR2013, MSRA-TD500, and ICDAR2017-MLT. The results indicate that the proposed R-YOLO method significantly outperforms state-of-the-art methods in terms of detection efficiency while maintaining high accuracy; for example, the proposed R-YOLO method achieves an F-measure of 82.3% at 62.5 fps with 720 p resolution on the ICDAR2015 dataset.
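The final step described above suppresses redundant overlapping boxes by an IoU criterion. The paper's Rotational Distance IoU NMS additionally accounts for box angle; the sketch below is the standard axis-aligned greedy NMS, shown only to illustrate the underlying technique:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # drop every remaining box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

A rotated variant replaces `iou` with an intersection computed on the inclined polygons and folds the angle difference into the suppression criterion.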
38

Di, Jie, and Qing Li. "A method of detecting apple leaf diseases based on improved convolutional neural network." PLOS ONE 17, no. 2 (February 1, 2022): e0262629. http://dx.doi.org/10.1371/journal.pone.0262629.

Abstract:
Apple tree diseases have perplexed orchard farmers for years. Numerous studies have investigated deep learning for fruit and vegetable crop disease detection. Because of the complexity and variety of apple leaf veins and the difficulty of distinguishing similar diseases, a new target detection model for apple leaf diseases, DF-Tiny-YOLO, based on deep learning, is proposed to realize faster and more effective automatic detection of apple leaf diseases. Four common apple leaf diseases, comprising 1,404 images, were selected for data modeling and method evaluation, and three main improvements were made. First, feature reuse is realized through the DenseNet densely connected network to reduce the vanishing of deep gradients, thus strengthening feature propagation and improving detection accuracy. Second, we introduce Resize and Re-organization (Reorg) and conduct convolution kernel compression to reduce the model's computational parameters, improve detection speed, and allow feature stacking to achieve feature fusion. Third, the network terminal uses convolution kernels of 1 × 1, 1 × 1, and 3 × 3, in turn, to realize dimensionality reduction of features and increase network depth without increasing computational complexity, further improving detection accuracy. The results showed that the mean average precision (mAP) and average intersection over union (IoU) of the DF-Tiny-YOLO model were 99.99% and 90.88%, respectively, and the detection speed reached 280 FPS. Compared with the Tiny-YOLO and YOLOv2 network models, the new method proposed in this paper significantly improves detection performance. It can also detect apple leaf diseases quickly and effectively.
39

Du, Shuangjiang, Baofu Zhang, Pin Zhang, Peng Xiang, and Hong Xue. "FA-YOLO: An Improved YOLO Model for Infrared Occlusion Object Detection under Confusing Background." Wireless Communications and Mobile Computing 2021 (November 20, 2021): 1–10. http://dx.doi.org/10.1155/2021/1896029.

Abstract:
Infrared target detection is a popular applied field in object detection as well as a challenge. This paper proposes the focus and attention mechanism-based YOLO (FA-YOLO), which is an improved method to detect the infrared occluded vehicles in the complex background of remote sensing images. Firstly, we use GAN to create infrared images from the visible datasets to make sufficient datasets for training as well as using transfer learning. Then, to mitigate the impact of the useless and complex background information, we propose the negative sample focusing mechanism to focus on the confusing negative sample training to depress the false positives and increase the detection precision. Finally, to enhance the features of the infrared small targets, we add the dilated convolutional block attention module (dilated CBAM) to the CSPdarknet53 in the YOLOv4 backbone. To verify the superiority of our model, we carefully select 318 infrared occluded vehicle images from the VIVID-infrared dataset for testing. The detection accuracy-mAP improves from 79.24% to 92.95%, and the F1 score improves from 77.92% to 88.13%, which demonstrates a significant improvement in infrared small occluded vehicle detection.
40

Tang, Gang, Yichao Zhuge, Christophe Claramunt, and Shaoyang Men. "N-YOLO: A SAR Ship Detection Using Noise-Classifying and Complete-Target Extraction." Remote Sensing 13, no. 5 (February 26, 2021): 871. http://dx.doi.org/10.3390/rs13050871.

Abstract:
High-resolution images provided by synthetic aperture radar (SAR) play an increasingly important role in ship detection. Numerous algorithms have been proposed and competitive results achieved in detecting different targets. However, ship detection using SAR images is still challenging because the images are affected by varying degrees of noise, while inshore ships are affected by shore image contrasts. To solve these problems, this paper introduces a ship detection method called N-YOLO, which is based on You Only Look Once (YOLO). N-YOLO includes a noise level classifier (NLC), a SAR target potential area extraction module (STPAE), and a YOLOv5-based detection module. First, the NLC derives and classifies the noise level of SAR images. Second, the STPAE module, composed of a CA-CFAR detector and an expansion operation, extracts the complete region of potential targets. Third, the YOLOv5-based detection module combines the potential target area with the original image to get a new image. To evaluate the effectiveness of N-YOLO, experiments were conducted using a reference GaoFen-3 dataset. The detection results show that N-YOLO achieves competitive performance in comparison with several CNN-based algorithms.
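CA-CFAR (cell-averaging constant false alarm rate), the basis of the STPAE module, thresholds each cell under test against the mean power of its surrounding training cells, with guard cells excluded so the target does not contaminate the noise estimate. A minimal 1-D sketch of the general technique (the paper applies a 2-D version to SAR imagery; window sizes and the scale factor here are arbitrary assumptions):

```python
def ca_cfar_1d(signal, num_train=8, num_guard=2, scale=4.0):
    """Flag cells whose value exceeds scale * (mean of surrounding training cells)."""
    n = len(signal)
    half = num_train // 2 + num_guard
    detections = []
    for i in range(half, n - half):
        # training cells on both sides, skipping the guard cells next to cell i
        left = signal[i - half:i - num_guard]
        right = signal[i + num_guard + 1:i + half + 1]
        noise = (sum(left) + sum(right)) / (len(left) + len(right))
        if signal[i] > scale * noise:
            detections.append(i)
    return detections
```

Because the threshold tracks the local noise estimate, the false alarm rate stays roughly constant even as background clutter varies across the image.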
41

ye, Famao, Rengao Zhang, Yuchi Xing, Junwei Xin, and Dajun Li. "Remote Sensing Image Retrieval Based on Key Region Detection." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012110. http://dx.doi.org/10.1088/1742-6596/2216/1/012110.

Abstract:
Remote sensing (RS) images usually describe large-scale natural geographical scenes with complex and rich background information, which affects the retrieval performance of image features. How to reduce background interference and improve the reliability of remote sensing image retrieval (RSIR) features is a problem that needs to be solved. In this paper, an RSIR method based on key region detection is proposed. Firstly, the ground objects of the image are extracted by a well-known deep learning object detection model, YOLO v5. Next, we extract the key region of the image according to these ground objects. Then, the image content in the key region is used to extract the retrieval feature with a convolutional neural network (CNN) model, ResNet. Moreover, a weighted distance based on class probability is used to further improve retrieval performance. Our method thus uses the object detection capability of the YOLO model and the feature extraction capability of ResNet to extract the retrieval features of RS images. The experimental results on UCMD show that this method can improve the performance of RSIR.
42

Chen, Wen, Chengwei Ju, Yanzhou Li, Shanshan Hu, and Xi Qiao. "Sugarcane Stem Node Recognition in Field by Deep Learning Combining Data Expansion." Applied Sciences 11, no. 18 (September 17, 2021): 8663. http://dx.doi.org/10.3390/app11188663.

Abstract:
The rapid and accurate identification of sugarcane stem nodes in the complex natural environment is essential for the development of intelligent sugarcane harvesters. However, traditional sugarcane stem node recognition has been mainly based on image processing and recognition technology, where the recognition accuracy is low in a complex natural environment. In this paper, an object detection algorithm based on deep learning was proposed for sugarcane stem node recognition in a complex natural environment, and the robustness and generalisation ability of the algorithm were improved by the dataset expansion method to simulate different illumination conditions. The impact of the data expansion and lighting condition in different time periods on the results of sugarcane stem nodes detection was discussed, and the superiority of YOLO v4, which performed best in the experiment, was verified by comparing it with four different deep learning algorithms, namely Faster R-CNN, SSD300, RetinaNet and YOLO v3. The comparison results showed that the AP (average precision) of the sugarcane stem nodes detected by YOLO v4 was 95.17%, which was higher than that of the other four algorithms (78.87%, 88.98%, 90.88% and 92.69%, respectively). Meanwhile, the detection speed of the YOLO v4 method was 69 f/s and exceeded the requirement of a real-time detection speed of 30 f/s. The research shows that it is a feasible method for real-time detection of sugarcane stem nodes in a complex natural environment. This research provides visual technical support for the development of intelligent sugarcane harvesters.
43

ZHENG Xin, 郑欣, 田博 TIAN Bo, and 李晶晶 LI Jing-jing. "Intelligent recognition method of cervical cell cluster based on YOLO model." Chinese Journal of Liquid Crystals and Displays 33, no. 11 (2018): 965–71. http://dx.doi.org/10.3788/yjyxs20183311.0965.

44

Gao, Guohua, Shuangyou Wang, Ciyin Shuai, Zihua Zhang, Shuo Zhang, and Yongbing Feng. "Recognition and Detection of Greenhouse Tomatoes in Complex Environment." Traitement du Signal 39, no. 1 (February 28, 2022): 291–98. http://dx.doi.org/10.18280/ts.390130.

Abstract:
In the complex environment of greenhouses, it is important to provide picking robots with accurate information. For this purpose, this paper improves a recognition and detection method based on you only look once v5 (YOLO v5). Firstly, data enhancement is added to boost network generalizability, and on the input end, k-means clustering (KMC) is utilized to obtain more suitable anchors, aiming to increase detection accuracy. Secondly, multi-scale feature extraction is enhanced by improving the spatial pyramid pooling (SPP). Finally, non-maximum suppression (NMS) is optimized to improve the accuracy of the network. Experimental results show that the improved YOLO v5 achieved a mean average precision (mAP) of 97.3%, a recall of 90.5%, and an F1-score of 92.0%, while the original YOLO v5 had a mAP of 95.9% and a recall of 85.6%; the improved YOLO v5 took 57 ms to identify and detect each image. The recognition accuracy and speed of the improved YOLO v5 are much better than those of the faster region-based convolutional neural network (Faster R-CNN) and YOLO v3. The improved network was then applied to identify and detect images taken in unstructured environments with different illumination, branch/leaf occlusions, and overlapping fruits. The results show that the improved network has good robustness, providing stable and reliable information for the operation of tomato picking robots.
45

Son, Dong-Min, Yeong-Ah Yoon, Hyuk-Ju Kwon, Chang-Hyeon An, and Sung-Hak Lee. "Automatic Detection of Mandibular Fractures in Panoramic Radiographs Using Deep Learning." Diagnostics 11, no. 6 (May 22, 2021): 933. http://dx.doi.org/10.3390/diagnostics11060933.

Abstract:
Mandibular fracture is one of the most frequent injuries in oral and maxillo-facial surgery. Radiologists diagnose mandibular fractures using panoramic radiography and cone-beam computed tomography (CBCT). Panoramic radiography is a conventional imaging modality, which is less complicated than CBCT. This paper proposes the diagnosis method of mandibular fractures in a panoramic radiograph based on a deep learning system without the intervention of radiologists. The deep learning system used has a one-stage detection called you only look once (YOLO). To improve detection accuracy, panoramic radiographs as input images are augmented using gamma modulation, multi-bounding boxes, single-scale luminance adaptation transform, and multi-scale luminance adaptation transform methods. Our results showed better detection performance than the conventional method using YOLO-based deep learning. Hence, it will be helpful for radiologists to double-check the diagnosis of mandibular fractures.
46

Skalkin, Anton M., and Yuliya V. Stroeva. "THE METHOD OF DATA FLOW PROCESSING INCOMING FROM AN IP-CAMERA." Автоматизация процессов управления 4, no. 66 (2021): 39–45. http://dx.doi.org/10.35752/1991-2927-2021-4-66-39-45.

Abstract:
The article discusses methods for analyzing the video stream incoming from an IP camera using image analysis, machine learning, and knowledge engineering. For efficient data storage, a motion identification method was developed that determines the frame rate using a neural network (NN) classification of a block representation of the frame difference. In addition, a domain knowledge base model was developed to analyze the events taking place in the image. The article proposes a method of event analysis based on logical inference, where the results of the YOLO neural network are used as input data. The YOLO NN detects objects in the image, such as a person, a table, or a TV set. To confirm the method's effectiveness, a software system comprising two modules was designed. The first module tests the effectiveness of the motion identification method and saves the smallest number of frames necessary to determine the events taking place. The second module tests the effectiveness of the event analysis method, producing information on the state of monitored territories where video surveillance is performed. The input data for the second module are the frames saved by the first module.
47

Ismail, Aya, Marwa Elpeltagy, Mervat S. Zaki, and Kamal Eldahshan. "A New Deep Learning-Based Methodology for Video Deepfake Detection Using XGBoost." Sensors 21, no. 16 (August 10, 2021): 5413. http://dx.doi.org/10.3390/s21165413.

Abstract:
Currently, face-swapping deepfake techniques are widely spread, generating a significant number of highly realistic fake videos that threaten the privacy of people and countries. Due to their devastating impacts on the world, distinguishing between real and deepfake videos has become a fundamental issue. This paper presents a new deepfake detection method: you only look once–convolutional neural network–extreme gradient boosting (YOLO-CNN-XGBoost). The YOLO face detector is employed to extract the face area from video frames, while the InceptionResNetV2 CNN is utilized to extract features from these faces. These features are fed into the XGBoost that works as a recognizer on the top level of the CNN network. The proposed method achieves 90.62% of an area under the receiver operating characteristic curve (AUC), 90.73% accuracy, 93.53% specificity, 85.39% sensitivity, 85.39% recall, 87.36% precision, and 86.36% F1-measure on the CelebDF-FaceForencics++ (c23) merged dataset. The experimental study confirms the superiority of the presented method as compared to the state-of-the-art methods.
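The scores reported above are internally consistent: the F1-measure is the harmonic mean of precision and recall, which can be checked directly (a verification sketch, not the authors' code):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# plugging in the reported precision (87.36%) and recall (85.39%)
f1 = f1_score(0.8736, 0.8539)  # ~0.8636, matching the reported 86.36% F1-measure
```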
48

Chen, Haipeng, Zhentao He, Bowen Shi, and Tie Zhong. "Research on Recognition Method of Electrical Components Based on YOLO V3." IEEE Access 7 (2019): 157818–29. http://dx.doi.org/10.1109/access.2019.2950053.

49

Fang, Wei, Lin Wang, and Peiming Ren. "Tinier-YOLO: A Real-Time Object Detection Method for Constrained Environments." IEEE Access 8 (2020): 1935–44. http://dx.doi.org/10.1109/access.2019.2961959.

50

Oh, Kyeongmin, Yoseop Hong, Geongyeong Baek, and Chanjun Chun. "YOLO based Optical Music Recognition and Virtual Reality Content Creation Method." Korean Institute of Smart Media 10, no. 4 (December 31, 2021): 80–90. http://dx.doi.org/10.30693/smj.2021.10.4.80.
