
Journal articles on the topic 'Machine vision; Object tracking'

Consult the top 50 journal articles for your research on the topic 'Machine vision; Object tracking.'


1

Patil, Rupali, Adhish Velingkar, Mohammad Nomaan Parmar, Shubham Khandhar, and Bhavin Prajapati. "Machine Vision Enabled Bot for Object Tracking." JINAV: Journal of Information and Visualization 1, no. 1 (October 1, 2020): 15–26. http://dx.doi.org/10.35877/454ri.jinav155.

Abstract:
Object detection and tracking are essential and challenging tasks in many computer vision applications. To detect an object, the first step is to gather information about it. In this design, the robot can detect an object and track it: it can turn left and right and then move forward and backward depending on the object's motion, maintaining a constant separation between the object and the robot. We have designed a webpage that displays a live feed from the camera, and the camera can be controlled efficiently by the user. Machine learning is implemented for detection, together with OpenCV and cloud storage. A pan-tilt mechanism, attached to our three-wheel chassis robot through servo motors, is used for camera control. This idea can be used for surveillance, monitoring, and human-machine interaction.
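The turn-and-follow behaviour this abstract describes (steer toward the object, move to hold a constant separation) can be sketched as a simple proportional rule; the thresholds and the use of bounding-box width as a distance proxy are illustrative assumptions, not the authors' implementation.

```python
def track_command(frame_width, box_x, box_w, target_w=80, dead_band=0.1):
    """Map a detected bounding box to a (turn, drive) command.

    frame_width: camera frame width in pixels
    box_x, box_w: left edge and width of the detected box
    target_w: box width corresponding to the desired separation
              (a distance proxy -- an assumption for this sketch)
    """
    # Horizontal offset of the box centre, as a fraction of the frame.
    center_err = (box_x + box_w / 2 - frame_width / 2) / frame_width
    if center_err < -dead_band:
        turn = "left"
    elif center_err > dead_band:
        turn = "right"
    else:
        turn = "straight"
    # A larger box means the object is closer than desired.
    if box_w > target_w * 1.2:
        drive = "backward"
    elif box_w < target_w * 0.8:
        drive = "forward"
    else:
        drive = "stop"
    return turn, drive
```

For example, a small box left of centre yields `("left", "forward")`: turn toward the object and close the distance.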
2

Llano, Christian R., Yuan Ren, and Nazrul I. Shaikh. "Object Detection and Tracking in Real Time Videos." International Journal of Information Systems in the Service Sector 11, no. 2 (April 2019): 1–17. http://dx.doi.org/10.4018/ijisss.2019040101.

Abstract:
Object and human tracking in streaming videos is one of the most challenging problems in vision computing. In this article, we review relevant machine learning algorithms and techniques for human identification and tracking in videos. We detail the metrics and methods used in the computer vision literature for monitoring and propose a state-space representation of the object tracking problem. A proof-of-concept implementation of state-space object tracking using particle filters is presented as well. The proposed approach enables tracking of objects and humans in a video, including foreground/background separation for detecting object movement.
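As a rough illustration of the state-space idea above, the following sketch tracks a 1-D position with a bootstrap particle filter; the random-walk motion model and the noise settings are assumptions for the example, not the formulation used in the article.

```python
import numpy as np

def particle_filter_track(measurements, n_particles=2000, motion_std=1.0,
                          meas_std=2.0, seed=0):
    """Bootstrap particle filter over a scalar position: predict with a
    random-walk motion model, weight by measurement likelihood, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(measurements[0], meas_std, n_particles)
    estimates = []
    for z in measurements:
        # Predict: diffuse particles under the motion model.
        particles += rng.normal(0.0, motion_std, n_particles)
        # Update: Gaussian likelihood of each particle given measurement z.
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        w /= w.sum()
        # Resample particles in proportion to their weights.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    return estimates
```

On a noisy linear trajectory, the posterior mean stays close to the true position after a few frames.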
3

Zhang, Xiao Jing, Chen Ming Sha, and Ya Jie Yue. "A Fast Object Tracking Approach in Vision Application." Applied Mechanics and Materials 513-517 (February 2014): 3265–68. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3265.

Abstract:
Object tracking has always been a hot issue in vision applications; its application areas include video surveillance, human-machine interaction, virtual reality, and so on. In this paper, we introduce the mean-shift tracking algorithm, an important nonparametric estimation method, and then evaluate the tracking performance of the mean-shift algorithm on different video sequences.
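The core of mean shift, repeatedly moving an estimate to the mean of the samples inside a window around it, can be sketched in a few lines; the flat kernel and 1-D data are simplifying assumptions for illustration, not the tracker evaluated in the paper.

```python
import numpy as np

def mean_shift_mode(samples, start, bandwidth=1.0, tol=1e-4, max_iter=100):
    """Locate a density mode: shift the estimate to the mean of the
    samples falling within `bandwidth` of it (flat kernel) until it
    stops moving."""
    x = float(start)
    for _ in range(max_iter):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break  # no support under the window; give up
        new_x = window.mean()
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

In the image-tracking setting, the same iteration runs over a 2-D similarity surface (e.g. a colour-histogram back-projection) instead of 1-D samples.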
4

Liu, Liyun. "Moving Object Detection Technology of Line Dancing Based on Machine Vision." Mobile Information Systems 2021 (April 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/9995980.

Abstract:
In this paper, moving-object detection technology for line dancing based on machine vision is studied to improve object detection. For this purpose, an improved frame-difference background-modeling technique is combined with the target detection algorithm: the moving target is extracted, and morphological post-processing is carried out to make detection more accurate. On this basis, the tracking target is determined on the time axis during the tracking stage; the position of the target is found in each frame, the most similar target is found in each frame of the video sequence, and an association relationship is established to determine a moving-object template or feature. Using defined measurement criteria, the mean-shift algorithm searches for the optimal candidate target in each image frame and performs the corresponding matching to track the moving object. Experimental analysis shows that this method can detect the moving targets of line dancing in various areas, is not affected by position or distance, and consistently achieves accurate detection.
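The frame-difference step described above can be illustrated with plain NumPy; a real system would add background modelling and the morphological filtering the abstract mentions, which are omitted in this sketch.

```python
import numpy as np

def moving_bbox(prev, curr, thresh=25):
    """Return the bounding box (top, left, bottom, right) of pixels that
    changed between two grayscale frames, or None if nothing moved."""
    # Signed difference in a wider dtype to avoid uint8 wrap-around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1
```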
5

Wang, Yongqing, and Yanzhou Zhang. "OBJECT TRACKING BASED ON MACHINE VISION AND IMPROVED SVDD ALGORITHM." International Journal on Smart Sensing and Intelligent Systems 8, no. 1 (2015): 677–96. http://dx.doi.org/10.21307/ijssis-2017-778.

6

Jun, Mao. "Object Detection and Recognition Algorithm of Moving UAV Based on Machine Vision." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 7731–37. http://dx.doi.org/10.1166/jctn.2016.5770.

Abstract:
To overcome the drawbacks of the original distribution-field tracking algorithm, namely its slow speed and its tendency to become caught in local optima, this paper presents a real-time distribution-field tracking algorithm based on global matching, remarkably improving tracking performance. In the proposed algorithm, correlation coefficients replace the original L1-norm measure of similarity between the target distribution field and the candidate distribution field. The target search can therefore be converted from a time-domain operation to a frequency-domain one, which has lower computational complexity and can search globally for the target position, overcoming drawbacks such as the randomness caused by sparse sampling and the susceptibility of the original gradient-descent search to local optima. On 12 challenging video sequences, compared with the multiple-instance learning tracker and the original distribution-field tracker, the proposed method achieves the best performance in tracking accuracy, success rate, and speed.
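The frequency-domain global search described here rests on the correlation theorem: the whole cross-correlation surface can be computed with two FFTs and its global peak taken as the match. The sketch below is a generic FFT matcher, not the paper's distribution-field formulation.

```python
import numpy as np

def fft_match(image, template):
    """Locate `template` (zero-padded to `image`'s shape, content at the
    origin) by the peak of the circular cross-correlation, computed in
    the frequency domain."""
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real
    # Global maximum of the correlation surface = best match position.
    return np.unravel_index(np.argmax(corr), corr.shape)
```

Unlike gradient descent from a single starting point, the argmax over the full surface cannot be trapped in a local optimum of the search.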
7

Zhang, Zheng, Cong Huang, Fei Zhong, Bote Qi, and Binghong Gao. "Posture Recognition and Behavior Tracking in Swimming Motion Images under Computer Machine Vision." Complexity 2021 (May 20, 2021): 1–9. http://dx.doi.org/10.1155/2021/5526831.

Abstract:
This study explores gesture recognition and behavior tracking in swimming motion images under computer machine vision, expanding the application of machine-vision-based moving-target detection and tracking algorithms in this field. The objectives are realized through moving-target detection and tracking, a Gaussian mixture model, an optimized correlation filtering algorithm, and the Camshift tracking algorithm. First, the Gaussian algorithm is introduced into target tracking and detection to reduce filtering loss and make the acquired motion posture more accurate. Second, an improved kernel correlation filter tracking algorithm is proposed by training multiple filters, which can clearly and accurately obtain the motion trajectory of the monitored target. Finally, the Kalman algorithm is combined with the Camshift algorithm so that moving targets can be tracked and recognized. The experimental results show that the target tracking and detection method captures the movement form of the template object relatively completely, and the kernel correlation filter tracking algorithm also captures the movement speed of the target finely. In addition, the accuracy of the Camshift tracking algorithm reaches 86.02%. These results provide reliable data support and a reference for expanding the application of moving-target detection and tracking methods.
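The Kalman component of a Kalman/Camshift combination like the one above can be sketched as a constant-velocity filter over scalar positions; the noise settings below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=4.0):
    """Constant-velocity Kalman filter over scalar position measurements.
    State is [position, velocity]; returns the filtered positions."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

In the combined scheme, the Kalman prediction seeds the Camshift search window each frame and the Camshift result serves as the measurement.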
8

Akbari Sekehravani, Ehsan, Eduard Babulak, and Mehdi Masoodi. "Flying object tracking and classification of military versus nonmilitary aircraft." Bulletin of Electrical Engineering and Informatics 9, no. 4 (August 1, 2020): 1394–403. http://dx.doi.org/10.11591/eei.v9i4.1843.

Abstract:
Tracking moving objects in a sequence of images is one of the important and functional branches of machine vision technology, and detecting and tracking a flying object with unknown features is an important problem in this area. This paper consists of two basic parts. The first part involves tracking multiple flying objects: flying objects are detected and tracked using the particle filter algorithm. The second part classifies tracked objects as military or nonmilitary based on four criteria: the size (center of mass) of the object, the object's speed vector, its direction of motion, and thermal imagery identifying the type of tracked flying object. To demonstrate the efficiency and strength of the algorithm and the above system, several scenarios in different videos are investigated, covering challenges such as the number of objects (aircraft), different paths, diverse directions of motion, different speeds, and various object types. Among the most important challenges are processing speed and the imaging angle.
9

Delforouzi, Ahmad, Bhargav Pamarthi, and Marcin Grzegorzek. "Training-Based Methods for Comparison of Object Detection Methods for Visual Object Tracking." Sensors 18, no. 11 (November 16, 2018): 3994. http://dx.doi.org/10.3390/s18113994.

Abstract:
Object tracking in challenging videos is a hot topic in machine vision. Recently, novel training-based detectors, especially ones using powerful deep learning schemes, have been proposed to detect objects in still images. However, there is still a semantic gap between object detectors and higher-level applications like object tracking in videos. This paper presents a comparative study of outstanding learning-based object detectors such as ACF, Region-Based Convolutional Neural Network (RCNN), Fast RCNN, Faster RCNN and You Only Look Once (YOLO) for object tracking. We use both online and offline training methods for tracking. The online tracker trains the detectors with a synthetic set of images generated from the object of interest in the first frame; the detectors then detect the objects of interest in subsequent frames, and the detector is updated online using the objects detected in the most recent frames. The offline tracker uses the detector for object detection in still images, and a Kalman-filter-based tracker then associates the objects across video frames. Our research is performed on the TLD dataset, which contains challenging situations for tracking. Source code and implementation details for the trackers are published to enable both reproduction of the results reported in this paper and re-use and further development of the trackers by other researchers. The results demonstrate that the ACF and YOLO trackers show more stability than the other trackers.
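The association step an offline tracker performs, matching new detections to existing tracks, can be sketched with a greedy IoU matcher; real pipelines pair this with a Kalman filter, as the abstract describes, and often use optimal (Hungarian) rather than greedy assignment.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily match each track to its best unclaimed detection by IoU.
    Returns (track_index, detection_index) pairs."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, min_iou
        for di, d in enumerate(detections):
            if di not in used and iou(t, d) > best_iou:
                best, best_iou = di, iou(t, d)
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs
```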
10

Aziz, Nor Nadirah Abdul, Yasir Mohd Mustafah, Amelia Wong Azman, Amir Akramin Shafie, Muhammad Izad Yusoff, Nor Afiqah Zainuddin, and Mohammad Ariff Rashidan. "Features-Based Moving Objects Tracking for Smart Video Surveillances: A Review." International Journal on Artificial Intelligence Tools 27, no. 02 (March 2018): 1830001. http://dx.doi.org/10.1142/s0218213018300016.

Abstract:
Video surveillance is one of the most active research topics in computer vision due to the increasing need for security. Although surveillance systems are getting cheaper, the cost of having human operators monitor the video feed can be very high, and such monitoring is inefficient. To overcome this problem, automated visual surveillance systems can be used to detect any suspicious activity that requires immediate action. The framework of a video surveillance system encompasses a large scope of machine vision: background modelling, object detection, classification of moving objects, tracking, and motion analysis, and it requires fusion of information from the camera network. This paper reviews recent techniques used by researchers for the detection and tracking of moving objects in order to solve many surveillance problems. The features and algorithms used for modelling object appearance and tracking multiple objects in outdoor and indoor environments are also reviewed, and recent work on tracking moving objects in both single-camera and multiple-camera views is summarized. Nevertheless, despite the recent progress in surveillance technologies, there remain challenges to be solved before such systems can deliver reliable automated video surveillance.
11

Ren, Zi Cheng, Jaeho Choi, M. Ahmed, and Jae Ho Choi. "Scale and Orientation Adaptive Moving Object Tracking in a Sequence of Imageries." Advanced Materials Research 660 (February 2013): 190–95. http://dx.doi.org/10.4028/www.scientific.net/amr.660.190.

Abstract:
Object tracking has been researched for many years as an important topic in machine learning, robot vision and many other fields. Over the years, various tracking methods have been proposed and developed in order to obtain better tracking results. Among them, the mean-shift algorithm has proven robust and accurate compared with other algorithms across different kinds of tests, but due to its limitations, changes in the scale and rotational motion of an object cannot be handled effectively. This problem occurs when the object of interest moves towards or away from the video camera. Improving on previously proposed methods such as scale- and orientation-adaptive mean-shift tracking, which performs well under scaling changes but not under rotation, the proposed method modifies the continuously adaptive mean-shift tracking method so that it can effectively handle changes in size and rotation in motion simultaneously. The simulation results yield successful tracking of moving objects, in comparison with conventional methods, even when the object undergoes scaling and rotation.
12

Ekerol, H., and D. C. Hodgson. "A machine vision system for high speed object tracking using a moments algorithm." Mechatronics 2, no. 6 (December 1992): 555–65. http://dx.doi.org/10.1016/0957-4158(92)90044-o.

13

Zou, Yanbiao, Jinchao Li, and Xiangzhi Chen. "Seam tracking investigation via striped line laser sensor." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 609–17. http://dx.doi.org/10.1108/ir-11-2016-0294.

Abstract:
Purpose: This paper proposes a six-axis robot-arm welding seam tracking experiment platform based on the Halcon machine vision library to resolve the curve seam tracking issue.
Design/methodology/approach: Robot-based and image coordinate systems are converted based on the mathematical model of three-dimensional measurement with structured-light vision and the conversion relations between robot-based and camera coordinate systems. An object tracking algorithm using weighted local cosine similarity is adopted to detect the seam feature points and effectively prevent interference from arc and spatter. The algorithm models the target state variable and the corresponding observation vector within a Bayes framework and finds the optimal region with the highest similarity to the image-selected modules using cosine similarity.
Findings: The experimental results show that metal inert-gas (MIG) welding with a maximum welding current of 200 A can achieve real-time, accurate curve seam tracking under strong arc light and splash. The minimal distance between the laser stripe and the welding molten pool can reach 15 mm, and the sensor sampling frequency can reach 50 Hz.
Originality/value: A six-axis robot-arm welding seam tracking experiment platform with a structured-light sensor system based on the Halcon machine vision library is designed, and an object tracking algorithm is added to the seam tracking system to detect image feature points. With this technology, the system can track a curved seam while welding.
14

Shilpa, Mohan Kumar, et al. "An Effective Framework Using Region Merging and Learning Machine for Shadow Detection and Removal." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 2506–14. http://dx.doi.org/10.17762/turcomat.v12i2.2098.

Abstract:
Moving cast shadows significantly degrade the performance of many high-level computer vision applications such as object tracking, object classification, behavior recognition and scene interpretation. Because they share similar motion characteristics with their objects, moving cast shadow detection is still challenging. In this paper, the foreground is detected by background subtraction and the shadow by a combination of mean-shift and region-merging segmentation. Using the Gabor method, we obtain the moving targets with texture features. According to the characteristics of shadows in HSV space and the texture features, the shadow is detected and removed to eliminate shadow interference in the subsequent processing of moving targets. Finally, to guarantee the integrity of shadows and objects for further image processing, a simple post-processing procedure is designed to refine the results, which also drastically improves the accuracy of moving shadow detection. Extensive experiments on common public datasets show that the performance of the proposed framework is superior to representative state-of-the-art methods.
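A common pixel-level shadow test in HSV, in the spirit of the approach above, flags a foreground pixel as cast shadow when it is a darkened version of the background with nearly unchanged hue and saturation. The thresholds below are illustrative assumptions, not those of the paper.

```python
def is_shadow(fg_hsv, bg_hsv, v_lo=0.4, v_hi=0.9, s_tol=0.15, h_tol=30):
    """Classify a foreground pixel as cast shadow relative to background.
    Each pixel is (h, s, v): h in degrees [0, 360), s and v in [0, 1].
    Shadow: value attenuated into [v_lo, v_hi] of the background value,
    with saturation and hue nearly unchanged."""
    fh, fs, fv = fg_hsv
    bh, bs, bv = bg_hsv
    if bv == 0:
        return False
    ratio = fv / bv
    dh = abs(fh - bh)
    dh = min(dh, 360 - dh)  # hue is circular
    return v_lo <= ratio <= v_hi and abs(fs - bs) <= s_tol and dh <= h_tol
```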
15

Gupta, Arpit. "Simulation and Detection of Small Drones/Suspicious UAVs in Drone Grid." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 5452–58. http://dx.doi.org/10.22214/ijraset.2021.36144.

Abstract:
Today's technology is evolving towards autonomous systems, and the demand for autonomous drones, cars, robots, etc. has increased drastically in recent years. This project presents a solution for autonomous real-time visual detection and tracking of hostile drones by moving cameras mounted on surveillance drones. The algorithm developed in this project, based on state-of-the-art machine learning and computer vision methods, succeeds at autonomously detecting and tracking a single drone with a moving camera and runs in real time. The project can be divided into two main parts: detection and tracking. Detection is based on the YOLOv3 (You Only Look Once v3) algorithm and a sliding-window method. Tracking is based on the GOTURN (Generic Object Tracking Using Regression Networks) algorithm, which allows tracking of generic objects at high speed. To allow autonomous tracking and enhance accuracy, a combination of GOTURN and tracking-by-detection using YOLOv3 was developed.
16

Ciric, Ivan, Zarko Cojbasic, Vlastimir Nikolic, Tomislav Igic, and Branko Tursnek. "Intelligent optimal control of thermal vision-based Person-Following Robot Platform." Thermal Science 18, no. 3 (2014): 957–66. http://dx.doi.org/10.2298/tsci1403957c.

Abstract:
In this paper, supervisory control of a person-following robot platform is presented. The main part of the high-level control loop of the mobile robot platform is a real-time, robust algorithm for human detection and tracking. The main goal was to enable the mobile robot platform to recognize a person in an indoor environment and to localize the person with accuracy high enough to allow adequate human-robot interaction. The developed computationally intelligent control algorithm enables robust and reliable human tracking by the mobile robot platform. The core of the proposed recognition methods is genetic optimization of threshold segmentation and classification of detected regions of interest in every frame acquired by a thermal vision camera. A support vector machine classifier determines whether a segmented object is human based on features extracted from the processed thermal image, independently of current light conditions and in situations where no skin color is visible. Variation in temperature across the same object, air flow with different temperature gradients, persons overlapping while crossing each other, and reflections pose challenges in thermal imaging and have to be handled intelligently in order to obtain efficient performance from the motion tracking system.
17

Ahmad, Misbah, Imran Ahmed, Fakhri Alam Khan, Fawad Qayum, and Hanan Aljuaid. "Convolutional neural network–based person tracking using overhead views." International Journal of Distributed Sensor Networks 16, no. 6 (June 2020): 155014772093473. http://dx.doi.org/10.1177/1550147720934738.

Abstract:
In video surveillance, person tracking is considered a challenging task. Numerous computer vision, machine learning and deep learning-based techniques have been developed in recent years, the majority of them based on frontal-view images or video sequences. The advancement of convolutional neural networks has reformed the way objects are tracked: the layers of convolutional neural network models trained on large numbers of images or video sequences improve the speed and accuracy of object tracking. In this work, the generalization performance of existing pre-trained deep learning models is investigated for overhead-view person detection and tracking under different experimental conditions. The object tracking method Generic Object Tracking Using Regression Networks (GOTURN), which has been yielding outstanding tracking results in recent years, is explored for person tracking using overhead views. This work mainly focuses on overhead-view person tracking using a Faster region-based convolutional neural network (Faster-RCNN) in combination with the GOTURN architecture: the person is first identified in overhead-view video sequences and then tracked using the GOTURN tracking algorithm. The Faster-RCNN detection model achieved a true detection rate ranging from 90% to 93% with a minimum false detection rate of up to 0.5%. The GOTURN tracking algorithm achieved similar results, with a success rate ranging from 90% to 94%. Finally, the output results are discussed along with future directions.
18

Yi, Yugen, Jiangyan Dai, Chengduan Wang, Jinkui Hou, Huihui Zhang, Yunlong Liu, and Jin Gao. "An Effective Framework Using Spatial Correlation and Extreme Learning Machine for Moving Cast Shadow Detection." Applied Sciences 9, no. 23 (November 22, 2019): 5042. http://dx.doi.org/10.3390/app9235042.

Abstract:
Moving cast shadows of moving objects significantly degrade the performance of many high-level computer vision applications such as object tracking, object classification, behavior recognition and scene interpretation. Because they possess similar motion characteristics with their objects, moving cast shadow detection is still challenging. In this paper, we present a novel moving cast-shadow detection framework based on the extreme learning machine (ELM) to efficiently distinguish shadow points from the foreground object. First, according to the physical model of shadows, pixel-level features of different channels in different color spaces and region-level features derived from the spatial correlation of neighboring pixels are extracted from the foreground. Second, an ELM-based classification model is developed by labelled shadow and un-shadow points, which is able to rapidly distinguish the points in the new input whether they belong to shadows or not. Finally, to guarantee the integrity of shadows and objects for further image processing, a simple post-processing procedure is designed to refine the results, which also drastically improves the accuracy of moving shadow detection. Extensive experiments on two publicly common datasets including 13 different scenes demonstrate that the performance of the proposed framework is superior to representative state-of-the-art methods.
19

Wei, Zhao, and Wei YongBin. "Non-contact measurement method of bridge deflection based on machine vision." E3S Web of Conferences 261 (2021): 02001. http://dx.doi.org/10.1051/e3sconf/202126102001.

Abstract:
Bridge deflection is a very important parameter of a bridge structure: it directly reflects the vertical overall stiffness of the structure, is an important basis for assessing linear change of the bridge, and is closely related to the bridge's bearing capacity and its ability to resist earthquakes and other dynamic loads. Computer vision obtains image data through an image-acquisition device and uses image-processing software to extract the shape and displacement of an object; this method has high precision, low cost and simple operation. In this paper, based on an industrial camera, the optical-flow method and a sub-pixel corner detection algorithm are used for target tracking and sub-pixel detection. Through comparison between laboratory data and micro pan-tilt data, the usability of this method for bridge-deflection monitoring is verified and evaluated.
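A standard way to push a pixel-level response peak to sub-pixel accuracy, in the spirit of the sub-pixel detection used above, is to fit a parabola through the peak sample and its two neighbours; this generic sketch is an assumption for illustration, not the paper's exact algorithm.

```python
def subpixel_peak(y_left, y_peak, y_right):
    """Parabolic interpolation: given the response at the integer peak
    and at its two neighbours, return the fractional offset of the true
    maximum relative to the integer peak (in (-0.5, 0.5) for a genuine
    interior peak)."""
    denom = y_left - 2.0 * y_peak + y_right
    if denom == 0:
        return 0.0  # degenerate (flat) neighbourhood
    return 0.5 * (y_left - y_right) / denom
```

For a response that is locally quadratic, the recovered offset is exact: sampling f(x) = 10 - (x - 3.3)^2 at x = 2, 3, 4 puts the integer peak at 3 and the interpolated offset at 0.3.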
20

Zhang, Shunli, Wei Lu, Weiwei Xing, and Li Zhang. "Using fuzzy least squares support vector machine with metric learning for object tracking." Pattern Recognition 84 (December 2018): 112–25. http://dx.doi.org/10.1016/j.patcog.2018.07.012.

21

Wei, Bingsheng, and Martin Barczyk. "Experimental Evaluation of Computer Vision and Machine Learning-Based UAV Detection and Ranging." Drones 5, no. 2 (May 9, 2021): 37. http://dx.doi.org/10.3390/drones5020037.

Abstract:
We consider the problem of vision-based detection and ranging of a target UAV using the video feed from a monocular camera onboard a pursuer UAV. Our previously published work in this area employed a cascade classifier algorithm to locate the target UAV, which was found to perform poorly in complex background scenes. We thus study the replacement of the cascade classifier with newer machine learning-based object detection algorithms. Five candidate algorithms are implemented and quantitatively tested in terms of their efficiency (measured as frames-per-second processing rate), accuracy (measured as the root mean squared error between ground truth and detected location), and consistency (measured as mean average precision) in a variety of flight patterns, backgrounds, and test conditions. Assigning relative weights of 20%, 40% and 40% to these three criteria, we find that when flying over a white background, the top three performers are YOLO v2 (76.73 out of 100), Faster RCNN v2 (63.65 out of 100), and Tiny YOLO (59.50 out of 100), while over a realistic background, the top three performers are Faster RCNN v2 (54.35 out of 100), SSD MobileNet v1 (51.68 out of 100) and SSD Inception v2 (50.72 out of 100), leading us to recommend Faster RCNN v2 as the overall solution. We then provide a roadmap for further work on integrating the object detector into our vision-based UAV tracking system.
22

Bahraini, Masoud S., Ahmad B. Rad, and Mohammad Bozorg. "SLAM in Dynamic Environments: A Deep Learning Approach for Moving Object Tracking Using ML-RANSAC Algorithm." Sensors 19, no. 17 (August 26, 2019): 3699. http://dx.doi.org/10.3390/s19173699.

Abstract:
The important problem of Simultaneous Localization and Mapping (SLAM) in dynamic environments is less studied than the counterpart problem in static settings. In this paper, we present a solution to the feature-based SLAM problem in dynamic environments. We propose an algorithm that integrates SLAM with multi-target tracking (SLAMMTT) using a robust feature-tracking algorithm for dynamic environments. A novel implementation of the RANdom SAmple Consensus (RANSAC) method, referred to as multilevel-RANSAC (ML-RANSAC), within the Extended Kalman Filter (EKF) framework is applied for multi-target tracking (MTT). We also apply machine learning to detect features from the input data and to distinguish moving from stationary objects. The data streams from LIDAR and vision sensors are fused in real time to detect objects and depth information. A practical experiment is designed to verify the performance of the algorithm in a dynamic environment. The unique feature of this algorithm is its ability to maintain tracking of features even when observations are intermittent, a situation in which many reported algorithms fail. Experimental validation indicates that the algorithm is able to produce consistent estimates in a fast and robust manner, suggesting its feasibility for real-time applications.
23

López-Sastre, Roberto, Carlos Herranz-Perdiguero, Ricardo Guerrero-Gómez-Olmedo, Daniel Oñoro-Rubio, and Saturnino Maldonado-Bascón. "Boosting Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor." Sensors 19, no. 19 (September 20, 2019): 4062. http://dx.doi.org/10.3390/s19194062.

Abstract:
In this work, we address the problem of multi-vehicle detection and tracking for traffic monitoring applications. We present a novel intelligent visual sensor for tracking-by-detection with simultaneous pose estimation. Essentially, we adapt an Extended Kalman Filter (EKF) to work not only with detections of the vehicles but also with their estimated coarse viewpoints, directly obtained with the vision sensor. We show that enhancing the tracking with observations of vehicle pose results in a better estimation of the vehicles' trajectories. For the simultaneous object detection and viewpoint estimation task, we present and evaluate two independent solutions. One is based on a fast GPU implementation of a Histogram of Oriented Gradients (HOG) detector with Support Vector Machines (SVMs). For the second, we suitably modify and train the Faster R-CNN deep learning model in order to recover from it not only the object localization but also an estimation of its pose. Finally, we publicly release a challenging dataset, the GRAM Road Traffic Monitoring (GRAM-RTM), which has been especially designed for evaluating multi-vehicle tracking approaches within the context of traffic monitoring applications. It comprises more than 700 unique vehicles annotated across more than 40,300 frames of three videos. We expect the GRAM-RTM to become a benchmark in vehicle detection and tracking, providing the computer vision and intelligent transportation systems communities with a standard set of images, annotations and evaluation procedures for multi-vehicle tracking. We present a thorough experimental evaluation of our approaches on the GRAM-RTM, which will be useful for establishing further comparisons. The results obtained confirm that the simultaneous integration of vehicle localizations and pose estimations as observations in an EKF improves the tracking results.
24

Zhu, Li, and Hang Hu. "Research of Motion Tracking Based on CamShift Algorithm." Applied Mechanics and Materials 263-266 (December 2012): 2403–7. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2403.

Full text
Abstract:
At present, object detection, identification and tracking based on machine vision technology are widely used in various economic sectors. In this paper, a real-time monitoring system is discussed. This surveillance system mainly combines two sub-systems: motion detection and object tracking. An Adaptive Gaussian Background Model was established in order to automatically update the background and detect the outline of moving objects. By analyzing different algorithms, this paper proposes approaches to improve performance. We propose the CamShift algorithm to perform motion detection and object tracking, applied to static-background video sequences. The experimental results show that our method achieves the predetermined targets.
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Joy Iong-Zong, and Jen-Ting Chang. "Applying a 6-axis Mechanical Arm Combine with Computer Vision to the Research of Object Recognition in Plane Inspection." June 2020 2, no. 2 (May 18, 2020): 77–99. http://dx.doi.org/10.36548/jaicn.2020.2.002.

Full text
Abstract:
This paper presents a 3D-printed robotic arm that combines a computer vision system with a tracking algorithm. It also presents the design of an electromechanically integrated intelligent vehicle system intended for operations in various fields. The main purpose of this work is to avoid the complicated process of traditional manual adjustment or teaching. The robotic arm is expected to grasp the target automatically, classify it and place it in a specified area, and even learn to distinguish the characteristics of the target accurately through training. Eventually, the mechanical arm's movement is corrected through a real-time image-feedback control system. In the experiments, the computer vision system assists the robotic arm in detecting the color and position of the target. By adding color features for algorithm training, as well as through human-machine collaboration, the results show that target-tracking accuracy depends on two parameters, the "object location" and the "illumination direction" of the light source, with accuracy ranging from 75.2% to 89.0%.
APA, Harvard, Vancouver, ISO, and other styles
26

Winmalar D, Haretha, Vani A K, Sudharsan R, and Hari Krishna R. "Generalized Omnipresence Detection (GOD)." Journal of Innovative Image Processing 2, no. 2 (June 5, 2020): 85–92. http://dx.doi.org/10.36548/jiip.2020.2.003.

Full text
Abstract:
Identification and tracking of a person in a video are useful in applications such as video surveillance. Two levels of tracking are carried out: classification and monitoring of individuals. The human body’s color histogram is used as the basis for monitoring individuals. Our project can detect a human face in a video and store the detected facial features as a Local Binary Pattern Histogram (LBPH). Once a person is detected in a video, the system automatically tracks that individual and assigns a label to them. We use the stored LBPH features to track the person in other videos. In this paper, we propose and compare the efficiency of two algorithms: one constantly updates the background to cope with illumination changes, and the other uses depth information together with RGB. This is the first step in many complex algorithms in computer vision, such as the identification of human activity and behavior recognition. The main challenges in human/object detection and tracking are changing illumination and background. Our work is based on image processing; it also learns activities and stores them using machine learning with the help of OpenCV, an open-source computer vision library.
APA, Harvard, Vancouver, ISO, and other styles
27

Konwar, Lakhyadeep, Anjan Kumar Talukdar, Kandarpa Kumar Sarma, Navajit Saikia, and Subhash Chandra Rajbangshi. "Segmentation and Selective Feature Extraction for Human Detection to the Direction of Action Recognition." International Journal of Circuits, Systems and Signal Processing 15 (September 8, 2021): 1371–86. http://dx.doi.org/10.46300/9106.2021.15.147.

Full text
Abstract:
Detection as well as classification of different objects for machine vision applications is a challenging task. As with other object detection and classification tasks, the human detection concept plays a major role in the advancement of the design of an automatic visual surveillance system (AVSS). If human detection and tracking, human action recognition, and usual as well as unusual event recognition can be included in future AVSS, it will be a great success in the transformable world. In this paper we propose a proper human detection and tracking technique for human action recognition toward the design of AVSS. Here we use a median filter for noise removal, graph cut to segment the human images, mathematical morphology to refine the segmentation mask, HOG to extract selective feature points, SVM with a polynomial kernel to classify human objects, and finally a particle filter to track the detected humans. Owing to the above combination, our system is independent of variations in lighting conditions, color, shape, size, clothing, etc., and can handle occlusion. Our system can easily detect and track humans in different indoor as well as outdoor environments, with an automatic multiple-human detection rate of 97.61% and a total multiple-human detection and tracking accuracy of about 92% for AVSS. Because HOG features are extracted after the graph cut segmentation operation, our system requires less memory to store the trained data; therefore, the processing speed as well as the accuracy of detection and tracking is better than that of other techniques, which makes it suitable for the action classification task.
APA, Harvard, Vancouver, ISO, and other styles
28

LANG, HAOXIANG, and CLARENCE W. DE SILVA. "FAULT DIAGNOSIS OF AN INDUSTRIAL MACHINE THROUGH SENSOR FUSION." International Journal of Information Acquisition 05, no. 02 (June 2008): 93–110. http://dx.doi.org/10.1142/s0219878908001521.

Full text
Abstract:
In this paper, a four layer neuro-fuzzy architecture of multi-sensor fusion is developed for a fault diagnosis system which is applied to an industrial fish cutting machine. An important characteristic of the fault diagnosis approach developed in this paper is to make an accurate decision of the machine condition by fusing information acquired from three types of sensors: Accelerometer, microphone and charge-coupled device (CCD) camera. Feature vectors for vibration and sound signals from their fast Fourier transform (FFT) frequency spectra are defined and extracted from the acquired information. A feature-based vision method is applied for object tracking in the machine, to detect and track the fish moving on the conveyor. A four-layer neural network including a fuzzy hidden layer is developed in the paper to analyze and diagnose existing faults. Feature vectors of vibration, sound and vision are provided as inputs to the neuro-fuzzy network for fault detection and diagnosis. By proper training of the neural network using data samples for typical faults, six crucial faults in the fish cutting machine are detected with high reliability and robustness. On this basis, not only the condition of the machine can be determined for possible retuning and maintenance, but also alarms to warn about impending faults may be generated during the machine operation.
APA, Harvard, Vancouver, ISO, and other styles
29

Chang, Ching-Yuan, En-Chieh Chang, and Chi-Wen Huang. "In Situ Diagnosis of Industrial Motors by Using Vision-Based Smart Sensing Technology." Sensors 19, no. 24 (December 4, 2019): 5340. http://dx.doi.org/10.3390/s19245340.

Full text
Abstract:
This study uses machine vision, feature extraction, and support vector machine (SVM) to compose a vibration monitoring system (VMS) for an in situ evaluation of the performance of industrial motors. The vision-based system respectively offers a spatial and temporal resolution of 1.4 µm and 16.6 ms after the image calibration and the benchmark of a laser displacement sensor (LDS). The embedded program of machine vision has used zero-mean normalized correlation (ZNCC) and peak finding (PF) for tracking the registered characteristics on the object surface. The calibrated VMS provides time–displacement curves related to both horizontal and vertical directions, promising remote inspections of selected points without attaching additional markers or sensors. The experimental setup of the VMS is cost-effective and uncomplicated, supporting universal combinations between the imaging system and computational devices. The procedures of the proposed scheme are (1) setting up a digital camera, (2) calibrating the imaging system, (3) retrieving the data of image streaming, (4) executing the ZNCC criteria, and providing the time–displacement results of selected points. The experiment setup of the proposed VMS is straightforward and can cooperate with surveillances in industrial environments. The embedded program upgrades the functionality of the camera system from the events monitoring to remote measurement without the additional cost of attaching sensors on motors or targets. Edge nodes equipped with the image-tracking program serve as the physical layer and upload the extracted features to a cloud server via the wireless sensor network (WSN). The VMS can provide customized services under the architecture of the cyber–physical system (CPS), and this research offers an early warning alarm of the mechanical system before unexpected downtime. 
Based on the smart sensing technology, the in situ diagnosis of industrial motors given from the VMS enables preventative maintenance and contributes to the precision measurement of intelligent automation.
APA, Harvard, Vancouver, ISO, and other styles
30

Jose, John Anthony C., Meygen D. Cruz, Jefferson James U. Keh, Maverick Rivera, Edwin Sybingco, and Elmer P. Dadios. "Anno-Mate: Human–Machine Collaboration Features for Fast Annotation." Journal of Advanced Computational Intelligence and Intelligent Informatics 25, no. 4 (July 20, 2021): 404–9. http://dx.doi.org/10.20965/jaciii.2021.p0404.

Full text
Abstract:
Large annotated datasets are crucial for training deep machine learning models, but they are expensive and time-consuming to create. There are already numerous public datasets, but a vast amount of unlabeled data, especially video data, can still be annotated and leveraged to further improve the performance and accuracy of machine learning models. Therefore, it is essential to reduce the time and effort required to annotate a dataset to prevent bottlenecks in the development of this field. In this study, we propose Anno-Mate, a pair of features integrated into the Computer Vision Annotation Tool (CVAT). It facilitates human–machine collaboration and reduces the required human effort. Anno-Mate comprises Auto-Fit, which uses an EfficientDet-D0 backbone to tighten an existing bounding box around an object, and AutoTrack, which uses a channel and spatial reliability tracking (CSRT) tracker to draw a bounding box on the target object as it moves through the video frames. Both features exhibit a good speed and accuracy trade-off. Auto-Fit garnered an overall accuracy of 87% and an average processing time of 0.47 s, whereas the AutoTrack feature exhibited an overall accuracy of 74.29% and could process 18.54 frames per second. When combined, these features are proven to reduce the time required to annotate a minute of video by 26.56%.
APA, Harvard, Vancouver, ISO, and other styles
31

Ciccotelli, Joseph, Michel Dufaut, and René Husson. "Control of tracking systems by image correlation." Robotica 5, no. 3 (July 1987): 201–6. http://dx.doi.org/10.1017/s0263574700015848.

Full text
Abstract:
SUMMARY: Owing to advances in machine vision, it is now possible to study automatic gripping of moving parts. This complex task requires a precise knowledge of the displacements of objects in a camera field. In this paper, a method to analyse the motion of parts is presented; it is based on the correlation of numerical images. The treatment of data provided by the image background makes this method quite original. The utilization of this method, often considered as rather awkward, makes it possible, in this case, to develop a position feedback operation of the robot actuators controlled in an open loop (stepper motors).
APA, Harvard, Vancouver, ISO, and other styles
32

Volkov, Vladimir Yu, Oleg A. Markelov, and Mikhail I. Bogachev. "IMAGE SEGMENTATION AND OBJECT SELECTION BASED ON MULTI-THRESHOLD PROCESSING." Journal of the Russian Universities. Radioelectronics 22, no. 3 (July 2, 2019): 24–35. http://dx.doi.org/10.32603/1993-8985-2019-22-3-24-35.

Full text
Abstract:
Introduction. Detection, isolation, selection and localization of variously shaped objects in images are essential in a variety of applications. Computer vision systems utilizing television and infrared cameras, synthetic aperture surveillance radars as well as laser and acoustic remote sensing systems are prominent examples. Such problems as object identification, tracking and matching as well as combining information from images available from different sources are essential. Objective. Design of image segmentation and object selection methods based on multi-threshold processing. Materials and methods. The segmentation methods are classified according to the objects they deal with, including (i) pixel-level threshold estimation and clustering methods, (ii) boundary detection methods, (iii) regional-level methods, and (iv) other classifiers, including many non-parametric methods, such as machine learning, neural networks, fuzzy sets, etc. The key feature of the proposed approach is that the choice of the optimal threshold for image segmentation among a variety of test methods is carried out using a posteriori information about the selection results. Results. The results of the proposed approach are compared against the results obtained using the well-known binary integration method. The comparison is carried out both using simulated objects with known shapes with additive synthesized noise as well as using observational remote sensing imagery. Conclusion. The article discusses the advantages and disadvantages of the proposed approach for the selection of objects in images, and provides recommendations for their use.
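The key idea, choosing among many candidate thresholds using a posteriori information about the selection result, can be sketched in a few lines of NumPy; the "expected object area" criterion below is an invented stand-in for the paper's actual selection rule, and the image is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(80, 10, (64, 64))              # noisy background
img[20:40, 20:40] += 60                         # one bright object, 400 pixels

expected_area = 400                             # a posteriori knowledge of the target
best_t, best_err = None, float("inf")
for t in range(90, 150, 5):                     # multi-threshold processing
    area = int((img > t).sum())                 # segment at this test threshold
    err = abs(area - expected_area)             # score the selection result
    if err < best_err:
        best_t, best_err = t, err
print(best_t, best_err)
```

Low thresholds admit hundreds of noise pixels and high thresholds erode the object, so the sweep settles on an intermediate value where the segmented area matches the expectation.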
APA, Harvard, Vancouver, ISO, and other styles
33

Sadeghi-Niaraki, Abolghasem, and Soo-Mi Choi. "A Survey of Marker-Less Tracking and Registration Techniques for Health & Environmental Applications to Augmented Reality and Ubiquitous Geospatial Information Systems." Sensors 20, no. 10 (May 25, 2020): 2997. http://dx.doi.org/10.3390/s20102997.

Full text
Abstract:
Most existing augmented reality (AR) applications are suitable for cases in which only a small number of real-world entities are involved, such as superimposing a character on a single surface. In this case, we only need to calculate the pose of the camera relative to that surface. However, when an AR health or environmental application involves a one-to-one relationship between an entity in the real world and the corresponding object in the computer model (geo-referenced object), we need to estimate the pose of the camera in reference to a common coordinate system for better geo-referenced object registration in the real world. New innovations in developing cheap sensors, computer vision techniques, machine learning, and computing power have helped to develop applications with more precise matching between the real world and virtual content. AR tracking techniques can be divided into two subcategories: marker-based and marker-less approaches. This paper provides a comprehensive overview of marker-less registration and tracking techniques and reviews their most important categories in the context of ubiquitous Geospatial Information Systems (GIS) and AR, focusing on health and environmental applications. Basic ideas, advantages, and disadvantages, as well as challenges, are discussed for each subcategory of tracking and registration techniques. We need sufficiently precise virtual models of the environment for both calibration of tracking and visualization. Ubiquitous GISs can play an important role in developing AR in terms of providing seamless and precise spatial data for outdoor (e.g., environmental applications) and indoor (e.g., health applications) environments.
APA, Harvard, Vancouver, ISO, and other styles
34

Gunay, Noel S., Elmer P. Dadios, Ryan Rhay P. Vicerra, Argel A. Bandala, and Laurence A. Gan Lim. "Synchronized Dual Camera Vision System for Locating and Identify Highly Dynamic Objects." Journal of Advanced Computational Intelligence and Intelligent Informatics 18, no. 5 (September 20, 2014): 776–83. http://dx.doi.org/10.20965/jaciii.2014.p0776.

Full text
Abstract:
This paper presents machine vision for locating and identifying 23 highly dynamic objects on a 4.4 m by 2.8 m micro-robot soccer playing field. The approach is based on the idea that the two camera vision subsystems should be synchronized and well informed in real time of the combined vision data and of the selection of objects to track under each other’s camera view. A measure of the effectiveness of using incremental tracking for two-camera operation is developed and is used to evaluate the introduced approach through experimentation. A real-time visualization of the whole playfield containing the 22 micro robots and a golf ball is also provided for the system operator to validate the objects’ actual poses against the vision system’s measurements. Results show that the proposed technique is very fast, accurate, reliable, and robust to external disturbances.
APA, Harvard, Vancouver, ISO, and other styles
35

Othman, Zuraini, Asmala Ahmad, Fauziah Kasmin, Sharifah Sakinah Syed Ahmad, Mohd Yazid Abu Sari, and Muhammad Amin Mustapha. "Comparison between Edge Detection Methods on UTeM Unmanned Arial Vehicles Images." MATEC Web of Conferences 150 (2018): 06029. http://dx.doi.org/10.1051/matecconf/201815006029.

Full text
Abstract:
Machine vision calls for the use of detectors to ascertain the features and type of object portrayed in an image. The employment of unmanned aerial vehicles (UAVs), which can function freely in active and precarious settings, is currently gaining momentum. These vehicles are mainly used for detecting, classifying and tracking objects. However, the achievement of these objectives necessitates an effective edge detection procedure. Sobel, Canny, Prewitt and LoG are among the many edge detection procedures presently available. In this endeavour, we opted to use UTeM UAV images for an evaluation of these edge detection procedures. During our investigations, the ground-truth edge images were corroborated by a specialist in this field. The results obtained from these investigations revealed that in terms of accuracy, precision, sensitivity and f-measure, the Prewitt procedure outperforms the other methods mentioned.
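The precision/sensitivity/f-measure comparison against ground-truth edge maps reduces to pixel-level counting; a minimal sketch with a toy predicted edge map (the maps here are fabricated, not UTeM data):

```python
import numpy as np

def edge_scores(pred, gt):
    # Pixel-wise comparison of a binary edge map against the ground truth
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / max(tp + fp, 1)
    sensitivity = tp / max(tp + fn, 1)          # a.k.a. recall
    f_measure = 2 * precision * sensitivity / max(precision + sensitivity, 1e-9)
    return precision, sensitivity, f_measure

gt = np.zeros((10, 10), bool); gt[5, :] = True          # true edge along one row
pred = np.zeros((10, 10), bool); pred[5, :8] = True     # detector misses 2 pixels
p, s, f = edge_scores(pred, gt)
print(p, s, round(f, 3))                                # 1.0 0.8 0.889
```

Running this for each detector's output (Sobel, Canny, Prewitt, LoG) against the expert-validated ground truth yields exactly the per-method comparison table the abstract reports.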
APA, Harvard, Vancouver, ISO, and other styles
36

Pandian, A. Pasumpon. "Review on Image Recoloring Methods for Efficient Naturalness by Coloring Data Modeling Methods for Low Visual Deficiency." September 2021 3, no. 3 (August 27, 2021): 169–83. http://dx.doi.org/10.36548/jaicn.2021.3.002.

Full text
Abstract:
Recent research has discovered new applications for object tracking and identification by simulating the colour distribution of a homogeneous region. The colour distribution of an object is resilient when it is subjected to partial occlusion, scaling, and distortion. When rotated in depth, it may remain relatively stable in other applications. The challenging task in image recoloring is the identification of the dichromatic color appearance, which remains a significant requirement in many image recoloring sectors. This research study provides three different vision descriptions for image recoloring methods, each with its own unique twist. The descriptions of protanopia, deuteranopia, and tritanopia may be incorporated and evaluated using parametric, machine learning, and reinforcement learning techniques, among others. Through the use of different image recoloring techniques, it has been shown that the supervised learning method outperforms other conventional methods based on performance measures such as the naturalness index and the feature similarity index (FSIM).
APA, Harvard, Vancouver, ISO, and other styles
37

Feng, Zhanshen. "An Image Detection Method Based on Parameter Optimization of Support Vector Machine." International Journal of Circuits, Systems and Signal Processing 15 (April 8, 2021): 306–14. http://dx.doi.org/10.46300/9106.2021.15.35.

Full text
Abstract:
With the progress and development of multimedia image processing technology and the rapid growth of image data, how to efficiently extract interesting and valuable information from huge image data, and effectively filter out redundant data, has become an urgent problem in the field of image processing and computer vision. In recent years, as one of the important branches of computer vision, image detection can assist and improve a series of visual processing tasks. It has been widely used in many fields, such as scene classification, visual tracking, object redirection, semantic segmentation and so on. Intelligent algorithms have strong non-linear mapping capability, data processing capacity and generalization ability. Support vector machine (SVM), by using the structural risk minimization principle, constructs the optimal classification hyper-plane in the attribute space so that the classifier reaches a global optimum and the expected risk satisfies a certain upper bound with a certain probability over the entire sample space. This paper combines SVM and the artificial fish swarm algorithm (AFSA) for parameter optimization, builds an AFSA-SVM classification model to achieve intelligent identification of image features, and provides reliable technological means to accelerate sensing technology. The experimental results show that AFSA-SVM has better classification accuracy and indicate that the algorithm of this paper can effectively realize intelligent identification of image features.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Qi, Boxue Zhang, Shuchang Lyu, Hong Zhang, Daniel Sun, Guoqiang Li, and Wenquan Feng. "A CNN-SIFT Hybrid Pedestrian Navigation Method Based on First-Person Vision." Remote Sensing 10, no. 8 (August 5, 2018): 1229. http://dx.doi.org/10.3390/rs10081229.

Full text
Abstract:
The emergence of new wearable technologies, such as action cameras and smart glasses, has driven the use of the first-person perspective in computer applications. This field is now attracting the attention and investment of researchers aiming to develop methods to process first-person vision (FPV) video. The current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives, such as object detection, activity recognition, user–machine interaction, etc. FPV-based navigation is necessary in some special areas, where Global Position System (GPS) or other radio-wave strength methods are blocked, and is especially helpful for visually impaired people. In this paper, we propose a hybrid structure with a convolutional neural network (CNN) and local image features to achieve FPV pedestrian navigation. A novel end-to-end trainable global pooling operator, called AlphaMEX, has been designed to improve the scene classification accuracy of CNNs. A scale-invariant feature transform (SIFT)-based tracking algorithm is employed for movement estimation and trajectory tracking of the person through each frame of FPV images. Experimental results demonstrate the effectiveness of the proposed method. The top-1 error rate of the proposed AlphaMEX-ResNet outperforms the original ResNet (k = 12) by 1.7% on the ImageNet dataset. The CNN-SIFT hybrid pedestrian navigation system reaches 0.57 m average absolute error, which is an adequate accuracy for pedestrian navigation. Both positions and movements can be well estimated by the proposed pedestrian navigation algorithm with a single wearable camera.
APA, Harvard, Vancouver, ISO, and other styles
39

Oyekan, John, Axel Fischer, Windo Hutabarat, Christopher Turner, and Ashutosh Tiwari. "Utilising low cost RGB-D cameras to track the real time progress of a manual assembly sequence." Assembly Automation 40, no. 6 (November 7, 2019): 925–39. http://dx.doi.org/10.1108/aa-06-2018-078.

Full text
Abstract:
Purpose The purpose of this paper is to explore the role that computer vision can play within new industrial paradigms such as Industry 4.0 and in particular to support production line improvements to achieve flexible manufacturing. As Industry 4.0 requires “big data”, it is accepted that computer vision could be one of the tools for its capture and efficient analysis. RGB-D data gathered from real-time machine vision systems such as Kinect® can be processed using computer vision techniques. Design/methodology/approach This research exploits RGB-D cameras such as Kinect® to investigate the feasibility of using computer vision techniques to track the progress of a manual assembly task on a production line. Several techniques to track the progress of a manual assembly task are presented. The use of CAD model files to track the manufacturing tasks is also outlined. Findings This research has found that RGB-D cameras can be suitable for object recognition within an industrial environment if a number of constraints are considered or different devices/techniques combined. Furthermore, through the use of an HMM-inspired state-based workflow, the algorithm presented in this paper is computationally tractable. Originality/value Processing of data from robust and cheap real-time machine vision systems could bring increased understanding of production line features. In addition, new techniques that enable the progress tracking of manual assembly sequences may be defined through the further analysis of such visual data. The approaches explored within this paper make a contribution to the utilisation of visual information “big data” sets for more efficient and automated production.
APA, Harvard, Vancouver, ISO, and other styles
40

Maltezos, Evangelos, Athanasios Douklias, Aris Dadoukis, Fay Misichroni, Lazaros Karagiannidis, Markos Antonopoulos, Katerina Voulgary, Eleftherios Ouzounoglou, and Angelos Amditis. "The INUS Platform: A Modular Solution for Object Detection and Tracking from UAVs and Terrestrial Surveillance Assets." Computation 9, no. 2 (January 29, 2021): 12. http://dx.doi.org/10.3390/computation9020012.

Full text
Abstract:
Situational awareness is a critical aspect of the decision-making process in emergency response and civil protection and requires the availability of up-to-date information on the current situation. In this context, the related research should not only encompass developing innovative single solutions for (real-time) data collection, but also on the aspect of transforming data into information so that the latter can be considered as a basis for action and decision making. Unmanned systems (UxV) as data acquisition platforms and autonomous or semi-autonomous measurement instruments have become attractive for many applications in emergency operations. This paper proposes a multipurpose situational awareness platform by exploiting advanced on-board processing capabilities and efficient computer vision, image processing, and machine learning techniques. The main pillars of the proposed platform are: (1) a modular architecture that exploits unmanned aerial vehicle (UAV) and terrestrial assets; (2) deployment of on-board data capturing and processing; (3) provision of geolocalized object detection and tracking events; and (4) a user-friendly operational interface for standalone deployment and seamless integration with external systems. Experimental results are provided using RGB and thermal video datasets and applying novel object detection and tracking algorithms. The results show the utility and the potential of the proposed platform, and future directions for extension and optimization are presented.
APA, Harvard, Vancouver, ISO, and other styles
41

Ciric, Ivan, Zarko Cojbasic, Danijela Ristic-Durrant, Vlastimir Nikolic, Milica Ciric, Milos Simonovic, and Ivan Pavlovic. "Thermal vision based intelligent system for human detection and tracking in mobile robot control system." Thermal Science 20, suppl. 5 (2016): 1553–59. http://dx.doi.org/10.2298/tsci16s5553c.

Full text
Abstract:
This paper presents the results of the authors in thermal vision based mobile robot control. The most important segment of the high-level control loop of the mobile robot platform is an intelligent real-time algorithm for human detection and tracking. Temperature variations across the same objects, air flow with different temperature gradients, reflections, persons overlapping while crossing each other, and many other non-linearities, uncertainties and noise pose challenges for thermal image processing and therefore create the need for computationally intelligent algorithms to obtain efficient performance from a human motion tracking system. The main goal was to enable the mobile robot platform, or any technical system, to recognize a person in an indoor environment, localize them and track them with accuracy high enough to allow adequate human-machine interaction. The developed computationally intelligent algorithms enable robust and reliable human detection and tracking based on a neural network classifier and an autoregressive neural network for time series prediction. The intelligent algorithm used for thermal image segmentation gives accurate inputs for classification.
APA, Harvard, Vancouver, ISO, and other styles
42

Opromolla, Roberto, Giuseppe Inchingolo, and Giancarmine Fasano. "Airborne Visual Detection and Tracking of Cooperative UAVs Exploiting Deep Learning." Sensors 19, no. 19 (October 7, 2019): 4332. http://dx.doi.org/10.3390/s19194332.

Full text
Abstract:
The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative approach to carry out this task based on deep learning. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high-level accuracy, and robustness against challenging conditions in terms of illumination, background and target-range variability.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Shu Shan, and Juan P. Wachs. "The Improvement and Application of Intelligence Tracking Algorithm for Moving Logistics Objects Based on Machine Vision Sensor." Sensor Letters 11, no. 5 (May 1, 2013): 862–69. http://dx.doi.org/10.1166/sl.2013.2658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Luo, ZongWei, Martin Lai, Mary Cheung, ShuiHua Han, Tianle Zhang, Zhongjun Luo, James Ting, et al. "Developing Local Association Network Based IoT Solutions for Body Parts Tagging and Tracking." International Journal of Systems and Service-Oriented Engineering 1, no. 4 (October 2010): 42–64. http://dx.doi.org/10.4018/jssoe.2010100104.

Full text
Abstract:
The traditional Internet is commonly wired with persistent machine-to-machine connections. Evolving towards mobile and wireless pervasive networks, the Internet has to accommodate dynamic, transient, and changing interconnections. The vision of the Internet of Things furthers technology development by creating an interactive environment where smart objects are connected and can sense and react to the environment. Adopting such an innovative technology often requires extensive intelligence research. A major value indicator is how the potential of RFID can translate into actions that improve business operational efficiency (Luo et al., 2008). In this paper, the authors introduce a local association network with a coordinated P2P message delivery mechanism to develop Internet of Things based solutions for body parts tagging and tracking. On-site testing and performance evaluation validate the proposed approach. User feedback strengthens the belief that the proposed approach would help facilitate technology adoption in body parts tagging and tracking.
APA, Harvard, Vancouver, ISO, and other styles
45

Merrad, Yacine, Mohamed Hadi Habaebi, Md Rafiqul Islam, and Teddy Surya Gunawan. "A Real-time Mobile Notification System for Inventory Stock out Detection using SIFT and RANSAC." International Journal of Interactive Mobile Technologies (iJIM) 14, no. 05 (April 7, 2020): 32. http://dx.doi.org/10.3991/ijim.v14i05.13315.

Full text
Abstract:
Object detection and tracking is one of the most relevant computer technologies related to computer vision and image processing. It may mean detecting an object within a frame and classifying it (human, animal, vehicle, building, etc.) using some algorithm. It may also mean detecting a reference object across different frames (under different angles, different scales, etc.). The applications of object detection and tracking are numerous; most of them are in the security field. It is also used in daily-life applications, especially in developing and enhancing business management. Inventory or stock management is one of these applications. It is considered an important process in the warehousing and storage business because it allows control of stock-in and stock-out products. The stock-out situation, however, is a very serious issue that can be detrimental to the bottom line of any business. It increases the risk of lost sales and leads to reduced customer satisfaction and lowered loyalty. On this note, a smart solution for stock-out detection in warehouses is proposed in this paper to automate the process using inventory management software. The proposed method is a machine-learning-based real-time notification system using the existing Scale Invariant Feature Transform (SIFT) feature detector and the Random Sample Consensus (RANSAC) algorithm. The comparative study shows the overall good performance of the system, achieving 100% detection accuracy with a feature-rich model and 90% detection accuracy with a feature-poor model, indicating the viability of the proposed solution.
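The RANSAC step the system relies on can be illustrated with a minimal, self-contained sketch (robust line fitting rather than the homography estimation an actual SIFT-matching pipeline would perform):

```python
import random

def ransac_line(points, iters=200, threshold=0.5, seed=0):
    """Fit y = m*x + c robustly: repeatedly sample two points, fit a line,
    and keep the model that gathers the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers, as SIFT mismatches would be.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # (2.0, 1.0) 10
```

The outliers never attract a majority of inliers, so the recovered model ignores them entirely; that is exactly the property that makes RANSAC suitable for pruning mismatched SIFT correspondences.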
APA, Harvard, Vancouver, ISO, and other styles
46

Taha, Mohammed Yaseen, and Qahhar Muhammad Qadir. "Cost Effective and Easily Configurable Indoor Navigation System." UKH Journal of Science and Engineering 5, no. 1 (June 30, 2021): 60–72. http://dx.doi.org/10.25079/ukhjse.v5n1y2021.pp60-72.

Full text
Abstract:
With the advent of Industry 4.0, the trend of its implementation in current factories has increased tremendously. Using autonomous mobile robots capable of navigating and handling material in a warehouse is one of the important pillars for converting current warehouse inventory control into more automated and smart processes aligned with Industry 4.0 needs. Indoor robot positioning and navigation, as well as material finding, are examples of location-based services (LBS) and major aspects of Industry 4.0 implementation in warehouses that should be considered. The Global Positioning System (GPS) is accurate and reliable for outdoor navigation and positioning but is not suitable for indoor use. Indoor positioning systems (IPS) have been proposed to overcome this shortcoming and extend this valuable service to indoor navigation and positioning. This paper proposes a simple, cost-effective, and easily configurable indoor navigation system built around an optical path-following unmanned ground vehicle (UGV) robot augmented by image processing and deep-machine-learning computer vision algorithms. The proposed system prototype is capable of navigating a warehouse, as an example of an indoor area, by tracking and following a predefined traced path that covers all inventory zones, using infrared reflective sensors that detect black traced path lines on bright ground. As mentioned before, this general navigation mechanism is augmented and enhanced with artificial intelligence (AI) computer vision tasks so the robot can select the path to the required inventory zone as its destination and locate the requested material within that zone. The AI computer vision tasks used in the proposed prototype are deep-machine-learning object recognition algorithms for path selection and quick response (QR) code detection.
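The basic line-following decision from two infrared reflective sensors, as described above, can be sketched as follows (hypothetical command names; the prototype's actual control logic is not given in the abstract):

```python
def steer(left_dark, right_dark):
    """Decide a drive command from two infrared reflectance sensors
    straddling a dark traced line on bright ground."""
    if left_dark and right_dark:
        return "forward"      # both sensors over the line (junction or thick mark)
    if left_dark:
        return "turn_left"    # line drifted left, steer back toward it
    if right_dark:
        return "turn_right"   # line drifted right
    return "forward"          # line still between the two sensors

print(steer(True, False))  # turn_left
```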
APA, Harvard, Vancouver, ISO, and other styles
47

Pham, Tuan-Anh, and Myungsik Yoo. "Nighttime Vehicle Detection and Tracking with Occlusion Handling by Pairing Headlights and Taillights." Applied Sciences 10, no. 11 (June 8, 2020): 3986. http://dx.doi.org/10.3390/app10113986.

Full text
Abstract:
In recent years, vision-based vehicle detection has received considerable attention in the literature. Depending on the ambient illuminance, vehicle detection methods are classified as daytime and nighttime detection methods. In this paper, we propose a nighttime vehicle detection and tracking method with occlusion handling based on vehicle lights. First, bright blobs that may be vehicle lights are segmented in the captured image. Then, a machine learning-based method is proposed to classify whether the bright blobs are headlights, taillights, or other illuminant objects. Subsequently, the detected vehicle lights are tracked to further facilitate the determination of the vehicle position. As one vehicle is indicated by one or two light pairs, a light pairing process using spatiotemporal features is applied to pair vehicle lights. Finally, vehicle tracking with occlusion handling is applied to refine incorrect detections under various traffic situations. Experiments on two-lane and four-lane urban roads are conducted, and a quantitative evaluation of the results shows the effectiveness of the proposed method.
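The light-pairing step can be illustrated with a purely spatial sketch (the paper also uses temporal features, omitted here; the thresholds are illustrative, not the authors' values):

```python
def pair_lights(blobs, max_dy=5, max_size_ratio=1.5):
    """Greedily pair light blobs (cx, cy, w, h) that lie on roughly the
    same image row with similar sizes, as the two lamps of one vehicle do."""
    pairs, used = [], set()
    for i, a in enumerate(blobs):
        if i in used:
            continue
        for j in range(i + 1, len(blobs)):
            if j in used:
                continue
            b = blobs[j]
            if abs(a[1] - b[1]) > max_dy:
                continue  # not on the same row
            if max(a[2], b[2]) > max_size_ratio * min(a[2], b[2]):
                continue  # sizes too dissimilar
            pairs.append((i, j))
            used.update((i, j))
            break
    return pairs

# Two taillights on one row plus a street lamp higher in the frame.
blobs = [(100, 200, 12, 12), (160, 202, 11, 11), (300, 80, 30, 30)]
print(pair_lights(blobs))  # [(0, 1)]
```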
APA, Harvard, Vancouver, ISO, and other styles
48

Uddin, Machbah, Hira Lal Gope, Md Sayeed Iftekhar Yousuf, Dilshad Islam, and Mohammad Khairul Islam. "Crowd Detection in Still Images Using Combined HOG and SIFT Feature." Indonesian Journal of Electrical Engineering and Computer Science 4, no. 2 (November 1, 2016): 447. http://dx.doi.org/10.11591/ijeecs.v4.i2.pp447-458.

Full text
Abstract:
Person detection and tracking in crowds is a challenging task. We detect the head region and, based on this head region, detect people in a crowd. Individual object detection has improved significantly in recent times, but crowd detection and tracking still present challenges. Crowd analysis is a highly focused area for law enforcement, urban engineering, and traffic management. Many incidents occur in crowded areas during large events. In this research, low resolution and variety of image orientations are key factors, and overlapping person images in a crowd can mislead the system. An enhanced interest-point detection system based on gradient orientation information, together with improved HOG feature extraction, is used to identify human heads or faces in a crowd. We analyzed images of different types and varieties and achieved an accuracy of 88-90%. In a number of applications, such as document analysis and some industrial machine vision tasks, binary images can be used as input to algorithms that perform useful tasks. These algorithms can handle tasks ranging from very simple counting to much more complex recognition, localization, and inspection. Thus, by studying binary image analysis before moving on to grey-tone and color images, one can gain insight into the entire image analysis process.
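The gradient-orientation information underlying HOG features can be illustrated with a minimal cell histogram (a simplified sketch: real HOG adds bilinear bin interpolation and block normalization, which are omitted here):

```python
import math

def orientation_histogram(cell, bins=9):
    """Histogram of gradient orientations over one cell of grey values,
    the building block of a HOG descriptor."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(angle / (180 / bins)) % bins] += mag
    return hist

# A vertical step edge: all gradient energy falls into the 0-degree bin.
cell = [[0, 0, 10, 10]] * 4
hist = orientation_histogram(cell)
print(hist)
```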
APA, Harvard, Vancouver, ISO, and other styles
49

HAN, LONG, XINYU WU, YONGSHENG OU, YEN-LUN CHEN, CHUNJIE CHEN, and YANGSHENG XU. "HOUSEHOLD SERVICE ROBOT WITH CELLPHONE INTERFACE." International Journal of Information Acquisition 09, no. 02 (June 2013): 1350009. http://dx.doi.org/10.1142/s0219878913500095.

Full text
Abstract:
In this paper, an efficient, low-cost, cellphone-commandable mobile manipulation system is described. Aimed at home and elderly care, this system can be easily commanded through a common cellphone network to efficiently grasp objects in a household environment, using several low-cost off-the-shelf devices. Unlike visual servo technology that relies on a high-quality, high-cost vision system, a household service robot may not be able to afford such a system, so it is essential to use low-cost devices. However, it is extremely challenging to use such vision for precise localization as well as motion control. To tackle this challenge, we developed a real-time vision system with a reliable grasping algorithm combining machine vision, robot kinematics, and motor control technology. After the target is captured by the arm camera, the arm camera keeps tracking the target while the arm keeps stretching until the end effector reaches the target. If the target is not captured by the arm camera, the arm moves to help the arm camera capture the target under the guidance of the head camera. The algorithm is implemented on two robot systems: one with a fixed base and one with a mobile base. The results demonstrate the feasibility and efficiency of the developed algorithm and system, and the study shown in this paper is significant for developing service robots for modern household environments.
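The track-while-stretching loop described above rests on keeping the target centred in the arm-camera image; a minimal proportional-control sketch (hypothetical gain and interface, not the authors' controller) is:

```python
def center_target(cx, cy, frame_w, frame_h, gain=0.01):
    """Proportional visual-servo step: compute pan/tilt corrections that
    move the tracked target (cx, cy) toward the image centre."""
    err_x = cx - frame_w / 2
    err_y = cy - frame_h / 2
    return -gain * err_x, -gain * err_y  # pan, tilt corrections

# Target right of and below centre: pan left, tilt up (negative corrections).
print(center_target(400, 300, 640, 480))  # approximately (-0.8, -0.6)
```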
APA, Harvard, Vancouver, ISO, and other styles
50

Bouteraa, Yassine, and Ismail Ben Abdallah. "A gesture-based telemanipulation control for a robotic arm with biofeedback-based grasp." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 575–87. http://dx.doi.org/10.1108/ir-12-2016-0356.

Full text
Abstract:
Purpose: The idea is to exploit the natural stability and performance of the human arm during movement, execution, and manipulation. The purpose of this paper is to remotely control a handling robot with a low-cost but effective solution.
Design/methodology/approach: The developed approach is based on three different techniques to ensure movement and pattern recognition of the operator's arm as well as effective control of the object manipulation task. First, the methodology relies on Kinect-based gesture recognition of the operator's arm. However, a vision-based approach alone is not a suitable solution for hand posture recognition, mainly when the hand is occluded. The proposed approach therefore supports the vision-based system with an electromyography (EMG)-based biofeedback system for posture recognition. Moreover, the novel approach adds to the vision-based gesture control and the EMG-based posture recognition a force feedback that informs the operator of the real grasping state.
Findings: The main finding is a robust method able to control a robot manipulator by gesture during movement, manipulation, and grasping. The proposed approach uses a real-time gesture control technique based on a Kinect camera that can provide the exact position of each joint of the operator's arm. The developed solution also integrates EMG biofeedback and force feedback in its control loop. In addition, the authors propose a highly user-friendly human-machine interface (HMI) that allows the user to control a robotic arm in real time. The robust trajectory tracking challenge has been solved by implementing a sliding mode controller, and a fuzzy logic controller has been implemented to manage the grasping task based on the EMG signal. Experimental results have shown the high efficiency of the proposed approach.
Research limitations/implications: There are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm, even in the case of random and unwanted movements. This can damage the manipulated object during the teleoperation process; in such cases, operator skill is highly required.
Practical implications: The developed control approach can be used in all applications that require real-time human-robot cooperation.
Originality/value: The main advantage of the developed approach is that it benefits simultaneously from three techniques: EMG biofeedback, a vision-based system, and haptic feedback. In such situations, using only vision-based approaches, mainly for hand posture recognition, is not effective; the recognition should be based on the biofeedback naturally generated by the muscles responsible for each posture. Moreover, the use of a force sensor in a closed-loop control scheme without operator intervention is ineffective in special cases in which the manipulated objects vary over a wide range with different metallic characteristics. Therefore, the use of a human-in-the-loop technique can imitate natural human postures in the grasping task.
APA, Harvard, Vancouver, ISO, and other styles
