Academic literature on the topic 'Machine vision; Object tracking'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine vision; Object tracking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Machine vision; Object tracking"

1

Patil, Rupali, Adhish Velingkar, Mohammad Nomaan Parmar, Shubham Khandhar, and Bhavin Prajapati. "Machine Vision Enabled Bot for Object Tracking." JINAV: Journal of Information and Visualization 1, no. 1 (October 1, 2020): 15–26. http://dx.doi.org/10.35877/454ri.jinav155.

Abstract:
Object detection and tracking are essential and challenging tasks in many computer vision applications. To detect an object, the first step is to gather information about it. In this design, the robot can detect the object and track it: it can turn left and right and then move forward and backward depending on the object's motion, maintaining a constant distance between the object and itself. We have designed a webpage that displays a live feed from the camera, and the camera can be controlled efficiently by the user. Machine learning is implemented for detection, together with OpenCV and cloud storage. A pan-tilt mechanism, attached to our 3-wheel chassis robot through servo motors, is used for camera control. This idea can be used for surveillance, monitoring of local environments, and human-machine interaction.
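A rough sketch of the centring loop implied by this design is shown below; the servo interface, the gains, and the use of an off-the-shelf OpenCV tracker in place of the authors' learned detector are all assumptions for illustration, not details from the paper.

```python
import cv2

# Hypothetical servo interface; on the real robot this would drive the
# pan/tilt servos on the 3-wheel chassis (placeholder, not from the paper).
def set_servo_angle(channel: int, angle: float) -> None:
    print(f"servo {channel} -> {angle:.1f} deg")

PAN, TILT = 0, 1
pan_angle, tilt_angle = 90.0, 90.0      # start centred
GAIN = 0.05                             # proportional gain (assumed)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)     # manual init stands in for the ML detector
tracker = cv2.TrackerKCF_create()       # any OpenCV tracker (opencv-contrib) will do
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        # Offset of the object centre from the image centre drives pan/tilt
        err_x = (x + w / 2) - frame.shape[1] / 2
        err_y = (y + h / 2) - frame.shape[0] / 2
        pan_angle -= GAIN * err_x
        tilt_angle += GAIN * err_y
        set_servo_angle(PAN, pan_angle)
        set_servo_angle(TILT, tilt_angle)
```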
2

Llano, Christian R., Yuan Ren, and Nazrul I. Shaikh. "Object Detection and Tracking in Real Time Videos." International Journal of Information Systems in the Service Sector 11, no. 2 (April 2019): 1–17. http://dx.doi.org/10.4018/ijisss.2019040101.

Abstract:
Object and human tracking in streaming videos is one of the most challenging problems in vision computing. In this article, we review relevant machine learning algorithms and techniques for human identification and tracking in videos. We provide details on the metrics and methods used in the computer vision literature for monitoring and propose a state-space representation of the object tracking problem. A proof-of-concept implementation of state-space object tracking using particle filters is presented as well. The proposed approach enables tracking of objects and humans in a video, including foreground/background separation for object movement detection.
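The state-space formulation with particle filters lends itself to a compact sketch: particles over the object centre, a random-walk motion model, and a colour-histogram likelihood. The particle count, noise levels, and histogram model below are illustrative assumptions rather than the authors' exact choices.

```python
import cv2
import numpy as np

def colour_hist(patch):
    """Hue histogram used as a simple appearance likelihood."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
    return cv2.normalize(hist, hist).flatten()

N = 200                                  # particle count (assumed)
cap = cv2.VideoCapture("video.avi")      # assumed input
ok, frame = cap.read()
x0, y0, w, h = cv2.selectROI("init", frame)
ref_hist = colour_hist(frame[y0:y0 + h, x0:x0 + w])
particles = np.tile([x0 + w / 2.0, y0 + h / 2.0], (N, 1))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Predict step: random-walk motion model over the object centre
    particles += np.random.normal(0.0, 10.0, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], w, frame.shape[1] - w)
    particles[:, 1] = np.clip(particles[:, 1], h, frame.shape[0] - h)
    # Update step: weight particles by similarity to the reference histogram
    weights = np.empty(N)
    for i, (cx, cy) in enumerate(particles):
        patch = frame[int(cy - h / 2):int(cy + h / 2), int(cx - w / 2):int(cx + w / 2)]
        weights[i] = cv2.compareHist(ref_hist, colour_hist(patch), cv2.HISTCMP_CORREL)
    weights = np.maximum(weights, 1e-6)
    weights /= weights.sum()
    estimate = weights @ particles       # posterior mean = tracked position
    # Resample particles in proportion to their weights
    particles = particles[np.random.choice(N, N, p=weights)]
```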
3

Zhang, Xiao Jing, Chen Ming Sha, and Ya Jie Yue. "A Fast Object Tracking Approach in Vision Application." Applied Mechanics and Materials 513-517 (February 2014): 3265–68. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3265.

Abstract:
Object tracking has always been a hot issue in vision applications; its application areas include video surveillance, human-machine interaction, virtual reality, and so on. In this paper, we introduce the mean shift tracking algorithm, an important non-parametric estimation method, and then evaluate the tracking performance of the mean shift algorithm on different video sequences.
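The mean-shift recipe evaluated here is essentially the standard OpenCV pipeline: back-project a hue histogram of the initial window and let cv2.meanShift climb it. A minimal sketch with an assumed input file follows.

```python
import cv2

cap = cv2.VideoCapture("sequence.avi")          # assumed test sequence
ok, frame = cap.read()
x, y, w, h = cv2.selectROI("init", frame)       # initial target window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean shift climbs the back-projection to the new mode of the target colour
    ret, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imshow("mean shift", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
```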
4

Liu, Liyun. "Moving Object Detection Technology of Line Dancing Based on Machine Vision." Mobile Information Systems 2021 (April 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/9995980.

Abstract:
In this paper, moving object detection technology for line dancing based on machine vision is studied to improve object detection. For this purpose, an improved frame-difference background modeling technique is combined with the target detection algorithm. The moving target is extracted, and morphological post-processing is carried out to make the detection more accurate. On this basis, the tracking target is determined on the time axis in the moving target tracking stage, the position of the target in each frame is found, and the most similar target is located in each frame of the video sequence. An association relationship is established to determine a moving object template or feature. Using suitable measurement criteria, the mean-shift algorithm searches for the optimal candidate target in each image frame and performs the corresponding matching to realize moving object tracking. Experimental analysis shows that this method can detect the moving targets of line dancing in various areas, is not affected by position or distance, and consistently achieves accurate detection.
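The frame-difference-plus-morphology front end can be sketched in a few lines of OpenCV; the plain absolute difference and the thresholds below are simplifications of the paper's improved background model, with an assumed input file.

```python
import cv2

cap = cv2.VideoCapture("line_dance.mp4")        # assumed input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Frame difference stands in for the improved background model
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Morphological post-processing: remove speckle, close holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # assumed minimum blob size
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
```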
5

Wang, Yongqing, and Yanzhou Zhang. "Object Tracking Based on Machine Vision and Improved SVDD Algorithm." International Journal on Smart Sensing and Intelligent Systems 8, no. 1 (2015): 677–96. http://dx.doi.org/10.21307/ijssis-2017-778.

6

Jun, Mao. "Object Detection and Recognition Algorithm of Moving UAV Based on Machine Vision." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 7731–37. http://dx.doi.org/10.1166/jctn.2016.5770.

Abstract:
To overcome the disadvantages of the original distribution field tracking algorithm, namely its slow speed and its tendency to get trapped in local optima, this paper presents a real-time distribution field tracking algorithm based on global matching, which markedly improves the performance of distribution field tracking. In the proposed algorithm, correlation coefficients replace the original L1-norm measure of similarity between the target distribution field and the candidate distribution field. As a consequence, the target search is converted from a time-domain operation to a frequency-domain operation, which has lower computational complexity and can search the target position globally, thereby overcoming drawbacks such as the randomness caused by sparse sampling and the tendency of the original gradient-descent search to become trapped in local optima. On 12 challenging video sequences, compared with the multiple-instance learning tracker and the original distribution field tracker, the proposed method achieves the best tracking accuracy, success rate, and speed.
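The key idea of moving the matching step into the frequency domain can be illustrated with NumPy FFTs; the grayscale cross-correlation below is a simplification of the paper's distribution-field representation.

```python
import numpy as np

def correlate_fft(search: np.ndarray, template: np.ndarray):
    """Locate `template` in `search` by cross-correlation computed in the
    frequency domain, i.e. a global search at O(N log N) cost."""
    H, W = search.shape
    h, w = template.shape
    # Zero-mean the template so flat bright regions do not dominate the score
    t = template - template.mean()
    pad_t = np.zeros_like(search, dtype=float)
    pad_t[:h, :w] = t
    # Cross-correlation = IFFT( FFT(image) * conj(FFT(padded template)) )
    score = np.fft.ifft2(np.fft.fft2(search) * np.conj(np.fft.fft2(pad_t))).real
    valid = score[:H - h + 1, :W - w + 1]          # ignore wrap-around offsets
    y, x = np.unravel_index(np.argmax(valid), valid.shape)
    return x, y                                    # top-left corner of best match

# usage: x, y = correlate_fft(gray_frame.astype(float), gray_patch.astype(float))
```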
7

Zhang, Zheng, Cong Huang, Fei Zhong, Bote Qi, and Binghong Gao. "Posture Recognition and Behavior Tracking in Swimming Motion Images under Computer Machine Vision." Complexity 2021 (May 20, 2021): 1–9. http://dx.doi.org/10.1155/2021/5526831.

Abstract:
This study explores gesture recognition and behavior tracking in swimming motion images under computer machine vision and expands the application of moving target detection and tracking algorithms based on computer machine vision in this field. The objectives are realized through moving target detection and tracking, a Gaussian mixture model, an optimized correlation filtering algorithm, and the Camshift tracking algorithm. Firstly, the Gaussian algorithm is introduced into target tracking and detection to reduce the filtering loss and make the acquired motion posture more accurate. Secondly, an improved kernel correlation filter tracking algorithm is proposed by training multiple filters, which can clearly and accurately obtain the motion trajectory of the monitored target object. Finally, the Kalman algorithm is combined with the Camshift algorithm for optimization, which completes the tracking and recognition of moving targets. The experimental results show that the target tracking and detection method can obtain the movement form of the template object relatively completely, and the kernel correlation filter tracking algorithm can also obtain the movement speed of the target object precisely. In addition, the accuracy of the Camshift tracking algorithm reaches 86.02%. The results of this study can provide reliable data support and a reference for expanding the application of moving target detection and tracking methods.
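A minimal sketch of the Camshift-plus-Kalman combination using OpenCV's built-in classes is given below; the noise covariances and the input file are assumptions, not values from the study.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over the target centre (x, y, vx, vy)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

cap = cv2.VideoCapture("swimmer.mp4")            # assumed input
ok, frame = cap.read()
x, y, w, h = cv2.selectROI("init", frame)
track_window = (x, y, w, h)
kf.statePost = np.array([[x + w / 2], [y + h / 2], [0], [0]], np.float32)
hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                    # Kalman prediction of the centre
    back = cv2.calcBackProject([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)],
                               [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back, track_window, term)
    cx, cy = rot_rect[0]                         # CamShift centre used as measurement
    kf.correct(np.array([[cx], [cy]], np.float32))
    smoothed = kf.statePost[:2].ravel()          # fused estimate of the target centre
```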
8

Akbari Sekehravani, Ehsan, Eduard Babulak, and Mehdi Masoodi. "Flying object tracking and classification of military versus nonmilitary aircraft." Bulletin of Electrical Engineering and Informatics 9, no. 4 (August 1, 2020): 1394–403. http://dx.doi.org/10.11591/eei.v9i4.1843.

Abstract:
Tracking of moving objects in a sequence of images is one of the important and practical branches of machine vision technology. Detecting and tracking a flying object with unknown features is an important problem. This paper consists of two basic parts. The first part involves tracking multiple flying objects: flying objects are first detected and tracked using the particle filter algorithm. The second part classifies the tracked objects (military or nonmilitary) based on four criteria: the size (center of mass) of the objects, the object speed vector, the direction of motion of the objects, and thermal imagery, which together identify the type of the tracked flying objects. To demonstrate the efficiency and robustness of the algorithm and the above system, several scenarios in different videos have been investigated, including challenges such as the number of objects (aircraft), different paths, diverse directions of motion, different speeds, and various object types. Among the most important challenges are processing speed and imaging angle.
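The four classification criteria can be pictured as a simple rule over track statistics; the thresholds and the thermal score below are placeholders, not values from the paper.

```python
import numpy as np

def classify_track(centroids, areas, thermal_score):
    """Rudimentary military/non-military decision from track statistics.
    `centroids` is an (N, 2) array of per-frame object centres (pixels),
    `areas` the per-frame blob sizes, and `thermal_score` a value in [0, 1]
    from a separate thermal-signature module (all assumptions)."""
    c = np.asarray(centroids, dtype=float)
    velocity = np.diff(c, axis=0)                 # per-frame speed vectors
    speed = np.linalg.norm(velocity, axis=1).mean()
    heading_spread = np.degrees(np.arctan2(velocity[:, 1], velocity[:, 0])).std()
    size = float(np.mean(areas))
    # Placeholder decision rule combining the four criteria
    score = (speed > 40) + (heading_spread > 30) + (size > 1500) + (thermal_score > 0.5)
    return "military" if score >= 3 else "non-military"
```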
9

Delforouzi, Ahmad, Bhargav Pamarthi, and Marcin Grzegorzek. "Training-Based Methods for Comparison of Object Detection Methods for Visual Object Tracking." Sensors 18, no. 11 (November 16, 2018): 3994. http://dx.doi.org/10.3390/s18113994.

Abstract:
Object tracking in challenging videos is a hot topic in machine vision. Recently, novel training-based detectors, especially those using powerful deep learning schemes, have been proposed to detect objects in still images. However, there is still a semantic gap between the object detectors and higher-level applications like object tracking in videos. This paper presents a comparative study of outstanding learning-based object detectors such as ACF, Region-Based Convolutional Neural Network (RCNN), FastRCNN, FasterRCNN and You Only Look Once (YOLO) for object tracking. We use an online and an offline training method for tracking. The online tracker trains the detectors with a synthetic set of images generated from the object of interest in the first frame. Then, the detectors detect the objects of interest in the next frames. The detector is updated online by using the detected objects from the last frames of the video. The offline tracker uses the detector for object detection in still images, and then a tracker based on a Kalman filter associates the objects among video frames. Our research is performed on the TLD dataset, which contains challenging situations for tracking. Source code and implementation details for the trackers are published to enable both the reproduction of the results reported in this paper and the re-use and further development of the trackers by other researchers. The results demonstrate that the ACF and YOLO trackers show more stability than the other trackers.
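The offline tracker's detection-association step can be sketched as a greedy nearest-neighbour assignment against predicted track positions; the constant-velocity predictor below is a deliberately simplified stand-in for the Kalman filter used in the paper.

```python
import numpy as np

class Track:
    """Constant-velocity track; the paper's Kalman filter is reduced here to a
    simple predict/blend update to keep the sketch short."""
    def __init__(self, centre):
        self.pos = np.asarray(centre, dtype=float)
        self.vel = np.zeros(2)

    def predict(self):
        return self.pos + self.vel

    def update(self, detection):
        detection = np.asarray(detection, dtype=float)
        self.vel = 0.5 * self.vel + 0.5 * (detection - self.pos)
        self.pos = detection

def associate(tracks, detections, gate=80.0):
    """Greedily assign per-frame detections (object centres) to the nearest
    predicted track; unmatched detections start new tracks. The gate is an
    assumed maximum association distance in pixels."""
    unused = set(range(len(detections)))
    for track in tracks:
        if not unused:
            break
        pred = track.predict()
        j = min(unused, key=lambda i: np.linalg.norm(np.asarray(detections[i]) - pred))
        if np.linalg.norm(np.asarray(detections[j]) - pred) < gate:
            track.update(detections[j])
            unused.discard(j)
    tracks.extend(Track(detections[j]) for j in unused)

# usage per frame: associate(tracks, [(x1, y1), (x2, y2), ...])
```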
10

Aziz, Nor Nadirah Abdul, Yasir Mohd Mustafah, Amelia Wong Azman, Amir Akramin Shafie, Muhammad Izad Yusoff, Nor Afiqah Zainuddin, and Mohammad Ariff Rashidan. "Features-Based Moving Objects Tracking for Smart Video Surveillances: A Review." International Journal on Artificial Intelligence Tools 27, no. 02 (March 2018): 1830001. http://dx.doi.org/10.1142/s0218213018300016.

Abstract:
Video surveillance is one of the most active research topics in computer vision due to the increasing need for security. Although surveillance systems are getting cheaper, the cost of having human operators monitor the video feed can be very high and inefficient. To overcome this problem, automated visual surveillance systems can be used to detect any suspicious activity that requires immediate action. The framework of a video surveillance system encompasses a large scope of machine vision: background modelling, object detection, moving object classification, tracking, and motion analysis, and it requires the fusion of information from camera networks. This paper reviews recent techniques used by researchers for moving object detection and tracking in order to solve many surveillance problems. The features and algorithms used for modelling object appearance and tracking multiple objects in outdoor and indoor environments are also reviewed, and the recent work by previous researchers on moving object tracking for single-camera and multiple-camera views is summarized. Nevertheless, despite recent progress in surveillance technologies, there are still challenges that need to be solved before reliable automated video surveillance can be achieved.

Dissertations / Theses on the topic "Machine vision; Object tracking"

1

Case, Isaac. "Automatic object detection and tracking in video." Online version of thesis, 2010. http://hdl.handle.net/1850/12332.

2

Clarke, John Christopher. "Applications of sequence geometry to visual motion." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244549.

3

Tydén, Amanda, and Sara Olsson. "Edge Machine Learning for Animal Detection, Classification, and Tracking." Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166572.

Abstract:
A research field currently advancing is the use of machine learning on camera trap data, yet few explore deep learning for camera traps that runs in real time. A camera trap has the purpose of capturing images of passing animals and is traditionally based only on motion detection. This work integrates machine learning on the edge device to also perform object detection. Related research is reviewed and model tests are performed with a focus on the trade-off between inference speed and model accuracy. Transfer learning is used to exploit pre-trained models and thus reduce training time and the amount of training data. Four models with slightly different architectures are compared to evaluate which performs best for the use case. The models tested are SSD MobileNet V2, SSD Inception V2, SSDLite MobileNet V2, and quantized SSD MobileNet V2. Given the client-side usage of the model, SSD MobileNet V2 was finally selected due to a satisfying trade-off between inference speed and accuracy. Even though it is less accurate in its detections, its ability to process more images per second makes it outperform the more accurate Inception network in object tracking. A contribution of this work is a lightweight tracking solution using tubelet proposals. This work further discusses the open set recognition problem, where just a few object classes are of interest while many others are present. The subject of open set recognition influences data collection and evaluation tests; it is, however, left for further work to research how to integrate support for open set recognition in object detection models. The proposed system handles detection, classification, and tracking of animals in the African savannah and has potential for real usage as it produces meaningful events.
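On-device inference with such a model typically goes through the TensorFlow Lite interpreter; a minimal sketch follows, where the model file, the colour handling, and the output ordering are assumptions that depend on how the SSD model was exported.

```python
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_v2.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

frame = cv2.imread("camera_trap.jpg")                 # assumed test image
h, w = int(inp["shape"][1]), int(inp["shape"][2])
img = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
img = img[None, ...].astype(inp["dtype"])             # uint8 assumed for the quantized model

interpreter.set_tensor(inp["index"], img)
interpreter.invoke()

# Typical post-processed SSD outputs: boxes, classes, scores (order may differ per export)
boxes = interpreter.get_tensor(outs[0]["index"])[0]
classes = interpreter.get_tensor(outs[1]["index"])[0]
scores = interpreter.get_tensor(outs[2]["index"])[0]
for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(int(cls), float(score), box)            # box: [ymin, xmin, ymax, xmax], normalised
```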
4

Stigson, Magnus. "Object Tracking Using Tracking-Learning-Detection in Thermal Infrared Video." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93936.

Abstract:
Automatic tracking of an object of interest in a video sequence is a task that has been much researched. Difficulties include varying scale of the object, rotation, and object appearance changing over time, leading to tracking failures. Different tracking methods, such as short-term tracking, often fail if the object steps out of the camera's field of view or changes shape rapidly. Also, small inaccuracies in the tracking method can accumulate over time, which can lead to tracking drift. Long-term tracking is also problematic, partly due to updating and degradation of the object model, leading to incorrectly classified and tracked objects. This master's thesis implements a long-term tracking framework called Tracking-Learning-Detection which can learn and adapt, using so-called P/N-learning, to changing object appearance over time, thus making it more robust to tracking failures. The framework consists of three parts: a tracking module which follows the object from frame to frame, a learning module that learns new appearances of the object, and a detection module which can detect learned appearances of the object and correct the tracking module if necessary. This tracking framework is evaluated on thermal infrared videos and the results are compared to the results obtained from videos captured within the visible spectrum. Several important differences between visual and thermal infrared tracking are presented, and the effect these have on the tracking performance is evaluated. In conclusion, the results are analyzed to evaluate which differences matter the most and how they affect tracking, and a number of different ways to improve the tracking are proposed.
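OpenCV's contrib package ships a TLD implementation, so the tracking loop itself can be sketched briefly; the legacy factory name below assumes a recent opencv-contrib-python build (older 3.x builds expose cv2.TrackerTLD_create directly), and the video path is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("thermal_sequence.avi")      # assumed IR video
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)

# TLD lives in the legacy namespace of recent opencv-contrib builds
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)              # tracking + detection + learning
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 2)
    cv2.imshow("TLD", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
```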
5

Patrick, Ryan Stewart. "Surveillance in a Smart Home Environment." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278508516.

6

Moujtahid, Salma. "Exploiting scene context for on-line object tracking in unconstrained environments." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI110/document.

Abstract:
With the increasing need for automated video analysis, visual object tracking has become an important task in computer vision. Object tracking is used in a wide range of applications such as surveillance, human-computer interaction, medical imaging and vehicle navigation. A tracking algorithm in unconstrained environments faces multiple challenges: potential changes in object shape and background, lighting, camera motion, and other adverse acquisition conditions. In this setting, classic methods of background subtraction are inadequate, and more discriminative methods of object detection are needed. Moreover, in generic tracking algorithms, the nature of the object is not known a priori. Thus, off-line learned appearance models for specific types of objects, such as faces or pedestrians, cannot be used. Further, the recent evolution of powerful machine learning techniques has enabled the development of new tracking methods that learn the object appearance in an online manner and adapt to the varying constraints in real time, leading to very robust tracking algorithms that can operate in non-stationary environments to some extent. In this thesis, we start from the observation that different tracking algorithms have different strengths and weaknesses depending on the context. To overcome the varying challenges, we show that combining multiple modalities and tracking algorithms can considerably improve the overall tracking performance in unconstrained environments. More concretely, we first introduce a new tracker selection framework using a spatial and temporal coherence criterion. In this algorithm, multiple independent trackers are combined in a parallel manner, each of them using low-level features based on different complementary visual aspects like colour, texture and shape. By recurrently selecting the most suitable tracker, the overall system can switch rapidly between different tracking algorithms with specific appearance models depending on the changes in the video. In the second contribution, the scene context is introduced to the tracker selection. We designed effective visual features, extracted from the scene context, to characterise the different image conditions and variations. At each point in time, a classifier is trained based on these features to predict the tracker that will perform best under the given scene conditions. We further improved this context-based framework and proposed an extended version, where the individual trackers are changed and the classifier training is optimised. Finally, we started exploring an interesting perspective: the use of a Convolutional Neural Network to automatically learn to extract these scene features directly from the input image and predict the most suitable tracker.
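The selection idea can be caricatured in a few lines: run complementary OpenCV trackers in parallel and let a scene-feature rule, standing in for the trained classifier or CNN, pick which output to trust. The features, the decision rule, and the tracker choice are illustrative assumptions (requires opencv-contrib-python).

```python
import cv2
import numpy as np

def scene_features(frame):
    """Cheap global descriptors standing in for the thesis' scene-context
    features: brightness, contrast, and a blur/sharpness proxy."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return np.array([gray.mean(), gray.std(), cv2.Laplacian(gray, cv2.CV_64F).var()])

def select_tracker(features):
    """Placeholder for the learned classifier: prefer the heavier CSRT tracker
    in bright, sharp scenes and the lighter KCF tracker otherwise."""
    brightness, _, sharpness = features
    return "csrt" if brightness > 90 and sharpness > 50 else "kcf"

cap = cv2.VideoCapture("video.mp4")              # assumed input
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)
trackers = {"kcf": cv2.TrackerKCF_create(), "csrt": cv2.TrackerCSRT_create()}
for t in trackers.values():
    t.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = {name: t.update(frame) for name, t in trackers.items()}
    chosen = select_tracker(scene_features(frame))
    found, box = results[chosen]                 # output of the selected tracker
```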
7

Skjong, Espen, and Stian Aas Nundal. "Tracking objects with fixed-wing UAV using model predictive control and machine vision." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25990.

Abstract:
This thesis describes the development of an object tracking system for unmanned aerial vehicles (UAVs), intended to be used for search and rescue (SAR) missions. The UAV is equipped with a two-axis gimbal system, which houses an infrared (IR) camera used to detect and track objects of interest, and a lower level autopilot. An external computer vision (CV) module is assumed implemented and connected to the object tracking system, providing object positions and velocities to the control system. The realization of the object tracking system includes the design and assembly of the UAV's payload, the design and implementation of a model predictive controller (MPC), embedded in a larger control environment, and the design and implementation of a human machine interface (HMI). The HMI allows remote control of the object tracking system from a ground control station. A toolkit for realizing optimal control problems (OCP), MPC and moving horizon estimators (MHE), called ACADO, is used. To gain real-time communication between all system modules, an asynchronous multi-threaded running environment, with interfaces to external HMIs, the CV module, the autopilot and external control systems, was implemented. In addition to the IR camera, a color still camera is mounted in the payload, intended for capturing high definition images of objects of interest and relaying the images to the operator on the ground. By using the center of the IR camera image projected down on earth, together with the UAV's and the objects' positions, the MPC is used to calculate way-points, path planning for the UAV, and gimbal attitude, which are used as control actions to the autopilot and the gimbal. Communication between the control system and the autopilot is handled by DUNE. If multiple objects are located and are to be tracked, the control system utilizes an object selection algorithm that determines which object to track depending on the distance between the UAV and each object. If multiple objects are clustered together, the object selection algorithm can choose to track all the clustered objects simultaneously. The object selection algorithm features dynamic object clustering, which is capable of tracking multiple moving objects. The system was tested in simulations, where suitable ACADO parameters were found through experimentation. Important requirements for the ACADO parameters are smooth gimbal control, an efficient UAV path and acceptable time consumption. The implemented HMI gives the operator access to live camera streams, the ability to alter system parameters and manually control the gimbal. The object tracking system was tested using hardware-in-loop (HIL) testing, and the results were encouraging. During the first flight of the UAV, without the payload on-board, the autopilot exhibited erroneous behavior and the UAV was grounded. A solution to the problem was not found in time to conduct any further flight tests during this thesis. A prototype for a three-axis stabilized brushless gimbal was designed and 3D printed. This was a result of the two-axis gimbal system's limited stabilization capabilities, small range of movement and seemingly fragile construction. Out of a suspected need for damping to improve image quality from the still camera, the process of designing and prototyping a wire vibration isolator camera mount was started. Further work and testing is required to realize both the gimbal and the dampened camera mount.
The lack of flight tests prohibited the completion of the object tracking system.
Keywords: object tracking system, unmanned aerial vehicle (UAV), search and rescue, two-axis gimbal system, infrared (IR) camera, computer vision (CV), model predictive control (MPC), control environment, human machine interface (HMI), remote control, ground control, ACADO, real-time, asynchronous multi-threaded running environment, way-point, path planning, DUNE, dynamic object clustering, multiple moving objects, hardware-in-loop (HIL), three-axis stabilized brushless gimbal, wire vibration isolator
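The distance-based object selection with clustering described above can be sketched as follows; the coordinate convention, the clustering radius, and the greedy grouping are illustrative assumptions, not the thesis' actual algorithm.

```python
import numpy as np

def select_targets(uav_pos, objects, cluster_radius=30.0):
    """Pick the object (or cluster of objects) nearest to the UAV.
    `uav_pos` and `objects` are 2-D north/east coordinates in metres;
    the clustering radius is an illustrative value, not from the thesis."""
    objects = np.asarray(objects, dtype=float)
    dists = np.linalg.norm(objects - np.asarray(uav_pos, dtype=float), axis=1)
    nearest = objects[np.argmin(dists)]
    # Group every object within the radius of the nearest one into a cluster
    cluster = objects[np.linalg.norm(objects - nearest, axis=1) < cluster_radius]
    return cluster.mean(axis=0), cluster         # tracking reference point, members

# usage: centre, members = select_targets((0.0, 0.0), [(120, 40), (125, 45), (400, -80)])
```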
8

Adeboye, Taiyelolu. "Robot Goalkeeper : A robotic goalkeeper based on machine vision and motor control." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27561.

Abstract:
This report shows a robust and efficient implementation of a speed-optimized algorithm for object recognition, 3D real-world location and tracking in real time. It details a design focused on detecting and following objects in flight, as applied to a football in motion. An overall goal of the design was to develop a system capable of recognizing an object and its present and near-future location while also actuating a robotic arm in response to the motion of the ball in flight. The implementation made use of image processing functions in C++, an NVIDIA Jetson TX1, and Stereolabs' ZED stereoscopic camera setup connected to an embedded system controller for the robot arm. The image processing was done with a textured background, and the 3D location coordinates were applied to the correction of a Kalman filter model that was used for estimating and predicting the ball location. A capture and processing speed of 59.4 frames per second was obtained with good accuracy in depth detection, while the ball was well tracked in the tests carried out.
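The Kalman correction-and-prediction step for the ball can be sketched with OpenCV's KalmanFilter over a 3-D constant-velocity state; the noise covariances and the look-ahead horizon below are assumptions, only the frame rate is taken from the abstract.

```python
import cv2
import numpy as np

dt = 1.0 / 59.4                       # frame period reported in the thesis
kf = cv2.KalmanFilter(6, 3)           # state: x, y, z and their velocities
kf.transitionMatrix = np.eye(6, dtype=np.float32)
for i in range(3):
    kf.transitionMatrix[i, i + 3] = dt
kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2
kf.errorCovPost = np.eye(6, dtype=np.float32)

def update_and_predict(xyz, steps_ahead=10):
    """Correct the filter with a new 3-D ball position from the stereo camera
    and extrapolate `steps_ahead` frames for the robot arm to aim at."""
    kf.predict()
    kf.correct(np.asarray(xyz, dtype=np.float32).reshape(3, 1))
    state = kf.statePost.ravel()
    pos, vel = state[:3], state[3:]
    return pos + steps_ahead * dt * vel
```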
9

Barkman, Richard Dan William. "Object Tracking Achieved by Implementing Predictive Methods with Static Object Detectors Trained on the Single Shot Detector Inception V2 Network." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-73313.

Abstract:
In this work, the possibility of realising object tracking by implementing predictive methods with static object detectors is explored. The static object detectors are obtained as models trained on a machine learning algorithm, or in other words, a deep neural network. Specifically, it is the single shot detector inception v2 network that will be used to train such models. Predictive methods will be incorporated to the end of improving the obtained models' precision, i.e. their performance with respect to accuracy. Namely, Lagrangian mechanics will be employed to derive equations of motion for three different scenarios in which the object is to be tracked. These equations of motion will be implemented as predictive methods by discretising and combining them with four different iterative formulae. In ch. 1, the fundamentals of supervised machine learning, neural networks, convolutional neural networks as well as the workings of the single shot detector algorithm, approaches to hyperparameter optimisation and other relevant theory are established. This includes derivations of the relevant equations of motion and the iterative formulae with which they were implemented. In ch. 2, the experimental set-up that was utilised during data collection, and the manner by which the acquired data was used to produce training, validation and test datasets, is described. This is followed by a description of how the approach of random search was used to train 64 models on 300×300 datasets, and 32 models on 512×512 datasets. Subsequently, these models are evaluated based on their performance with respect to camera-to-object distance and object velocity. In ch. 3, the trained models were verified to possess multi-scale detection capabilities, as is characteristic of models trained on the single shot detector network. While this is found to be true irrespective of the resolution setting of the dataset that the model has been trained on, the performance with respect to varying object velocity is significantly more consistent for the lower-resolution models as they operate at a higher detection rate. Ch. 3 continues with an evaluation of the implemented predictive methods. This is done by comparing the resulting deviations when they are used to predict the missing data points from a collected detection pattern, with varying sampling percentages. It is found that the best predictive methods are those that make use of the least amount of previous data points. This follows from the fact that the data upon which the evaluations were made contained an unreasonable amount of noise, considering that the iterative formulae implemented do not take noise into account. Moreover, the lower-resolution models were found to benefit more than those trained on the higher-resolution datasets because of the higher detection frequency they can employ. In ch. 4, it is argued that the concept of combining predictive methods with static object detectors to the end of obtaining an object tracker is promising. Moreover, the models obtained on the single shot detector network are concluded to be good candidates for such applications. However, the predictive methods studied in this thesis should be replaced with some method that can account for noise, or be extended to be able to account for it.
A notable finding is that the single shot detector inception v2 models trained on a low-resolution dataset outperform those trained on a high-resolution dataset in certain regards, owing to the higher detection rate possible on lower-resolution frames; namely, in performance with respect to object velocity and in that the predictive methods performed better on the low-resolution models.
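The flavour of the discretised predictive methods can be conveyed by simple finite-difference extrapolation of the detected object centre; this is a stand-in for the thesis' Lagrangian-derived iterative formulae, not a reproduction of them.

```python
import numpy as np

def predict_next(centres, order=2):
    """Extrapolate the next object centre from the most recent detections using
    finite differences: constant velocity for order=1, constant acceleration
    for order=2 (a simplification of the thesis' iterative formulae)."""
    c = np.asarray(centres, dtype=float)
    if order == 1 or len(c) < 3:
        return 2 * c[-1] - c[-2]                  # x_{k+1} = 2 x_k - x_{k-1}
    return 3 * c[-1] - 3 * c[-2] + c[-3]          # second-order extrapolation

# usage: predict_next([(100, 80), (104, 82), (109, 85)])
```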
10

Ozertem, Kemal Arda. "Vision-assisted Object Tracking." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614073/index.pdf.

Abstract:
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection; for this, two different background modeling methods are developed. The second part is feature extraction and estimation of optical flow between video frames. As the feature extraction method, a well-known corner detector algorithm is employed, and this extraction is applied only at the moving regions in the scene. For the feature points, the optical flow vectors are calculated by using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to provide the tracking process. However, the particle filter is improved by adding optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle-filter-based trackers. Based on the simulation results in this study, it can be argued that insertion of vision-based optical flow estimation into the tracking formulation improves the overall performance.
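The corner-plus-optical-flow stage maps closely onto standard OpenCV calls; in the sketch below, MOG2 background subtraction and plain pyramidal Lucas-Kanade stand in for the thesis' own background models and improved KLT, and the input file is a placeholder.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")                  # assumed input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
subtractor = cv2.createBackgroundSubtractorMOG2()    # stand-in background model
prev_mask = subtractor.apply(prev)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = subtractor.apply(frame)
    # Extract corners only inside the moving regions of the previous frame
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=prev_mask)
    if corners is not None:
        # Pyramidal Lucas-Kanade flow for the selected feature points
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
        flow = (nxt - corners)[status.ravel() == 1]  # per-feature optical flow vectors
    prev_gray, prev_mask = gray, mask
```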

Books on the topic "Machine vision; Object tracking"

1

IEEE Workshop on Multi-Object Tracking (2001 Vancouver, B.C.). Proceedings: 2001 IEEE workshop on multi-object tracking : July 8, 2001, Vancouver, British Columbia, Canada. Los Alamitos, Calif: IEEE Computer Society, 2001.

2

Video segmentation and its applications. New York: Springer, 2011.

3

2001 IEEE Workshop on Multi-Object Tracking: Proceedings. Institute of Electrical & Electronics Enginee, 2001.

4

Taylor, Geoffrey, and Lindsay Kleeman. Visual Perception and Robotic Manipulation: 3D Object Recognition, Tracking and Hand-Eye Coordination. Springer, 2014.

5

Tarr, Michael J., and Heinrich H. Bülthoff, eds. Object recognition in man, monkey, and machine. Cambridge, Mass.: MIT Press, 1998.

6

Loui, Alexander Chan Pong. A morphological approach to moving-object recognition with application to machine vision. 1990.

7

Visual Perception and Robotic Manipulation: 3D Object Recognition, Tracking and Hand-Eye Coordination (Springer Tracts in Advanced Robotics). Springer, 2006.

8

Chakraborty, Shouvik, and Kalyani Mali. Applications of Advanced Machine Intelligence in Computer Vision and Object Recognition: Emerging Research and Opportunities. IGI Global, 2020.


Book chapters on the topic "Machine vision; Object tracking"

1

Roberti, Flavio, Juan Marcos Toibero, Jorge A. Sarapura, Víctor Andaluz, Ricardo Carelli, and José María Sebastián. "Unified Passivity-Based Visual Control for Moving Object Tracking." In Machine Vision and Navigation, 347–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22587-2_12.

2

Nguyen, Hien Van, Amit Banerjee, Philippe Burlina, Joshua Broadwater, and Rama Chellappa. "Tracking and Identification via Object Reflectance Using a Hyperspectral Video Camera." In Machine Vision Beyond Visible Spectrum, 201–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-11568-4_9.

3

Bhattacharya, Subhabrata, Haroon Idrees, Imran Saleemi, Saad Ali, and Mubarak Shah. "Moving Object Detection and Tracking in Forward Looking Infra-Red Aerial Imagery." In Machine Vision Beyond Visible Spectrum, 221–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-11568-4_10.

4

Schlemmer, Matthias J., Georg Biegelbauer, and Markus Vincze. "An Integration Concept for Vision-Based Object Handling: Shape-Capture, Detection and Tracking." In Advances in Machine Vision, Image Processing, and Pattern Analysis, 215–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821045_23.

5

Sánchez-Nielsen, Elena, and Mario Hernández-Tejera. "Tracking Deformable Objects with Evolving Templates for Real-Time Machine Vision." In Pattern Recognition, Machine Intelligence and Biometrics, 213–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22407-2_9.

6

Verghese, Gilbert, Karey Lynch Gale, and Charles R. Dyer. "Real-Time, Parallel Motion Tracking of Three Dimensional Objects From Spatiotemporal Sequences." In Parallel Algorithms for Machine Intelligence and Vision, 310–39. New York, NY: Springer New York, 1990. http://dx.doi.org/10.1007/978-1-4612-3390-9_9.

7

Bourezak, Rafik, and Guillaume-Alexandre Bilodeau. "Iterative Division and Correlograms for Detection and Tracking of Moving Objects." In Advances in Machine Vision, Image Processing, and Pattern Analysis, 46–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821045_5.

8

Davies, E. Roy. "Object Location Using the HOUGH Transform." In Machine Vision Handbook, 773–800. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-84996-169-1_18.

9

Batchelor, Bruce G. "Appendix F: Object Location and Orientation." In Machine Vision Handbook, 2063–85. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-84996-169-1_47.

10

Zhang, Geng, Zejian Yuan, Nanning Zheng, Xingdong Sheng, and Tie Liu. "Visual Saliency Based Object Tracking." In Computer Vision – ACCV 2009, 193–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12304-7_19.


Conference papers on the topic "Machine vision; Object tracking"

1

Zheng, Feng, Ling Shao, and James Brownjohn. "Learn++ for Robust Object Tracking." In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.28.

2

Yan, Wang, Xiaoye Han, and Vladimir Pavlovic. "Structured Learning for Multiple Object Tracking." In British Machine Vision Conference 2012. British Machine Vision Association, 2012. http://dx.doi.org/10.5244/c.26.48.

3

Milletari, Fausto, Wadim Kehl, Federico Tombari, Slobodan Ilic, Seyed-Ahmad Ahmadi, and Nassir Navab. "Universal Hough dictionaries for object tracking." In British Machine Vision Conference 2015. British Machine Vision Association, 2015. http://dx.doi.org/10.5244/c.29.122.

4

Donoser, M., and H. Bischof. "Fast Non-Rigid Object Boundary Tracking." In British Machine Vision Conference 2008. British Machine Vision Association, 2008. http://dx.doi.org/10.5244/c.22.1.

5

Masson, L., F. Jurie, and M. Dhome. "Tracking 3D Object using Flexible Models." In British Machine Vision Conference 2005. British Machine Vision Association, 2005. http://dx.doi.org/10.5244/c.19.37.

6

Gaidon, Adrien, and Eleonora Vig. "Online Domain Adaptation for Multi-Object Tracking." In British Machine Vision Conference 2015. British Machine Vision Association, 2015. http://dx.doi.org/10.5244/c.29.3.

7

Pan, Jinshan, Jongwoo Lim, and Ming-Hsuan Yang. "L0-Regularized Object Representation for Visual Tracking." In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.29.

8

Javan Roshtkhari, Mehrsan, and Martin Levine. "Multiple Object Tracking Using Local Motion Patterns." In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.92.

9

Luo, Wenhan, and Tae-Kyun Kim. "Generic Object Crowd Tracking by Multi-Task Learning." In British Machine Vision Conference 2013. British Machine Vision Association, 2013. http://dx.doi.org/10.5244/c.27.73.

10

Ye, Mingquan, Hong Chang, and Xilin Chen. "Online Visual Tracking via Coupled Object-Context Dictionary." In British Machine Vision Conference 2015. British Machine Vision Association, 2015. http://dx.doi.org/10.5244/c.29.165.


Reports on the topic "Machine vision; Object tracking"

1

Stamler, Zachary. Methods for Object Tracking With Machine Vision. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7507.
