Academic literature on the topic 'Visual tracking'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual tracking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Visual tracking"

1

Zang, Chuantao, Yoshihide Endo, and Koichi Hashimoto. "2P1-D20 GPU accelerating visual tracking." Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec) 2010 (2010): _2P1-D20_1–_2P1-D20_4. http://dx.doi.org/10.1299/jsmermd.2010._2p1-d20_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Roberts, J., and D. Charnley. "Parallel Visual Tracking." IFAC Proceedings Volumes 26, no. 1 (April 1993): 127–32. http://dx.doi.org/10.1016/s1474-6670(17)49287-1.

3

Yuan, Heng, Wen-Tao Jiang, Wan-Jun Liu, and Sheng-Chong Zhang. "Visual node prediction for visual tracking." Multimedia Systems 25, no. 3 (January 30, 2019): 263–72. http://dx.doi.org/10.1007/s00530-019-00603-1.

4

Lou, Jianguang, Tieniu Tan, and Weiming Hu. "Visual vehicle tracking algorithm." Electronics Letters 38, no. 18 (2002): 1024. http://dx.doi.org/10.1049/el:20020692.

5

Yang, Ming, Ying Wu, and Gang Hua. "Context-Aware Visual Tracking." IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 7 (July 2009): 1195–209. http://dx.doi.org/10.1109/tpami.2008.146.

6

Zhang, Lei, Yanjie Wang, Honghai Sun, Zhijun Yao, and Shuwen He. "Robust Visual Correlation Tracking." Mathematical Problems in Engineering 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/238971.

Abstract:
Recent years have seen growing interest in tracking-by-detection methods for visual object tracking because of their excellent tracking performance. But most existing methods fix the scale, which leaves the trackers unable to handle large scale variations in complex scenes. In this paper, we decompose tracking into target translation and scale prediction. We adopt a scale estimation approach based on the tracking-by-detection framework, develop a new model update scheme, and present a robust correlation tracking algorithm with discriminative correlation filters. The approach works by learning the translation and scale correlation filters. We obtain the target translation and scale by finding the maximum output response of the learned correlation filters and then update the target models online. Extensive experimental results on 12 challenging benchmark sequences show that the proposed tracking approach reduces the average center location error (CLE) by 6.8 pixels, significantly improves the performance by 17.5% in average success rate (SR) and by 5.4% in average distance precision (DP) compared to the second best of five other excellent existing tracking algorithms, and is robust to appearance variations introduced by scale variations, pose variations, illumination changes, partial occlusion, fast motion, rotation, and background clutter.
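As a hedged illustration of the mechanism this abstract describes, a correlation filter can be learned and applied in a few lines of NumPy. This is a generic MOSSE/DCF-style sketch under assumed inputs, not the authors' implementation; the regularizer `lam` and the patch sizes are illustrative.

```python
import numpy as np

def learn_filter(patch, desired_response, lam=1e-2):
    # Closed-form correlation filter in the Fourier domain: the filter
    # maps the training patch to the desired response. lam is a small
    # regularizer that keeps the division well conditioned.
    F = np.fft.fft2(patch)
    G = np.fft.fft2(desired_response)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(filt, patch):
    # "Finding the maximum output response of the learned correlation
    # filters": the argmax of the response map gives the translation.
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * filt))
    return np.unravel_index(np.argmax(response), response.shape)
```

A scale filter follows the same recipe, applied to a pyramid of resampled patches rather than to shifted positions.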
7

Roberts, J. M., and D. Charnley. "Parallel attentive visual tracking." Engineering Applications of Artificial Intelligence 7, no. 2 (April 1994): 205–15. http://dx.doi.org/10.1016/0952-1976(94)90024-8.

8

Wang, Hesheng, Yun-Hui Liu, and Weidong Chen. "Uncalibrated Visual Tracking Control Without Visual Velocity." IEEE Transactions on Control Systems Technology 18, no. 6 (November 2010): 1359–70. http://dx.doi.org/10.1109/tcst.2010.2041457.

9

Shi, Liangtao, Bineng Zhong, Qihua Liang, Ning Li, Shengping Zhang, and Xianxian Li. "Explicit Visual Prompts for Visual Object Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4838–46. http://dx.doi.org/10.1609/aaai.v38i5.28286.

Abstract:
How to effectively exploit spatio-temporal information is crucial to capture target appearance changes in visual tracking. However, most deep learning-based trackers mainly focus on designing a complicated appearance model or template updating strategy, while lacking the exploitation of context between consecutive frames and thus entailing the when-and-how-to-update dilemma. To address these issues, we propose a novel explicit visual prompts framework for visual tracking, dubbed EVPTrack. Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates. As a result, we can not only alleviate the challenge of when-to-update, but also avoid the hyper-parameters associated with updating strategies. Then, we utilize the spatio-temporal tokens to generate explicit visual prompts that facilitate inference in the current frame. The prompts are fed into a transformer encoder together with the image tokens without additional processing. Consequently, the efficiency of our model is improved by avoiding how-to-update. In addition, we consider multi-scale information as explicit visual prompts, providing multi-scale template features to enhance EVPTrack's ability to handle target scale changes. Extensive experimental results on six benchmarks (i.e., LaSOT, LaSOText, GOT-10k, UAV123, TrackingNet, and TNL2K) validate that our EVPTrack can achieve competitive performance at real-time speed by effectively exploiting both spatio-temporal and multi-scale information. Code and models are available at https://github.com/GXNU-ZhongLab/EVPTrack.
10

Zhang, Yue, Huibin Lu, and Xingwang Du. "ROAM-based visual tracking method." Journal of Physics: Conference Series 1732 (January 2021): 012064. http://dx.doi.org/10.1088/1742-6596/1732/1/012064.


Dissertations / Theses on the topic "Visual tracking"

1

Danelljan, Martin. "Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105659.

Abstract:
Visual tracking is a classical computer vision problem with many important applications in areas such as robotics, surveillance and driver assistance. The task is to follow a target in an image sequence. The target can be any object of interest, for example a human, a car or a football. Humans perform accurate visual tracking with little effort, while it remains a difficult computer vision problem. It imposes major challenges, such as appearance changes, occlusions and background clutter. Visual tracking is thus an open research topic, but significant progress has been made in the last few years. The first part of this thesis explores generic tracking, where nothing is known about the target except for its initial location in the sequence. A specific family of generic trackers that exploit the FFT for faster tracking-by-detection is studied. Among these, the CSK tracker has recently been shown to obtain competitive performance at extraordinarily low computational cost. Three contributions are made to this type of tracker. Firstly, a new method for learning the target appearance is proposed and shown to outperform the original method. Secondly, different color descriptors are investigated for the tracking purpose. Evaluations show that the best descriptor greatly improves the tracking performance. Thirdly, an adaptive dimensionality reduction technique is proposed, which adaptively chooses the most important feature combinations to use. This technique significantly reduces the computational cost of the tracking task. Extensive evaluations show that the proposed tracker outperforms state-of-the-art methods in the literature, while operating at a several times higher frame rate. In the second part of this thesis, the proposed generic tracking method is applied to human tracking in surveillance applications. A causal framework is constructed that automatically detects and tracks humans in the scene. 
The system fuses information from generic tracking and state-of-the-art object detection in a Bayesian filtering framework. In addition, the system incorporates the identification and tracking of specific human parts to achieve better robustness and performance. Tracking results are demonstrated on a real-world benchmark sequence.
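The adaptive dimensionality reduction mentioned in this abstract keeps only the most informative feature combinations, which in spirit is a PCA-style projection. Below is a minimal sketch under that assumption, not the thesis's exact algorithm; the shapes and dimension counts are illustrative.

```python
import numpy as np

def reduce_features(feature_map, num_dims):
    # Project each pixel's D-dimensional feature vector onto the
    # num_dims directions of largest variance, discarding the least
    # informative feature combinations.
    h, w, d = feature_map.shape
    flat = feature_map.reshape(-1, d)
    centered = flat - flat.mean(axis=0)
    # Right singular vectors = principal directions, variance-ordered.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[:num_dims].T).reshape(h, w, num_dims)
```

Shrinking the per-pixel feature dimension directly shrinks the FFTs the tracker must compute, which is where the speedup comes from.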
2

Wessler, Mike. "A modular visual tracking system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11459.

3

Klein, Georg. "Visual tracking for augmented reality." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614262.

4

Salama, Gouda Ismail Mohamed. "Monocular and Binocular Visual Tracking." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/37179.

Abstract:
Visual tracking is one of the most important applications of computer vision. Several tracking systems have been developed which either focus mainly on the tracking of targets moving on a plane, or attempt to reduce the 3-dimensional tracking problem to the tracking of a set of characteristic points of the target. These approaches are seriously handicapped in complex visual situations, particularly those involving significant perspective, textures, repeating patterns, or occlusion. This dissertation describes a new approach to visual tracking for monocular and binocular image sequences, and for both passive and active cameras. The method combines Kalman-type prediction with steepest-descent search for correspondences, using 2-dimensional affine mappings between images. This approach differs significantly from many recent tracking systems, which emphasize the recovery of 3-dimensional motion and/or structure of objects in the scene. We argue that 2-dimensional area-based matching is sufficient in many situations of interest, and we present experimental results with real image sequences to illustrate the efficacy of this approach. Image matching between two images is a simple one-to-one mapping if there is no occlusion. In the presence of occlusion, wrong matching is inevitable, and few approaches have been developed to address this issue. This dissertation considers the effect of occlusion on tracking a moving object for both monocular and binocular image sequences. The visual tracking system described here attempts to detect occlusion based on the residual error computed by the matching method. If the residual matching error exceeds a user-defined threshold, the tracked object may be occluded by another object. When occlusion is detected, tracking continues with the predicted locations based on Kalman filtering, which serves as a predictor of the target position until the target reemerges from the occlusion. 
Although the method uses constant-image-velocity Kalman filtering, it has been shown to function reasonably well in non-constant-velocity situations. Experimental results show that tracking can be maintained during periods of substantial occlusion. The area-based approach to image matching often involves correlation-based comparisons between images, and this requires the specification of a size for the correlation windows. Accordingly, a new approach based on moment invariants was developed to select the window size adaptively. This approach is based on sudden increases or decreases in the first Maitra moment invariant. We applied a robust regression model to smooth the first Maitra moment invariant and make the method robust against noise. This dissertation also considers the effect of spatial quantization on several moment invariants. Of particular interest are the affine moment invariants, which have emerged in recent years as a useful tool for image reconstruction, image registration, and recognition of deformed objects. Traditional analysis assumes moments and moment invariants for images that are defined in the continuous domain. Quantization of the image plane is necessary, because otherwise the image cannot be processed digitally. Image acquisition by a digital system imposes spatial and intensity quantization that, in turn, introduce errors into moment and invariant computations. This dissertation also derives expressions for quantization-induced error in several important cases. Although it considers spatial quantization only, this represents an important extension of work by other researchers. A mathematical theory for a visual tracking approach to a moving object is presented in this dissertation. This approach can track a moving object in an image sequence whether the camera is passive or actively controlled. The algorithm used here is computationally cheap and suitable for real-time implementation. 
We implemented the proposed method on an active vision system and carried out experiments of monocular and binocular tracking for various kinds of objects in different environments. These experiments demonstrated very good performance using real images in fairly complicated situations.
Ph. D.
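The occlusion rule this abstract describes (coast on the Kalman prediction while the matching residual exceeds a threshold) can be sketched as a 1-D constant-velocity toy model. The noise parameters and threshold below are illustrative assumptions, not the dissertation's values.

```python
import numpy as np

class ConstantVelocityTracker:
    # 1-D constant-image-velocity Kalman filter with the occlusion rule
    # from the abstract: if the matching residual exceeds a threshold,
    # return the prediction instead of updating with the measurement.
    def __init__(self, x0, q=1e-3, r=1e-1):
        self.x = np.array([x0, 0.0])                 # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise

    def step(self, z, residual, threshold=5.0):
        # Predict one frame ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if residual > threshold:
            return self.x[0]                         # occlusion: keep prediction
        # Otherwise correct with the matched position z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

While the residual stays high the state keeps integrating the last estimated velocity, which is exactly what lets the tracker pick the target up again when it reemerges.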
5

Dehlin, Carl. "Visual Tracking Using Stereo Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153776.

Abstract:
Visual tracking concerns the problem of following an arbitrary object in a video sequence. In this thesis, we examine how to use stereo images to extend existing visual tracking algorithms, which methods exist to obtain information from stereo images, and how the results change as the parameters of each tracker vary. For this purpose, four abstract approaches are identified, with five distinct implementations. Each tracker implementation is an extension of a baseline algorithm, MOSSE. The free parameters of each model are optimized with respect to two different evaluation strategies, called nor- and wir-tests, and four different objective functions, which are then fixed when comparing the models against each other. The results are created on single-target tracks extracted from the KITTI tracking dataset, and the optimization results show that none of the objective functions are sensitive to the exposed parameters under the joint selection of model and dataset. The evaluation results also show that none of the extensions improve the results of the baseline tracker.
6

Salti, Samuele <1982>. "On-line adaptive visual tracking." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3735/1/samuele_salti_tesi.pdf.

Abstract:
Visual tracking is the problem of estimating some variables related to a target given a video sequence depicting the target. Visual tracking is key to the automation of many tasks, such as visual surveillance, robot or vehicle autonomous navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking in real-world scenarios for generic targets is still unaccomplished. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to mutating working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered. In particular, a novel solution for categorization of 3D data is presented. The novel category recognition algorithm is based on a novel 3D descriptor that is shown to achieve state-of-the-art performance in several applications of surface matching.
7

Salti, Samuele <1982>. "On-line adaptive visual tracking." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3735/.

Abstract:
Visual tracking is the problem of estimating some variables related to a target given a video sequence depicting the target. Visual tracking is key to the automation of many tasks, such as visual surveillance, robot or vehicle autonomous navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking in real-world scenarios for generic targets is still unaccomplished. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to mutating working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered. In particular, a novel solution for categorization of 3D data is presented. The novel category recognition algorithm is based on a novel 3D descriptor that is shown to achieve state-of-the-art performance in several applications of surface matching.
8

Delabarre, Bertrand. "Contributions to dense visual tracking and visual servoing using robust similarity criteria." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S124/document.

Abstract:
In this document, we address the visual tracking and visual servoing problems. They are crucial thematics in the domain of computer and robot vision. Most of these techniques use geometrical primitives extracted from the images in order to estimate motion from an image sequence. But using geometrical features means having to extract and match them at each new image before performing the tracking or servoing process. In order to get rid of this algorithmic step, recent approaches have proposed to use directly the information provided by the whole image instead of extracting geometrical primitives. Most of these algorithms, referred to as direct techniques, are based on the luminance values of every pixel in the image. But this strategy limits their use, since the criterion is very sensitive to scene perturbations such as luminosity shifts or occlusions. To overcome this problem, we propose in this document to use robust similarity measures, the sum of conditional variance and the mutual information, in order to perform robust direct visual tracking and visual servoing. Several algorithms based on these criteria are then proposed in order to be robust to scene perturbations. These different methods are tested and analyzed in several setups where perturbations occur, which demonstrates their efficiency.
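Of the two similarity measures named in this abstract, mutual information is the easier to sketch: it can be estimated from the joint intensity histogram of the two patches being compared. A minimal NumPy version follows; the bin count is an illustrative assumption, not a value from the thesis.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    # Mutual information between two equally sized grayscale patches,
    # estimated from their joint intensity histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                      # joint distribution
    px = p.sum(axis=1, keepdims=True)            # marginal of a
    py = p.sum(axis=0, keepdims=True)            # marginal of b
    nz = p > 0                                   # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A direct tracker built on this score searches for the warp parameters that maximize it, rather than minimizing a plain sum of squared differences, which is what makes it robust to illumination shifts.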
9

Arslan, Ali Erkin. "Visual Tracking With Group Motion Approach." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/4/1056100/index.pdf.

Abstract:
An algorithm for tracking single visual targets is developed in this study. Feature detection is the necessary and appropriate image processing technique for this algorithm. The main point of this approach is to use the data supplied by the feature detection as the observation from a group of targets having similar motion dynamics. Therefore a single visual target is regarded as a group of multiple targets. Accurate data association and state estimation under clutter are desired for this application similar to other multi-target tracking applications. The group tracking approach is used with the well-known probabilistic data association technique to cope with data association and estimation problems. The applicability of this method particularly for visual tracking and for other cases is also discussed.
10

Zhu, Biwen. "Visual Tracking with Deep Learning : Automatic tracking of farm animals." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240086.

Abstract:
Automatic tracking and video surveillance on a farm could help to support farm management. In this project, an automated detection system is used to detect sows in surveillance videos. This system is based upon deep learning and computer vision methods. In order to minimize disk storage and to meet the network requirements necessary to achieve real-time performance, tracking in compressed video streams is essential. The proposed system uses a Discriminative Correlation Filter (DCF) as a classifier to detect targets. The tracking model is updated by training the classifier with online learning methods. Compression technology encodes the video data, reducing the bit rates at which video signals are transmitted and helping video transmission better adapt to limited network bandwidth. However, compression may reduce the image quality of the videos, and the precision of our tracking may decrease. Hence, we conducted a performance evaluation of existing visual tracking algorithms on video sequences with quality degradation due to various compression parameters (encoders, target bitrate, rate control model, and Group of Pictures (GOP) size). The ultimate goal of video compression is to realize a tracking system with equal performance, but requiring fewer network resources. The proposed tracking algorithm successfully tracks each sow in consecutive frames in most cases. The performance of our tracker was benchmarked against two state-of-the-art tracking algorithms: Siamese Fully-Convolutional (FC) and Efficient Convolution Operators (ECO). The performance evaluation results show our proposed tracker has performance similar to both Siamese FC and ECO. In comparison with the original tracker, the proposed tracker achieved similar tracking performance, while requiring much less storage and generating a lower bitrate when the video was compressed with appropriate parameters. 
However, the system is far slower than needed for real-time tracking due to high computational complexity; therefore, more optimal methods to update the tracking model will be needed to achieve real-time tracking.
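The online model update this abstract mentions is, in DCF-style trackers, typically a fixed-rate linear interpolation of the filter learned on the current frame into the running model. A generic sketch, with a purely illustrative learning rate:

```python
import numpy as np

def update_model(model, frame_estimate, lr=0.025):
    # Exponential moving average of the tracking model: a small lr
    # keeps the tracker stable, a larger lr adapts faster to
    # appearance changes.
    return (1.0 - lr) * model + lr * frame_estimate
```

The learning rate trades robustness against drift for adaptivity, which is one of the parameters such an evaluation has to fix.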

Books on the topic "Visual tracking"

1

Lu, Huchuan, and Dong Wang. Online Visual Tracking. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-0469-9.

2

Panin, Giorgio. Model-Based Visual Tracking. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2011. http://dx.doi.org/10.1002/9780470943922.

3

MacCormick, John. Stochastic Algorithms for Visual Tracking. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0679-1.

4

Panin, Giorgio. Model-based visual tracking: The OpenTL framework. Hoboken, N.J: Wiley, 2011.

5

Xing, Weiwei, Weibin Liu, Jun Wang, Shunli Zhang, Lihui Wang, Yuxiang Yang, and Bowen Song. Visual Object Tracking from Correlation Filter to Deep Learning. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6242-3.

6

Campbell, Philip E. Visual analysis of a radio frequency tracking system for virtual environments. Monterey, Calif: Naval Postgraduate School, 1999.

7

Eye tracking methodology: Theory and practice. 2nd ed. London: Springer, 2007.

8

Essig, Kai. Vision-based image retrieval (VBIR): A new eye-tracking based approach to efficient and intuitive image retrieval. Saarbrücken: VDM Verlag Dr. Müller, 2008.

9

Lee, Vincent C. E. Eye mouse system: Mouse control technique for detecting and tracking of eyes in visual images. Manchester: UMIST, 1997.

10

Gamito, Pedro Santos Pinto, and Pedro Joel Rosa. I see me, you see me: Inferring cognitive and emotional processes from gazing behaviour. Newcastle Upon Tyne: Cambridge Scholars Publishing, 2014.


Book chapters on the topic "Visual tracking"

1

Clark, Uraina. "Visual Tracking." In Encyclopedia of Clinical Neuropsychology, 2645–47. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_1415.

2

Clark, Uraina. "Visual Tracking." In Encyclopedia of Clinical Neuropsychology, 1–3. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_1415-2.

3

Clark, Uraina S. "Visual Tracking." In Encyclopedia of Clinical Neuropsychology, 3642–44. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_1415.

4

Marchand, Eric. "Visual Tracking." In Encyclopedia of Robotics, 1–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-41610-1_102-1.

5

Duchowski, Andrew T. "Visual Attention." In Eye Tracking Methodology, 3–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57883-5_1.

6

Duchowski, Andrew T. "Visual Psychophysics." In Eye Tracking Methodology, 29–38. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57883-5_3.

7

Lu, Huchuan, and Dong Wang. "Correlation Tracking." In Online Visual Tracking, 87–100. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-0469-9_6.

8

Lu, Huchuan, and Dong Wang. "Tracking by Segmentation." In Online Visual Tracking, 61–85. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-0469-9_5.

9

Lu, Huchuan, and Dong Wang. "Introduction to Visual Tracking." In Online Visual Tracking, 1–10. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-0469-9_1.

10

Lu, Huchuan, and Dong Wang. "Visual Tracking Based on Sparse Representation." In Online Visual Tracking, 11–25. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-0469-9_2.


Conference papers on the topic "Visual tracking"

1

Roberts, J., and D. Charnley. "Attentive Visual Tracking." In British Machine Vision Conference 1993. British Machine Vision Association, 1993. http://dx.doi.org/10.5244/c.7.46.

2. Wavering, Albert J., and Ronald Lumia. "Predictive Visual Tracking." In Optical Tools for Manufacturing and Advanced Automation, edited by David P. Casasent. SPIE, 1993. http://dx.doi.org/10.1117/12.150188.

3. Wang, Shu, Huchuan Lu, and Guang Yang. "Complementary Visual Tracking." In 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6116555.

4. Kwon, Junseok, and Kyoung Mu Lee. "Visual Tracking Decomposition." In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2010. http://dx.doi.org/10.1109/cvpr.2010.5539821.

5. "Advanced Visual Tracking." In Third IEEE and ACM International Symposium on Mixed and Augmented Reality. IEEE, 2004. http://dx.doi.org/10.1109/ismar.2004.10.

6. Wei, Xing, Yifan Bai, Yongchao Zheng, Dahu Shi, and Yihong Gong. "Autoregressive Visual Tracking." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00935.

7. Sarma, Prathusha K., and Tarunraj Singh. "A Mixture Distribution for Visual Foraging." In ETRA '14: Eye Tracking Research and Applications. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2578153.2578210.

8. Du, Ruofei, Eric Lee, and Amitabh Varshney. "Tracking-Tolerant Visual Cryptography." In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019. http://dx.doi.org/10.1109/vr.2019.8797924.

9. Ma, Y., S. Worrall, and A. M. Kondoz. "Depth Assisted Visual Tracking." In 2009 10th Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS). IEEE, 2009. http://dx.doi.org/10.1109/wiamis.2009.5031456.

10. Papanikolopoulos, N., P. K. Khosla, and T. Kanade. "Adaptive Robotic Visual Tracking." In 1991 American Control Conference. IEEE, 1991. http://dx.doi.org/10.23919/acc.1991.4791520.


Reports on the topic "Visual tracking"

1. Basu, Saikat, Malcolm Stagg, Robert DiBiano, Manohar Karki, Supratik Mukhopadhyay, and Jerry Weltman. An Agile Framework for Real-Time Visual Tracking in Videos. Fort Belvoir, VA: Defense Technical Information Center, September 2012. http://dx.doi.org/10.21236/ada581034.

2. Tannenbaum, Allen R. Distributed Systems for Problems in Robust Control and Visual Tracking. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada387787.

3. Bennur, Shubhapriya. Consumers Visual Search Behavior on the Websites: An Eye Tracking Approach. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1467.

4. Tannenbaum, Allen R. Geometric PDE's and Invariants for Problems in Visual Control Tracking and Optimization. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada428955.

5. Chen, J., W. E. Dixon, D. M. Dawson, and V. K. Chitrakaran. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada465705.

6. Dutra, Lauren M., James Nonnemaker, Nathaniel Taylor, Ashley Feld, Brian Bradfield, John Holloway, Edward (Chip) Hill, and Annice Kim. Visual Attention to Tobacco-Related Stimuli in a 3D Virtual Store. RTI Press, May 2020. http://dx.doi.org/10.3768/rtipress.2020.rr.0036.2005.

Abstract:
We used eye tracking to measure visual attention to tobacco products and pro- and anti-tobacco advertisements (pro-ads and anti-ads) during a shopping task in a three-dimensional virtual convenience store. We used eye-tracking hardware to track the percentage of fixations (number of times the eye was essentially stationary; F) and dwell time (time spent looking at an object; DT) for several categories of objects and ads for 30 adult current cigarette smokers. We used Wald F-tests to compare fixations and dwell time across categories, adjusting comparisons of ads by the number of each type of ad. Overall, unadjusted for the number of each object, participants focused significantly greater attention on snacks and drinks and tobacco products than ads (all P<0.005). Adjusting for the number of each type of ad viewed, participants devoted significantly greater visual attention to pro-ads than anti-ads or ads unrelated to tobacco (P<0.001). Visual attention for anti-ads was significantly greater when the ads were placed on the store’s external walls or hung from the ceiling than when placed on the gas pump or floor (P<0.005). In a cluttered convenience store environment, anti-ads at the point of sale have to compete with many other stimuli. Restrictions on tobacco product displays and advertisements at the point of sale could reduce the stimuli that attract smokers’ attention away from anti-ads.
7. D'Amico, Angela, Christopher Kyburg, and Rowena Carlson. Software Tools for Visual and Acoustic Real-Time Tracking of Marine Mammals: Whale Identification and Logging Display (WILD). Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada533470.

8. Nelson, W. T., Robert S. Bolia, Chris A. Russell, Rebecca M. Morley, and Merry M. Roe. Head-Slaved Tracking in a See-Through HMD: The Effects of a Secondary Visual Monitoring Task on Performance and Workload. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada430665.

9. Klobucar, Blaz. Urban Tree Detection in Historical Aerial Imagery of Sweden: A Test in Automated Detection with Open Source Deep Learning Models. Faculty of Landscape Architecture, Horticulture and Crop Production Science, Swedish University of Agricultural Sciences, 2024. http://dx.doi.org/10.54612/a.7kn4q7vikr.

Abstract:
Urban trees are a key component of the urban environment. In Sweden, ambitious goals have been expressed by authorities regarding the retention and increase of urban tree cover, aiming to mitigate climate change and provide a healthy, livable urban environment in a highly contested space. Tracking urban tree cover through remote sensing serves as an indicator of how past urban planning has succeeded in retaining trees as part of the urban fabric, and historical imagery spanning back decades for such analysis is widely available. This short study examines the viability of automated detection using open-source Deep Learning methods for long-term change detection in urban tree cover, aiming to evaluate past practices in urban planning. Results indicate that preprocessing of old imagery is necessary to enhance the detection and segmentation of urban tree cover, as the currently available training models were found to be severely lacking upon visual inspection.
10. Burks, Thomas F., Victor Alchanatis, and Warren Dixon. Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems. United States Department of Agriculture, October 2009. http://dx.doi.org/10.32747/2009.7591739.bard.

Abstract:
The proposed project aims to enhance tree fruit identification and targeting for robotic harvesting through the selection of appropriate sensor technology, sensor fusion, and visual servo-control approaches. These technologies will be applicable for apple, orange and grapefruit harvest, although specific sensor wavelengths may vary. The primary challenges are fruit occlusion, light variability, peel color variation with maturity, range to target, and the computational requirements of image processing algorithms. There are four major development tasks in the original three-year proposed study. First, spectral characteristics in the VIS/NIR (0.4-1.0 micron) will be used in conjunction with thermal data to provide accurate and robust detection of fruit in the tree canopy. Hyper-spectral image pairs will be combined to provide automatic stereo matching for accurate 3D position. Secondly, VIS/NIR/FIR (0.4-15.0 micron) spectral sensor technology will be evaluated for potential in-field on-the-tree grading of surface defect, maturity and size for selective fruit harvest. Thirdly, new adaptive Lyapunov-based HBVS (homography-based visual servo) methods to compensate for camera uncertainty and distortion effects, and to provide range to target from a single camera, will be developed, simulated, and implemented on a camera testbed to prove the concept. HBVS methods coupled with image-space navigation will be implemented to provide robust target tracking. Finally, harvesting tests will be conducted on the developed technologies using the University of Florida harvesting manipulator test bed. During the course of the project it was determined that the second objective was overly ambitious for the project period, and effort was directed toward the other objectives. The results reflect the synergistic efforts of the three principals. The USA team has focused on citrus-based approaches while the Israeli counterpart has focused on apples.
The USA team has improved visual servo control through the use of a statistics-based range estimate and homography. The results have been promising as long as the target is visible. In addition, the USA team has developed improved fruit detection algorithms that are robust under light variation and can localize fruit centers for partially occluded fruit. Additionally, algorithms have been developed to fuse thermal and visible-spectrum images prior to segmentation in order to evaluate the potential improvements in fruit detection. Lastly, the USA team has developed a multispectral detection approach which demonstrated detection of more than 90% of non-occluded fruit. The Israeli team has focused on image registration and statistics-based fruit detection with post-segmentation fusion. The results of all programs have shown significant progress, with increased levels of fruit detection over prior art.
