Journal articles on the topic "Visual tracking"

Consult the top 50 journal articles on the topic "Visual tracking".

Browse journal articles on a wide range of disciplines and organise your bibliography correctly.

1. ZANG, Chuantao, Yoshihide ENDO and Koichi HASHIMOTO. "2P1-D20 GPU accelerating visual tracking". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2010 (2010): _2P1-D20_1–_2P1-D20_4. http://dx.doi.org/10.1299/jsmermd.2010._2p1-d20_1.

2. Roberts, J., and D. Charnley. "Parallel Visual Tracking". IFAC Proceedings Volumes 26, no. 1 (April 1993): 127–32. http://dx.doi.org/10.1016/s1474-6670(17)49287-1.

3. Yuan, Heng, Wen-Tao Jiang, Wan-Jun Liu and Sheng-Chong Zhang. "Visual node prediction for visual tracking". Multimedia Systems 25, no. 3 (January 30, 2019): 263–72. http://dx.doi.org/10.1007/s00530-019-00603-1.

4. Lou, Jianguang, Tieniu Tan and Weiming Hu. "Visual vehicle tracking algorithm". Electronics Letters 38, no. 18 (2002): 1024. http://dx.doi.org/10.1049/el:20020692.

5. Ming Yang, Ying Wu and Gang Hua. "Context-Aware Visual Tracking". IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 7 (July 2009): 1195–209. http://dx.doi.org/10.1109/tpami.2008.146.

6. Zhang, Lei, Yanjie Wang, Honghai Sun, Zhijun Yao and Shuwen He. "Robust Visual Correlation Tracking". Mathematical Problems in Engineering 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/238971.

Abstract:
Recent years have seen greater interest in tracking-by-detection methods in visual object tracking because of their excellent tracking performance. However, most existing methods fix the scale, which makes the trackers unreliable when handling large scale variations in complex scenes. In this paper, we decompose tracking into target translation and scale prediction. We adopt a scale estimation approach based on the tracking-by-detection framework, develop a new model update scheme, and present a robust correlation tracking algorithm with discriminative correlation filters. The approach works by learning the translation and scale correlation filters. We obtain the target translation and scale by finding the maximum output response of the learned correlation filters and then update the target models online. Extensive experimental results on 12 challenging benchmark sequences show that the proposed tracking approach reduces the average center location error (CLE) by 6.8 pixels, significantly improves performance by 17.5% in average success rate (SR) and by 5.4% in average distance precision (DP) compared to the second best of five other excellent existing tracking algorithms, and is robust to appearance variations introduced by scale variations, pose variations, illumination changes, partial occlusion, fast motion, rotation, and background clutter.

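The translation step of a discriminative correlation tracker like the one this abstract describes can be pictured in a few lines of NumPy: learn a filter in the Fourier domain so that correlation with the target patch yields a Gaussian peak, then read the target shift off the peak of the response on the next frame. This is a minimal single-channel MOSSE-style sketch for illustration only (the paper's tracker adds scale filters and a model update scheme); `sigma` and `lam` are assumed hyper-parameters.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Learn H* = (G * conj(F)) / (F * conj(F) + lam) in the Fourier domain."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(np.fft.ifftshift(gaussian_label(*patch.shape, sigma)))
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(H_star, patch):
    """Peak of the filter response gives the target's shift within the patch."""
    response = np.real(np.fft.ifft2(H_star * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    h, w = patch.shape
    # the response wraps around; map large indices to negative shifts
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)
```

In a full tracker this pair of steps runs once per frame, with the filter re-trained (or interpolated) on the newly located patch; a second filter over a scale pyramid handles the scale estimate.
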
7. Roberts, J. M., and D. Charnley. "Parallel attentive visual tracking". Engineering Applications of Artificial Intelligence 7, no. 2 (April 1994): 205–15. http://dx.doi.org/10.1016/0952-1976(94)90024-8.

8. Wang, Hesheng, Yun-Hui Liu and Weidong Chen. "Uncalibrated Visual Tracking Control Without Visual Velocity". IEEE Transactions on Control Systems Technology 18, no. 6 (November 2010): 1359–70. http://dx.doi.org/10.1109/tcst.2010.2041457.

9. Shi, Liangtao, Bineng Zhong, Qihua Liang, Ning Li, Shengping Zhang and Xianxian Li. "Explicit Visual Prompts for Visual Object Tracking". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4838–46. http://dx.doi.org/10.1609/aaai.v38i5.28286.

Abstract:
How to effectively exploit spatio-temporal information is crucial to capturing target appearance changes in visual tracking. However, most deep learning-based trackers mainly focus on designing a complicated appearance model or template updating strategy, while lacking the exploitation of context between consecutive frames and thus entailing the when-and-how-to-update dilemma. To address these issues, we propose a novel explicit visual prompts framework for visual tracking, dubbed EVPTrack. Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates. As a result, we can not only alleviate the challenge of when-to-update, but also avoid the hyper-parameters associated with updating strategies. Then, we utilize the spatio-temporal tokens to generate explicit visual prompts that facilitate inference in the current frame. The prompts are fed into a transformer encoder together with the image tokens without additional processing. Consequently, the efficiency of our model is improved by avoiding how-to-update. In addition, we consider multi-scale information as explicit visual prompts, providing multi-scale template features to enhance EVPTrack's ability to handle target scale changes. Extensive experimental results on six benchmarks (i.e., LaSOT, LaSOText, GOT-10k, UAV123, TrackingNet, and TNL2K) validate that EVPTrack can achieve competitive performance at real-time speed by effectively exploiting both spatio-temporal and multi-scale information. Code and models are available at https://github.com/GXNU-ZhongLab/EVPTrack.

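The mechanism the abstract outlines (prompt tokens concatenated with image tokens and passed through a transformer encoder with no extra processing) can be sketched generically in PyTorch. The module below is an illustrative assumption of that pattern, not the released EVPTrack code; all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Sketch: propagate spatio-temporal prompt tokens alongside image tokens."""
    def __init__(self, dim=256, num_prompts=8, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # spatio-temporal tokens carried across frames (assumption: initialised
        # as learnable parameters, then replaced by each frame's output tokens)
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, dim))

    def forward(self, image_tokens, prev_prompts=None):
        b = image_tokens.size(0)
        prompts = self.prompts.expand(b, -1, -1) if prev_prompts is None else prev_prompts
        x = torch.cat([prompts, image_tokens], dim=1)  # fed jointly, no extra processing
        x = self.encoder(x)
        new_prompts = x[:, :prompts.size(1)]           # carry state to the next frame
        fused_tokens = x[:, prompts.size(1):]          # go to the prediction head
        return fused_tokens, new_prompts
```
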
10. Zhang, Yue, Huibin Lu and Xingwang Du. "ROAM-based visual tracking method". Journal of Physics: Conference Series 1732 (January 2021): 012064. http://dx.doi.org/10.1088/1742-6596/1732/1/012064.

11. BĂNICĂ, Marian Valentin, Anamaria RĂDOI and Petrișor Valentin PÂRVU. "ONBOARD VISUAL TRACKING FOR UAV’S". Scientific Journal of Silesian University of Technology. Series Transport 105 (December 1, 2019): 35–48. http://dx.doi.org/10.20858/sjsutst.2019.105.4.

12. ZHANG, Tiansa, Chunlei HUO, Zhiqiang ZHOU and Bo WANG. "Faster-ADNet for Visual Tracking". IEICE Transactions on Information and Systems E102.D, no. 3 (March 1, 2019): 684–87. http://dx.doi.org/10.1587/transinf.2018edl8214.

13. Wedel, Michel, and Rik Pieters. "Eye Tracking for Visual Marketing". Foundations and Trends® in Marketing 1, no. 4 (2006): 231–320. http://dx.doi.org/10.1561/1700000011.

14. Chunhua Shen, Junae Kim and Hanzi Wang. "Generalized Kernel-Based Visual Tracking". IEEE Transactions on Circuits and Systems for Video Technology 20, no. 1 (January 2010): 119–30. http://dx.doi.org/10.1109/tcsvt.2009.2031393.

15. Munich, M. E., and P. Perona. "Visual identification by signature tracking". IEEE Transactions on Pattern Analysis and Machine Intelligence 25, no. 2 (February 2003): 200–217. http://dx.doi.org/10.1109/tpami.2003.1177152.

16. Ma, Bo, Lianghua Huang, Jianbing Shen, Ling Shao, Ming-Hsuan Yang and Fatih Porikli. "Visual Tracking Under Motion Blur". IEEE Transactions on Image Processing 25, no. 12 (December 2016): 5867–76. http://dx.doi.org/10.1109/tip.2016.2615812.

17. Chen, Kai, and Wenbing Tao. "Convolutional Regression for Visual Tracking". IEEE Transactions on Image Processing 27, no. 7 (July 2018): 3611–20. http://dx.doi.org/10.1109/tip.2018.2819362.

18. Pei, Zhijun, and Lei Han. "Visual Tracking Using L2 Minimization". MATEC Web of Conferences 61 (2016): 02020. http://dx.doi.org/10.1051/matecconf/20166102020.

19. Zhou, Jiawei, and Shahram Payandeh. "Visual Tracking of Laparoscopic Instruments". Journal of Automation and Control Engineering 2, no. 3 (2014): 234–41. http://dx.doi.org/10.12720/joace.2.3.234-241.

20. Yao, Rui, Guosheng Lin, Chunhua Shen, Yanning Zhang and Qinfeng Shi. "Semantics-Aware Visual Object Tracking". IEEE Transactions on Circuits and Systems for Video Technology 29, no. 6 (June 2019): 1687–700. http://dx.doi.org/10.1109/tcsvt.2018.2848358.

21. Kim, Minyoung. "Correlation-based incremental visual tracking". Pattern Recognition 45, no. 3 (March 2012): 1050–60. http://dx.doi.org/10.1016/j.patcog.2011.08.026.

22. Li, Zhidong, Weihong Wang, Yang Wang, Fang Chen and Yi Wang. "Visual tracking by proto-objects". Pattern Recognition 46, no. 8 (August 2013): 2187–201. http://dx.doi.org/10.1016/j.patcog.2013.01.020.

23. Sergeant, D., R. Boyle and M. Forbes. "Computer visual tracking of poultry". Computers and Electronics in Agriculture 21, no. 1 (September 1998): 1–18. http://dx.doi.org/10.1016/s0168-1699(98)00025-8.

24. Bernardino, Alexandre, and José Santos-Victor. "Visual behaviours for binocular tracking". Robotics and Autonomous Systems 25, no. 3-4 (November 1998): 137–46. http://dx.doi.org/10.1016/s0921-8890(98)00043-8.

25. Tannenbaum, Allen, Anthony Yezzi and Alex Goldstein. "Visual Tracking and Object Recognition". IFAC Proceedings Volumes 34, no. 6 (July 2001): 1539–42. http://dx.doi.org/10.1016/s1474-6670(17)35408-3.

26. Shao, Y., J. E. W. Mayhew and Y. Zheng. "Model-driven active visual tracking". Real-Time Imaging 4, no. 5 (January 1998): 349–59. http://dx.doi.org/10.1016/s1077-2014(98)90004-3.

27. Yun, Xiao, and Gang Xiao. "Spiral visual and motional tracking". Neurocomputing 249 (August 2017): 117–27. http://dx.doi.org/10.1016/j.neucom.2017.03.070.

28. Xu, Weicun, Qingjie Zhao and Dongbing Gu. "Fragmentation handling for visual tracking". Signal, Image and Video Processing 8, no. 8 (November 28, 2012): 1639–49. http://dx.doi.org/10.1007/s11760-012-0406-1.

29. Mei, Xue, Tianzhu Zhang, Huchuan Lu, Ming-Hsuan Yang, Kyoung Mu Lee and Horst Bischof. "Special Issue on Visual Tracking". Computer Vision and Image Understanding 153 (December 2016): 1–2. http://dx.doi.org/10.1016/j.cviu.2016.11.001.

30. Chli, Margarita, and Andrew J. Davison. "Active matching for visual tracking". Robotics and Autonomous Systems 57, no. 12 (December 2009): 1173–87. http://dx.doi.org/10.1016/j.robot.2009.07.010.

31. Zhou, Yu, Xiang Bai, Wenyu Liu and Longin Jan Latecki. "Similarity Fusion for Visual Tracking". International Journal of Computer Vision 118, no. 3 (January 25, 2016): 337–63. http://dx.doi.org/10.1007/s11263-015-0879-9.

32

王, 楠洋. "A Review of Visual Tracking". Computer Science and Application 08, nr 01 (2018): 35–42. http://dx.doi.org/10.12677/csa.2018.81006.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33. Dai, Bo, Zhiqiang Hou, Wangsheng Yu, Feng Zhu, Xin Wang and Zefenfen Jin. "Visual tracking via ensemble autoencoder". IET Image Processing 12, no. 7 (July 1, 2018): 1214–21. http://dx.doi.org/10.1049/iet-ipr.2017.0486.

34. BANDOPADHAY, AMIT, and DANA H. BALLARD. "Egomotion perception using visual tracking". Computational Intelligence 7, no. 1 (February 1991): 39–47. http://dx.doi.org/10.1111/j.1467-8640.1991.tb00333.x.

35. Yokomichi, Masahiro, and Yuki Nakagama. "Multimodal MSEPF for visual tracking". Artificial Life and Robotics 17, no. 2 (August 28, 2012): 257–62. http://dx.doi.org/10.1007/s10015-012-0050-4.

36. Quinlan, P. "Visual tracking and feature binding". Ophthalmic and Physiological Optics 14, no. 4 (October 1994): 439. http://dx.doi.org/10.1016/0275-5408(94)90190-2.

37. Banu, Rubeena, and M. H. Sidram. "Window Based Min-Max Feature Extraction for Visual Object Tracking". Indian Journal of Science and Technology 15, no. 40 (October 27, 2022): 2047–55. http://dx.doi.org/10.17485/ijst/v15i40.1395.

38. Peng, Chao, Danhua Cao, Yubin Wu and Qun Yang. "Robot visual guide with Fourier-Mellin based visual tracking". Frontiers of Optoelectronics 12, no. 4 (June 8, 2019): 413–21. http://dx.doi.org/10.1007/s12200-019-0862-0.

39. Vihlman, Mikko, and Arto Visala. "Optical Flow in Deep Visual Tracking". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12112–19. http://dx.doi.org/10.1609/aaai.v34i07.6890.

Abstract:
Single-target tracking of generic objects is a difficult task since a trained tracker is given information present only in the first frame of a video. In recent years, increasingly many trackers have been based on deep neural networks that learn generic features relevant for tracking. This paper argues that deep architectures are often fit to learn implicit representations of optical flow. Optical flow is intuitively useful for tracking, but most deep trackers must learn it implicitly. This paper is among the first to study the role of optical flow in deep visual tracking. The architecture of a typical tracker is modified to reveal the presence of implicit representations of optical flow and to assess the effect of using the flow information more explicitly. The results show that the considered network learns implicitly an effective representation of optical flow. The implicit representation can be replaced by an explicit flow input without a notable effect on performance. Using the implicit and explicit representations at the same time does not improve tracking accuracy. The explicit flow input could allow constructing lighter networks for tracking.

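An explicit flow input of the kind studied in this paper can be produced with any dense optical-flow routine and stacked onto the current frame as two extra input channels. Below is a small sketch using OpenCV's Farneback flow; the tracker network that would consume the five-channel input is left abstract, and the parameter values are common defaults rather than the paper's settings.

```python
import cv2
import numpy as np

def frame_with_flow(prev_bgr, curr_bgr):
    """Stack dense optical flow (dx, dy) onto the current frame, giving an
    (H, W, 5) array that a tracker network could take as explicit flow input."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    rgb = curr_bgr.astype(np.float32) / 255.0
    return np.concatenate([rgb, flow], axis=2)
```
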
40. Choi, Janghoon. "Global Context Attention for Robust Visual Tracking". Sensors 23, no. 5 (March 1, 2023): 2695. http://dx.doi.org/10.3390/s23052695.

Abstract:
Although recent Siamese-network-based visual tracking methods show high performance on numerous large-scale visual tracking benchmarks, persistent challenges remain with distractor objects whose appearance is similar to the target's. To address these issues, we propose a novel global context attention module for visual tracking, which can extract and summarize holistic global scene information to modulate the target embedding for improved discriminability and robustness. Our global context attention module receives a global feature correlation map, elicits the contextual information of a given scene, and generates channel and spatial attention weights that modulate the target embedding to focus on the relevant feature channels and spatial parts of the target object. Our tracking algorithm is tested on large-scale visual tracking datasets, where it shows improved performance over the baseline algorithm while achieving competitive performance with real-time speed. Additional ablation experiments validate the effectiveness of the proposed module, with improvements on various challenging attributes of visual tracking.

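The channel and spatial attention described here follows a widely used pattern: pool a global context map into per-channel weights, squeeze it into a per-location weight map, and use both to modulate the target embedding. The PyTorch module below is a generic sketch of that pattern under assumed dimensions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class GlobalContextAttention(nn.Module):
    """Generic channel + spatial attention driven by a global context map.
    Both inputs are assumed to be (B, C, H, W) with matching shapes."""
    def __init__(self, channels=256, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, target_embed, context):
        c_w = self.channel_fc(context.mean(dim=(2, 3)))  # (B, C) channel weights
        s_w = self.spatial_conv(context)                 # (B, 1, H, W) spatial weights
        return target_embed * c_w[:, :, None, None] * s_w
```
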
41. WANG, DONG, GANG YANG and HUCHUAN LU. "TRI-TRACKING: COMBINING THREE INDEPENDENT VIEWS FOR ROBUST VISUAL TRACKING". International Journal of Image and Graphics 12, no. 03 (July 2012): 1250021. http://dx.doi.org/10.1142/s0219467812500210.

Abstract:
Robust tracking is a challenging problem due to intrinsic appearance variability of objects caused by in-plane or out-of-plane rotation, and by changes in extrinsic factors such as illumination, occlusion, background clutter and local blur. In this paper, we present a novel tri-tracking framework that combines different views (different models using independent features) for robust object tracking. This tracking framework exploits a hybrid discriminative-generative model based on online semi-supervised learning. We only need the first frame for parameter initialization; tracking then proceeds automatically in the remaining frames, with the model updated online to capture changes in both object appearance and background. There are three main contributions in our tri-tracking approach. First, we propose a tracking framework that combines a generative model and a discriminative model, together with different cues that complement each other. Second, by introducing a third tracker, we provide a solution to the difficulty of combining two opposing classification results in a co-training framework. Third, we propose a principled way of combining different views based on their discriminative power. We conduct experiments on several challenging videos, and the results demonstrate that the proposed tri-tracking framework is robust.

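The third contribution (combining views by their discriminative power) can be pictured as a confidence-weighted fusion of per-candidate scores from the three trackers. A toy sketch, assuming each view scores the same candidate set and exposes a scalar discriminability weight:

```python
import numpy as np

def fuse_views(scores, weights):
    """Combine per-candidate scores from several views, weighted by each
    view's discriminative power, and pick the best candidate."""
    scores = np.asarray(scores, dtype=float)   # (n_views, n_candidates)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise view weights
    fused = weights @ scores                   # (n_candidates,)
    return int(np.argmax(fused)), fused

# e.g. three views (say colour, texture, motion models) scoring 4 candidates
best, fused = fuse_views(
    scores=[[0.2, 0.7, 0.5, 0.1],
            [0.3, 0.6, 0.4, 0.2],
            [0.1, 0.8, 0.3, 0.2]],
    weights=[0.5, 0.2, 0.3])
```
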
42. Ruiz-Alzola, Juan, Carlos Alberola-López and Jose-Ramón Casar Corredera. "Model-based stereo-visual tracking: Covariance analysis and tracking schemes". Signal Processing 80, no. 1 (January 2000): 23–43. http://dx.doi.org/10.1016/s0165-1684(99)00109-7.

43. Rao, Jinjun, Kai Xu, Jinbo Chen, Jingtao Lei, Zhen Zhang, Qiuyu Zhang, Wojciech Giernacki and Mei Liu. "Sea-Surface Target Visual Tracking with a Multi-Camera Cooperation Approach". Sensors 22, no. 2 (January 17, 2022): 693. http://dx.doi.org/10.3390/s22020693.

Abstract:
Cameras are widely used in the detection and tracking of moving targets. Compared to visual tracking with a single camera, cooperative tracking based on multiple cameras has advantages including a wider visual field, higher tracking reliability, higher precision of target positioning and a higher possibility of multiple-target visual tracking. On vast ocean and sea surfaces, making multiple cameras work together for specific target tracking and detection is a challenge, with a wide range of application prospects. Based on the characteristics of sea-surface moving targets and visual images, this study proposed and designed a sea-surface moving-target visual detection and tracking system with a multi-camera cooperation approach. In the system, the technologies of moving-target detection, tracking and matching are studied, and a strategy to coordinate multi-camera cooperation is proposed. Comprehensive experiments on cooperative sea-surface moving-target visual tracking show that the method has improved performance compared with the comparison methods, and the proposed system can meet the needs of multi-camera cooperative visual tracking of moving targets on the sea surface.

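One simple coordination strategy consistent with this abstract is confidence-based handoff: each camera's tracker reports a confidence, the system follows the most confident view, and a cooperative re-detection sweep starts when every view has lost the target. The sketch below is a hypothetical illustration of that idea, not the paper's actual strategy.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CameraReport:
    camera_id: int
    confidence: float            # tracker's self-reported score in [0, 1]
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in that camera's image

def select_primary(reports, min_conf=0.3) -> Optional[CameraReport]:
    """Pick the camera with the most confident track; None means the target
    is lost everywhere and cooperative re-detection should begin."""
    best = max(reports, key=lambda r: r.confidence, default=None)
    return best if best and best.confidence >= min_conf else None
```
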
44. Kurzhals, Kuno, Brian Fisher, Michael Burch and Daniel Weiskopf. "Eye tracking evaluation of visual analytics". Information Visualization 15, no. 4 (July 26, 2016): 340–58. http://dx.doi.org/10.1177/1473871615609787.

Abstract:
The application of eye tracking for the evaluation of humans’ viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate recent scientific publications from the main visualization and visual analytics conferences and journals, as well as related research fields, that include an evaluation by eye tracking. Furthermore, we provide an overview of evaluation goals that can be achieved by eye tracking and state-of-the-art analysis techniques for eye tracking data. Ideally, visual analytics leads to a mixed-initiative cognitive system where the mechanism of distribution is the interaction of the user with the visualization environment. Therefore, we also include a discussion of cognitive approaches and models to include the user in the evaluation process. Based on our review of the current use of eye tracking evaluation in our field and the cognitive theory, we propose directions for future research on evaluation methodology, leading to the grand challenge of developing an evaluation approach to the mixed-initiative cognitive system of visual analytics.

45. Chen, Yuantao, Weihong Xu, Fangjun Kuang and Shangbing Gao. "The Research and Application of Visual Saliency and Adaptive Support Vector Machine in Target Tracking Field". Computational and Mathematical Methods in Medicine 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/925341.

Abstract:
Efficient target tracking algorithms have become a research focus in intelligent robotics. The main problem target tracking faces in mobile robots is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm using visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on the mixture saliency of image features, including color, brightness and motion. The execution process uses visual saliency features, and their common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm on video sequences in which the target objects undergo large changes in pose, scale and illumination.

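The mixture saliency of color, brightness and motion features can be sketched as a weighted sum of normalized conspicuity maps (an Itti-style simplification; the authors' exact formulation and the ASVM stage are not reproduced here):

```python
import numpy as np

def normalize(m):
    """Scale a feature map to [0, 1]."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def mixture_saliency(color_map, brightness_map, motion_map,
                     weights=(1.0, 1.0, 1.0)):
    """Fuse color, brightness and motion conspicuity maps into one saliency map."""
    maps = [color_map, brightness_map, motion_map]
    fused = sum(w * normalize(m) for w, m in zip(weights, maps))
    return normalize(fused)
```
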
46. Yu, Qianqian, Keqi Fan, Yiyang Wang and Yuhui Zheng. "Faster MDNet for Visual Object Tracking". Applied Sciences 12, no. 5 (February 23, 2022): 2336. http://dx.doi.org/10.3390/app12052336.

Abstract:
With the rapid development of deep learning techniques, new breakthroughs have been made in deep learning-based object tracking methods. Although many approaches have achieved state-of-the-art results, existing methods still cannot fully satisfy practical needs. A robust tracker should perform well in three aspects: tracking accuracy, speed, and resource consumption. With this in mind, we propose a novel model, Faster MDNet, to strike a better balance among these factors. To improve tracking accuracy, a channel attention module is introduced into our method. We also design domain adaptation components to obtain more generic features. Simultaneously, we implement an adaptive spatial pyramid pooling layer to reduce model complexity and accelerate tracking. The experiments illustrate the promising performance of our tracker on OTB100, VOT2018, TrackingNet, UAV123, and NfS.

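The adaptive spatial pyramid pooling layer mentioned in the abstract can be approximated with PyTorch's adaptive pooling: pool the feature map at several fixed grid resolutions and concatenate the results, producing a fixed-length descriptor for any input size. The grid sizes below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AdaptiveSPP(nn.Module):
    """Fixed-length descriptor via max pooling at multiple grid resolutions."""
    def __init__(self, grids=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(g) for g in grids])

    def forward(self, x):                      # x: (B, C, H, W), any H and W
        feats = [p(x).flatten(start_dim=1) for p in self.pools]
        return torch.cat(feats, dim=1)         # (B, C * sum(g * g for g in grids))

spp = AdaptiveSPP()
out = spp(torch.randn(2, 256, 17, 23))         # -> (2, 256 * (1 + 4 + 16))
```
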
47. Beutter, B. R., J. Lorenceau and L. S. Stone. "Visual Coherence Affects Smooth Pursuit". Perception 25, no. 1_suppl (August 1996): 10. http://dx.doi.org/10.1068/v96l0202.

Abstract:
For four subjects (one naive), we measured pursuit of a line-figure diamond moving along an elliptical path behind an invisible X-shaped aperture under two conditions. The diamond's corners were occluded and only four moving line segments were visible over the background (38 cd m⁻²). At low segment luminance (44 cd m⁻²), the percept is largely a coherently moving diamond. At high luminance (108 cd m⁻²), the percept is largely four independently moving segments. Along with this perceptual effect, there were parallel changes in pursuit. In the low-contrast condition, pursuit was more related to object motion. A χ² analysis (p > 0.05) showed that for 98% of trials subjects were more likely tracking the object than the segments, for 29% of trials one could not reject the hypothesis that subjects were tracking the object and not the segments, and for 100% of trials one could reject the hypothesis that subjects were tracking the segments and not the object. Conversely, in the high-contrast condition, pursuit appeared more related to segment motion. For 66% of trials subjects were more likely tracking the segments than the object; for 94% of trials one could reject the hypothesis that subjects were tracking the object and not the segments; and for 13% of trials one could not reject the hypothesis that subjects were tracking the segments and not the object. These results suggest that pursuit is driven by the same object-motion signal as perception, rather than by simple retinal image motion.

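The hypothesis tests reported here can be reconstructed generically as χ² goodness-of-fit tests of pursuit residuals against each candidate motion model (object versus segments), rejecting a model when p < 0.05. A minimal sketch, assuming Gaussian measurement noise of known scale sigma; this is a generic reconstruction, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import chi2

def chi2_model_test(residuals, sigma, dof=None):
    """Chi-square goodness-of-fit of pursuit residuals against one candidate
    motion model; p > 0.05 means the model cannot be rejected."""
    r = np.asarray(residuals, dtype=float)
    stat = np.sum((r / sigma) ** 2)
    dof = r.size if dof is None else dof
    return stat, chi2.sf(stat, dof)
```
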
48. Zhen, Xinxin, Shumin Fei, Yinmin Wang and Wei Du. "A Visual Object Tracking Algorithm Based on Improved TLD". Algorithms 13, no. 1 (January 1, 2020): 15. http://dx.doi.org/10.3390/a13010015.

Abstract:
Visual object tracking is an important research topic in the field of computer vision. Tracking-learning-detection (TLD) decomposes the tracking problem into three modules (tracking, learning, and detection), which provides effective ideas for solving the tracking problem. In order to improve the tracking performance of the TLD tracker, three improvements are proposed in this paper. The built-in tracker is replaced with a kernelized correlation filter (KCF) algorithm based on the histogram of oriented gradients (HOG) descriptor in the tracking module. Failure detection is added on the response of KCF to identify whether KCF has lost the target. A more specific detection area for the detection module is obtained from the estimated location provided by the tracking module. With these operations, the scanning area of object detection is reduced, and a full-frame search is required in the detection module only if the object fails to be tracked in the tracking module. Comparative experiments were conducted on the object tracking benchmark (OTB), and the results showed that tracking speed and accuracy were improved. Further, the TLD tracker performed better in different challenging scenarios with the proposed method, such as motion blur, occlusion, and environmental changes. Moreover, the improved TLD achieved outstanding tracking performance compared with common tracking algorithms.

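The abstract does not name the failure-detection criterion applied to the KCF response; a common choice for correlation trackers is the peak-to-sidelobe ratio (PSR), sketched below with an assumed threshold. A low PSR means the response peak barely stands out from the background, suggesting the target is lost and the full-frame detector should take over.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map."""
    peak = response.max()
    py, px = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)      # sidelobe = all but a window
    mask[max(0, py - exclude):py + exclude + 1,    # around the peak
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def tracking_failed(response, threshold=7.0):      # threshold is an assumed value
    return psr(response) < threshold
```
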
49. Yao, Yeboah, Zhuliang Yu and Wei Wu. "Robust and Persistent Visual Tracking-by-Detection for Robotic Vision Systems". International Journal of Machine Learning and Computing 6, no. 3 (June 2016): 196–204. http://dx.doi.org/10.18178/ijmlc.2016.6.3.598.

50. Tung, Tony, and Takashi Matsuyama. "Visual Tracking Using Multimodal Particle Filter". International Journal of Natural Computing Research 4, no. 3 (July 2014): 69–84. http://dx.doi.org/10.4018/ijncr.2014070104.

Abstract:
Visual tracking of humans or objects in motion is a challenging problem when the observed data undergo appearance changes (e.g., due to illumination variations, occlusion, cluttered background, etc.). Moreover, tracking systems are usually initialized with predefined target templates, or trained beforehand on known datasets; hence, they are not always efficient at detecting and tracking objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). In particular, we integrate various cues such as color, motion and depth in a global formulation. The Earth Mover's distance is used to compare color models in a global fashion, and constraints on motion flow features prevent common drifting effects due to error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated into a practical detection and tracking process, and multiple instances can run in real time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary non-linear motions.

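The backbone of such a tracker is the standard bootstrap particle-filter loop: propagate particles with a motion model, weight them by the observation likelihood, and resample. In the sketch below the fusion of color, motion and depth cues (including the Earth Mover's distance comparison) is abstracted into a user-supplied `likelihood` function, and the random-walk motion model is an assumption.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0,
                         rng=np.random):
    """One predict-update-resample cycle of a bootstrap particle filter.
    particles: (N, D) candidate states (e.g. x, y of a body part);
    likelihood(state) -> non-negative observation score (cue fusion goes here)."""
    n = len(particles)
    # predict: random-walk motion model (assumed)
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: reweight each particle by the multimodal observation likelihood
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # resample (systematic): keeps N fixed, focuses particles on likely states
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    estimate = particles[idx].mean(axis=0)     # tracked state estimate
    return particles[idx], np.full(n, 1.0 / n), estimate
```
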