A selection of scholarly literature on the topic "MULTI-CUE OBJECT"

Format sources in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "MULTI-CUE OBJECT".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the work's metadata.

Journal articles on the topic "MULTI-CUE OBJECT"

1. Hu, Mengjie, Zhen Liu, Jingyu Zhang, and Guangjun Zhang. "Robust object tracking via multi-cue fusion." Signal Processing 139 (October 2017): 86–95. http://dx.doi.org/10.1016/j.sigpro.2017.04.008.

2. Gu, Lichuan, Chengji Wang, Jinqin Zhong, Jianxiao Liu, and Juan Wang. "Multi-cue Integration Object Tracking Based on Blocking." International Journal of Security and Its Applications 8, no. 3 (May 31, 2014): 309–24. http://dx.doi.org/10.14257/ijsia.2014.8.3.32.

3. Wang, Jiangtao, and Jingyu Yang. "Relative discriminant coefficient based multi-cue fusion for robust object tracking." Frontiers of Electrical and Electronic Engineering in China 3, no. 3 (April 17, 2008): 274–82. http://dx.doi.org/10.1007/s11460-008-0045-z.

4. Walia, Gurjit Singh, Ashish Kumar, Astitwa Saxena, Kapil Sharma, and Kuldeep Singh. "Robust object tracking with crow search optimized multi-cue particle filter." Pattern Analysis and Applications 23, no. 3 (August 29, 2019): 1439–55. http://dx.doi.org/10.1007/s10044-019-00847-7.

5. Yi, Renjiao, Ping Tan, and Stephen Lin. "Leveraging Multi-View Image Sets for Unsupervised Intrinsic Image Decomposition and Highlight Separation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12685–92. http://dx.doi.org/10.1609/aaai.v34i07.6961.

Abstract:
We present an unsupervised approach for factorizing object appearance into highlight, shading, and albedo layers, trained by multi-view real images. To do so, we construct a multi-view dataset by collecting numerous customer product photos online, which exhibit large illumination variations that make them suitable for training of reflectance separation and can facilitate object-level decomposition. The main contribution of our approach is a proposed image representation based on local color distributions that allows training to be insensitive to the local misalignments of multi-view images. In addition, we present a new guidance cue for unsupervised training that exploits synergy between highlight separation and intrinsic image decomposition. Over a broad range of objects, our technique is shown to yield state-of-the-art results for both of these tasks.
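
To make the idea of a misalignment-tolerant image representation more concrete, here is a minimal sketch of a local-color-distribution feature: each patch is described by a normalized color histogram, so small shifts of pixels within a patch barely change the descriptor. The function name, patch size, and bin count are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def local_color_histograms(image, patch=8, bins=8):
        """Describe each patch by its colour distribution; a few pixels of
        misalignment between views reshuffle pixels inside a patch but
        leave the patch histogram nearly unchanged (sketch only)."""
        h, w, _ = image.shape
        feats = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                block = image[y:y + patch, x:x + patch].reshape(-1, 3)
                hist, _ = np.histogramdd(block, bins=(bins,) * 3,
                                         range=((0, 256),) * 3)
                feats.append(hist.ravel() / hist.sum())
        return np.stack(feats)

Comparing such per-patch histograms across views (for example with a chi-squared distance) then yields a training signal that tolerates the small local misalignments the abstract mentions.
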
6. Walia, Gurjit Singh, and Rajiv Kapoor. "Robust object tracking based upon adaptive multi-cue integration for video surveillance." Multimedia Tools and Applications 75, no. 23 (September 4, 2015): 15821–47. http://dx.doi.org/10.1007/s11042-015-2890-0.

7. Kumar, Ashish, Gurjit Singh Walia, and Kapil Sharma. "A novel approach for multi-cue feature fusion for robust object tracking." Applied Intelligence 50, no. 10 (May 7, 2020): 3201–18. http://dx.doi.org/10.1007/s10489-020-01649-9.

8. Shih, Chihhsiong. "Analyzing and Comparing Shot Planning Strategies and Their Effects on the Performance of an Augment Reality Based Billiard Training System." International Journal of Information Technology & Decision Making 13, no. 03 (May 2014): 521–65. http://dx.doi.org/10.1142/s0219622014500278.

Abstract:
The shot planning of a cue after it collides with an object ball determines a player's success in a billiard game. This paper proposes three novel gaming strategies to investigate the effect of cue shot planning on gaming performance. The first algorithm considers the nearest pocket for every selected target object ball, seeking optimal post-collision positions. The second algorithm considers all pocket and target object ball combinations during both the pre- and post-collision optimal shot selection processes. The third algorithm considers a multi-objective optimization process for optimal shot planning control. The simulations are conducted based on a collision model that accounts for restitution effects. An augmented reality training facility is devised to guide users in both aiming and cue repositioning control in a real-world billiard game. Experimental results not only prove the reliability of our training device in selecting a proper shot sequence using the all-pocket optimal shot planning algorithm, but also demonstrate its consistency with the restitution theory.
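
The planning strategies above all rest on a collision model with restitution effects. As a rough, hedged illustration (not the paper's simulator), the sketch below resolves a frictionless, equal-mass cue-ball/object-ball impact along the line of centres with a coefficient of restitution e.

    import numpy as np

    def cue_object_collision(v_cue, pos_cue, pos_obj, e=0.9):
        """Equal-mass, frictionless ball-ball impact with restitution e
        (illustrative sketch of a restitution-based collision model)."""
        n = pos_obj - pos_cue
        n = n / np.linalg.norm(n)            # unit vector along the line of centres
        v_n = np.dot(v_cue, n)               # normal component of the cue velocity
        v_obj = 0.5 * (1.0 + e) * v_n * n    # object ball departs along the normal
        v_cue_after = v_cue - v_obj          # cue keeps its tangential part plus (1 - e)/2 of v_n
        return v_cue_after, v_obj

Chaining such updates from impact to impact is enough to predict roughly where the cue ball comes to rest, which is the post-collision position the three shot-planning strategies evaluate.
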
9. Kwon, Young, Jae Jang, Youngbae Hwang, and Ouk Choi. "Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras." Sensors 19, no. 7 (March 29, 2019): 1539. http://dx.doi.org/10.3390/s19071539.

Abstract:
RGB-Depth (RGB-D) cameras are widely used in computer vision and robotics applications such as 3D modeling and human–computer interaction. To capture 3D information of an object from different viewpoints simultaneously, we need to use multiple RGB-D cameras. To minimize costs, the cameras are often sparsely distributed without shared scene features. Due to the advantage of being visible from different viewpoints, spherical objects have been used for extrinsic calibration of widely-separated cameras. Assuming that the projected shape of the spherical object is circular, this paper presents a multi-cue-based method for detecting circular regions in a single color image. Experimental comparisons with existing methods show that our proposed method accurately detects spherical objects with cluttered backgrounds under different illumination conditions. The circle detection method is then applied to extrinsic calibration of multiple RGB-D cameras, for which we propose to use robust cost functions to reduce errors due to misdetected sphere centers. Through experiments, we show that the proposed method provides accurate calibration results in the presence of outliers and performs better than a least-squares-based method.
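
The "robust cost functions" step can be pictured as a rigid-transform fit between corresponding sphere centres that replaces plain least squares with a heavy-tail-tolerant loss. The sketch below is only an assumption of how such a fit might look, using SciPy's built-in Huber loss rather than the cost function actually proposed in the paper.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def calibrate_pair(centers_a, centers_b):
        """Estimate R, t mapping sphere centres seen by camera A onto those
        seen by camera B, down-weighting misdetected centres (sketch)."""
        def residuals(params):
            R = Rotation.from_rotvec(params[:3]).as_matrix()
            t = params[3:]
            return ((centers_a @ R.T + t) - centers_b).ravel()

        sol = least_squares(residuals, x0=np.zeros(6), loss="huber", f_scale=0.02)
        return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]

With loss="linear" a single badly detected sphere centre can drag the whole pose estimate; the Huber loss caps each correspondence's influence, which matches the paper's motivation for preferring robust costs over a least-squares fit.
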
10. Okada, Kei, Mitsuharu Kojima, and Masayuki Inaba. "Object Recognition with Multi Visual Cue Integration for Shared Knowledge-based Action Recognition System." Journal of the Robotics Society of Japan 26, no. 6 (2008): 537–45. http://dx.doi.org/10.7210/jrsj.26.537.

Dissertations on the topic "MULTI-CUE OBJECT"

1. KUMAR, ASHISH. "STUDY OF MULTI-CUE OBJECT TRACKING IN VIDEO SEQUENCES." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2020. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18764.

Abstract:
Multi-cue object tracking is a challenging field of computer vision. In particular, the challenges originate from environmental variations such as occlusion, similar background and illumination variations, or from variations in the target's appearance such as pose variations, deformation, fast motion, scale and rotational changes. In order to address these variations, many appearance models have been proposed, but developing a robust appearance model by fusing multi-cue information is tedious and demands further investigation and research. It is essential to develop a multi-cue object tracking solution with adaptive fusion of cues which can handle various tracking challenges. The goal of this thesis is to propose robust multi-cue object tracking frameworks by exploiting complementary features and their adaptive fusion in order to enhance the tracker's performance and accuracy during tracking failures. A real-time tracker using a particle filter under a stochastic framework has been developed for target estimation. The inherent problems of the particle filter, namely sample degeneracy and impoverishment, have been addressed by proposing a resampling method based upon meta-heuristic optimization. In addition, an outlier detection mechanism is designed to reduce the computational complexity of the developed tracker. A robust tracking architecture has been proposed under a deterministic framework. A fragment-based tracker with a discriminative classifier has been designed that can enhance the tracker's performance during dynamic variations. A periodic and temporal update strategy is employed to make the tracker adaptive to the changing environment. Extensive experimental analysis has been performed to prove the effectiveness of the developed tracking solution. A multi-stage tracker based on adaptive fusion of multiple cues has been developed for multi-cue object tracking. The first stage of rough target localization improves the accuracy of the tracker during precise localization. In the appearance model, complementary cues are considered to handle illumination variations and occlusion. A classifier mechanism and a fragment-based appearance model are proposed to improve the tracker's accuracy during background clutter and fast motion. Experimental validation on multiple datasets confirms the performance and accuracy of the proposed tracker.
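
As a rough illustration of the adaptive multi-cue fusion described above, the sketch below combines per-particle likelihoods from several cues using weights proportional to each cue's current reliability. The reliability-weighted geometric-mean rule and all names here are assumptions made for illustration, not the thesis' actual update equations.

    import numpy as np

    def fused_particle_weights(cue_likelihoods, cue_reliabilities):
        """Fuse per-cue likelihoods (shape: n_cues x n_particles) into one
        particle-weight vector via reliability-weighted geometric averaging
        (illustrative sketch of adaptive multi-cue fusion)."""
        w = np.asarray(cue_reliabilities, dtype=float)
        w = w / w.sum()                                    # normalised cue weights
        logs = np.log(np.asarray(cue_likelihoods, dtype=float) + 1e-12)
        fused = np.exp(np.sum(w[:, None] * logs, axis=0))  # weighted geometric mean
        return fused / fused.sum()                         # particle weights sum to 1

A cue whose reliability drops (say, a colour cue under an illumination change) then contributes little to the fused weight, which is the kind of behaviour an adaptive fusion scheme is meant to provide.
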
2. Marton, Zoltan-Csaba [Verfasser], Michael [Akademischer Betreuer] Beetz, and Darius [Akademischer Betreuer] Burschka. "Multi-cue Perception for Robotic Object Manipulation : How Spatio-temporal Integration of Multi-modal Information Aids Task Execution / Zoltan-Csaba Marton. Gutachter: Michael Beetz ; Darius Burschka. Betreuer: Michael Beetz." München : Universitätsbibliothek der TU München, 2014. http://d-nb.info/1070639028/34.

3. Aboutalib, Sarah. "Multiple-Cue Object Recognition for Interactionable Objects." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/19.

Abstract:
Category-level object recognition is a fundamental capability for the potential use of robots in the assistance of humans in useful tasks. There have been numerous vision-based object recognition systems yielding fast and accurate results in constrained environments. However, by depending on visual cues, these techniques are susceptible to object variations in size, lighting, rotation, and pose, all of which cannot be avoided in real video data. Thus, the task of object recognition still remains very challenging. My thesis work builds upon the fact that robots can observe humans interacting with the objects in their environment. We refer to the set of objects which can be involved in the interaction as 'interactionable' objects. The interaction of humans with the 'interactionable' objects provides numerous non-visual cues to the identity of objects. In this thesis, I will introduce a flexible object recognition approach called Multiple-Cue Object Recognition (MCOR) that can use multiple cues of any predefined type, whether they are cues intrinsic to the object or provided by observation of a human. In pursuit of this goal, the thesis will provide several contributions: A representation for the multiple cues including an object definition that allows for the flexible addition of these cues; Weights that reflect the varying strength of association between a particular cue and a particular object using a probabilistic relational model, as well as object displacement values for localizing the information in an image; Tools for defining visual features, segmentation, tracking, and the values for the non-visual cues; Lastly, an object recognition algorithm for the incremental discrimination of potential object categories. We evaluate these contributions through a number of methods including simulation to demonstrate the learning of weights and recognition based on an analytical model, an analytical model that demonstrates the robustness of the MCOR framework, and recognition results on real video data using a number of datasets including video taken from a humanoid robot (Sony QRIO), video captured from a meeting setting, scripted scenarios from outside universities, and unscripted TV cooking data. Using the datasets, we demonstrate the basic features of the MCOR algorithm including its ability to use multiple cues of different types. We demonstrate the applicability of MCOR to an outside dataset. We show that MCOR has better recognition results than vision-only recognition systems, and show that performance only improves with the addition of more cue types.
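
The "weights that reflect the strength of association between a cue and an object" together with "incremental discrimination" can be read as an evidence-accumulation loop over candidate categories. The sketch below is a hypothetical rendering of that idea with made-up names; it is not the MCOR implementation from the thesis.

    import numpy as np

    def incremental_discrimination(prior, cue_object_weights, observed_cues):
        """Update the belief over object categories as cues arrive.

        prior              : initial probability per category
        cue_object_weights : dict mapping a cue to its per-category association strengths
        observed_cues      : cues seen so far (visual, or derived from observed human interaction)
        """
        belief = np.asarray(prior, dtype=float)
        for cue in observed_cues:
            belief = belief * np.asarray(cue_object_weights[cue], dtype=float)
            belief = belief / belief.sum()   # renormalise after each piece of evidence
        return belief

Because a non-visual cue obtained from watching a person use the object enters the update in exactly the same way as a visual one, supporting a new cue type only requires supplying another vector of association weights, which mirrors the flexibility the abstract claims.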

Book chapters on the topic "MULTI-CUE OBJECT"

1. Kumar, Ashish, Gurjit Singh Walia, and Kapil Sharma. "Real-Time Multi-cue Object Tracking: Benchmark." In Proceedings of International Conference on IoT Inclusive Life (ICIIL 2019), NITTTR Chandigarh, India, 317–23. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3020-3_29.

2. Wang, Jiangtao, Debao Chen, Suwen Li, and Yijun Yang. "Feature-Scoring-Based Multi-cue Infrared Object Tracking." In Intelligent Science and Intelligent Data Engineering, 25–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36669-7_4.

3. Giebel, J., D. M. Gavrila, and C. Schnörr. "A Bayesian Framework for Multi-cue 3D Object Tracking." In Lecture Notes in Computer Science, 241–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24673-2_20.

4. García, Germán Martín, Dominik Alexander Klein, Jörg Stückler, Simone Frintrop, and Armin B. Cremers. "Adaptive Multi-cue 3D Tracking of Arbitrary Objects." In Lecture Notes in Computer Science, 357–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32717-9_36.

5. Su, Congyong, Hong Zhou, and Li Huang. "Multiple Facial Feature Tracking Using Multi-cue Based Prediction Model." In Articulated Motion and Deformable Objects, 214–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30074-8_21.

Conference papers on the topic "MULTI-CUE OBJECT"

1. Aiping, Wang, Cheng Zhiquan, and Li Sikun. "Multi-cue Based Discriminative Visual Object Contour Tracking." In 2011 International Conference on Virtual Reality and Visualization (ICVRV). IEEE, 2011. http://dx.doi.org/10.1109/icvrv.2011.52.

2. Leibe, B., K. Mikolajczyk, and B. Schiele. "Segmentation Based Multi-Cue Integration for Object Detection." In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.119.

3. Snidaro, Lauro, Ingrid Visentini, and Gian Luca Foresti. "Multi-sensor Multi-cue Fusion for Object Detection in Video Surveillance." In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2009. http://dx.doi.org/10.1109/avss.2009.67.

4. Gehrig, Stefan, Alexander Barth, Nicolai Schneider, and Jan Siegemund. "A multi-cue approach for stereo-based object confidence estimation." In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012). IEEE, 2012. http://dx.doi.org/10.1109/iros.2012.6385455.

5. AL-Shakarji, Noor M., Filiz Bunyak, Hadi Aliakbarpour, Guna Seetharaman, and Kannappan Palaniappan. "Performance Evaluation of Semantic Video Compression using Multi-cue Object Detection." In 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2019. http://dx.doi.org/10.1109/aipr47015.2019.9174601.

6. Ahsan, Unaiza, Sohail Abdul Sattar, Humera Noor, and Munzir Zafar. "Multi-cue object detection and tracking for security in complex environments." In SPIE Defense, Security, and Sensing. SPIE, 2012. http://dx.doi.org/10.1117/12.918887.

7. Jianqin Yin, Guohui Tian, and Yinghua Xue. "Family environmental service oriented multiple object tracking based on multi-cue method." In 2010 8th World Congress on Intelligent Control and Automation (WCICA 2010). IEEE, 2010. http://dx.doi.org/10.1109/wcica.2010.5554430.

8. Okada, Kei, Mitsuharu Kojima, Satoru Tokutsu, Toshiaki Maki, Yuto Mori, and Masayuki Inaba. "Multi-cue 3D object recognition in knowledge-based vision-guided humanoid robot system." In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2007. http://dx.doi.org/10.1109/iros.2007.4399245.

9. Zhang, Guoliang, Zexu Du, Xiangquan Zhang, Liping Wang, Zhouqiang He, and Xiaoyan Meng. "Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching." In 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2022. http://dx.doi.org/10.1109/icsp54964.2022.9778545.

10. Aboutalib, Sarah, and Manuela Veloso. "Cue-based equivalence classes and incremental discrimination for multi-cue recognition of "interactionable" objects." In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354330.