Ready-made bibliography on the topic "MULTI-CUE OBJECT"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "MULTI-CUE OBJECT".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, whenever such details are available in the metadata.

Journal articles on the topic "MULTI-CUE OBJECT"

1

Hu, Mengjie, Zhen Liu, Jingyu Zhang and Guangjun Zhang. "Robust object tracking via multi-cue fusion". Signal Processing 139 (October 2017): 86–95. http://dx.doi.org/10.1016/j.sigpro.2017.04.008.

2

Gu, Lichuan, Chengji Wang, Jinqin Zhong, Jianxiao Liu and Juan Wang. "Multi-cue Integration Object Tracking Based on Blocking". International Journal of Security and Its Applications 8, no. 3 (May 31, 2014): 309–24. http://dx.doi.org/10.14257/ijsia.2014.8.3.32.

3

Wang, Jiangtao, and Jingyu Yang. "Relative discriminant coefficient based multi-cue fusion for robust object tracking". Frontiers of Electrical and Electronic Engineering in China 3, no. 3 (April 17, 2008): 274–82. http://dx.doi.org/10.1007/s11460-008-0045-z.

4

Walia, Gurjit Singh, Ashish Kumar, Astitwa Saxena, Kapil Sharma and Kuldeep Singh. "Robust object tracking with crow search optimized multi-cue particle filter". Pattern Analysis and Applications 23, no. 3 (August 29, 2019): 1439–55. http://dx.doi.org/10.1007/s10044-019-00847-7.

5

Yi, Renjiao, Ping Tan and Stephen Lin. "Leveraging Multi-View Image Sets for Unsupervised Intrinsic Image Decomposition and Highlight Separation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12685–92. http://dx.doi.org/10.1609/aaai.v34i07.6961.

Abstract:
We present an unsupervised approach for factorizing object appearance into highlight, shading, and albedo layers, trained by multi-view real images. To do so, we construct a multi-view dataset by collecting numerous customer product photos online, which exhibit large illumination variations that make them suitable for training of reflectance separation and can facilitate object-level decomposition. The main contribution of our approach is a proposed image representation based on local color distributions that allows training to be insensitive to the local misalignments of multi-view images. In addition, we present a new guidance cue for unsupervised training that exploits synergy between highlight separation and intrinsic image decomposition. Over a broad range of objects, our technique is shown to yield state-of-the-art results for both of these tasks.
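The abstract's key idea of a representation built on local color distributions (robust to small multi-view misalignments) can be illustrated with a minimal single-channel sketch. This is a hypothetical simplification, not the paper's actual representation: per-patch intensity histograms discard exact pixel positions, so a one-pixel shift perturbs them far less than a per-pixel comparison.

```python
import numpy as np

def patch_histograms(img, patch=4, bins=8):
    """Per-patch intensity histograms: a crude single-channel stand-in
    for a local color distribution representation."""
    h, w = img.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            feats.append(hist / block.size)  # normalize counts per patch
    return np.array(feats)

def representation_distance(a, b, patch=4, bins=8):
    """Mean absolute difference between the two images' patch histograms."""
    return np.abs(patch_histograms(a, patch, bins)
                  - patch_histograms(b, patch, bins)).mean()
```

Shifting an image by one pixel leaves the histogram distance small while the per-pixel difference stays large, which is the insensitivity property the paper exploits for training.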
6

Walia, Gurjit Singh, and Rajiv Kapoor. "Robust object tracking based upon adaptive multi-cue integration for video surveillance". Multimedia Tools and Applications 75, no. 23 (September 4, 2015): 15821–47. http://dx.doi.org/10.1007/s11042-015-2890-0.

7

Kumar, Ashish, Gurjit Singh Walia and Kapil Sharma. "A novel approach for multi-cue feature fusion for robust object tracking". Applied Intelligence 50, no. 10 (May 7, 2020): 3201–18. http://dx.doi.org/10.1007/s10489-020-01649-9.

8

Shih, Chihhsiong. "Analyzing and Comparing Shot Planning Strategies and Their Effects on the Performance of an Augment Reality Based Billiard Training System". International Journal of Information Technology & Decision Making 13, no. 03 (May 2014): 521–65. http://dx.doi.org/10.1142/s0219622014500278.

Abstract:
The shot planning of a cue after it collides with an object ball determines a player's success in a billiard game. This paper proposes three novel gaming strategies to investigate the effect of cue shot planning on gaming performance. The first algorithm considers the nearest pocket for every selected target object ball, seeking optimal post-collision positions. The second algorithm considers all pocket and target object ball combinations during both the pre- and post-collision optimal shot selection processes. The third algorithm considers a multi-objective optimization process for optimal shot planning control. The simulations are conducted based on a collision model considering the restitution effects. An augmented reality training facility is devised to guide users in both aiming and cue repositioning control in a real-world billiard game. Experimental results not only prove the reliability of our training device in selecting a proper shot sequence using the all-pocket optimal shot planning algorithm, but also confirm its consistency with the restitution theory.
9

Kwon, Young, Jae Jang, Youngbae Hwang and Ouk Choi. "Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras". Sensors 19, no. 7 (March 29, 2019): 1539. http://dx.doi.org/10.3390/s19071539.

Abstract:
RGB-Depth (RGB-D) cameras are widely used in computer vision and robotics applications such as 3D modeling and human–computer interaction. To capture 3D information of an object from different viewpoints simultaneously, we need to use multiple RGB-D cameras. To minimize costs, the cameras are often sparsely distributed without shared scene features. Due to the advantage of being visible from different viewpoints, spherical objects have been used for extrinsic calibration of widely-separated cameras. Assuming that the projected shape of the spherical object is circular, this paper presents a multi-cue-based method for detecting circular regions in a single color image. Experimental comparisons with existing methods show that our proposed method accurately detects spherical objects with cluttered backgrounds under different illumination conditions. The circle detection method is then applied to extrinsic calibration of multiple RGB-D cameras, for which we propose to use robust cost functions to reduce errors due to misdetected sphere centers. Through experiments, we show that the proposed method provides accurate calibration results in the presence of outliers and performs better than a least-squares-based method.
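The robust cost functions mentioned in the abstract, used to suppress misdetected sphere centers, can be sketched with a Huber M-estimate of location computed by iteratively reweighted least squares. This is a generic illustration under assumed parameters, not the paper's calibration code: points far from the current estimate get down-weighted instead of dominating the fit, which is what a least-squares cost cannot do.

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight for the Huber cost: quadratic near zero, linear in the tails."""
    a = np.maximum(np.abs(r), 1e-12)
    return np.where(a <= delta, 1.0, delta / a)

def robust_mean(x, delta=1.0, iters=20):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)  # robust initialization
    for _ in range(iters):
        w = huber_weight(x - mu, delta)
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

On measurements [0.9, 1.0, 1.1, 10.0], the plain mean is pulled to 3.25 by the single outlier, while the Huber estimate stays near 1, the same effect the authors rely on when a few sphere centers are misdetected.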
10

Okada, Kei, Mitsuharu Kojima and Masayuki Inaba. "Object Recognition with Multi Visual Cue Integration for Shared Knowledge-based Action Recognition System". Journal of the Robotics Society of Japan 26, no. 6 (2008): 537–45. http://dx.doi.org/10.7210/jrsj.26.537.


Doctoral dissertations on the topic "MULTI-CUE OBJECT"

1

KUMAR, ASHISH. "STUDY OF MULTI-CUE OBJECT TRACKING IN VIDEO SEQUENCES". Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2020. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18764.

Abstract:
Multi-cue object tracking is a challenging field of computer vision. In particular, the challenges originate from environmental variations such as occlusion, similar backgrounds and illumination variations, or from variations in the target's appearance such as pose variations, deformation, fast motion, and scale and rotational changes. To address these variations, many appearance models have been proposed, but developing a robust appearance model by fusing multi-cue information is tedious and demands further investigation and research. It is essential to develop a multi-cue object tracking solution with adaptive fusion of cues which can handle various tracking challenges. The goal of this thesis is to propose robust multi-cue object tracking frameworks by exploiting complementary features and their adaptive fusion in order to enhance the tracker's performance and accuracy during tracking failures. A real-time tracker using a particle filter under a stochastic framework has been developed for target estimation. The inherent problems of the particle filter, namely sample degeneracy and impoverishment, have been addressed by proposing a resampling method based upon meta-heuristic optimization. In addition, an outlier detection mechanism is designed to reduce the computational complexity of the developed tracker. A robust tracking architecture has been proposed under a deterministic framework. A fragment-based tracker with a discriminative classifier has been designed that can enhance the tracker's performance during dynamic variations. A periodic and temporal update strategy is employed to make the tracker adaptive to a changing environment. Extensive experimental analysis has been performed to prove the effectiveness of the developed tracking solution. A multi-stage tracker based on adaptive fusion of multiple cues has been developed for multi-cue object tracking. The first stage of rough target localization improves the accuracy of the tracker during precise localization.
In the appearance model, complementary cues are considered to handle illumination variations and occlusion. A classifier mechanism and a fragment-based appearance model are proposed to improve the tracker's accuracy during background clutter and fast motion. Experimental validation on multiple datasets validates the performance and accuracy of the proposed tracker.
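The core loop the abstract describes, a particle filter that fuses several cue likelihoods and adapts the cue weights, can be sketched as follows. This is a minimal generic illustration, not the thesis implementation: the fusion rule (weighted log-likelihood product), the effective-sample-size resampling trigger, and the weight-adaptation rule are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_cue_particle_step(particles, weights, cue_likelihoods, cue_weights):
    """One particle-filter update fusing several cue likelihoods.

    particles:       (N, d) state hypotheses
    weights:         (N,) current particle weights
    cue_likelihoods: list of (N,) arrays, one per cue
    cue_weights:     (C,) adaptive reliability weights, summing to 1
    """
    # Weighted product fusion, done in the log domain to avoid underflow.
    log_l = sum(w * np.log(l + 1e-12)
                for w, l in zip(cue_weights, cue_likelihoods))
    weights = weights * np.exp(log_l - log_l.max())
    weights /= weights.sum()

    # Systematic resampling when the effective sample size drops,
    # a standard remedy for sample degeneracy.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(weights):
        positions = (rng.random() + np.arange(len(weights))) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

def adapt_cue_weights(cue_weights, cue_scores, lr=0.3):
    """Shift weight toward cues whose scores indicate recent reliability."""
    scores = np.asarray(cue_scores, dtype=float)
    updated = (1 - lr) * cue_weights + lr * scores / scores.sum()
    return updated / updated.sum()
```

The target estimate at each frame would then be the weighted mean of the particles, and the cue scores could measure each cue's agreement with that fused estimate.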
2

Marton, Zoltan-Csaba [Verfasser], Michael [Akademischer Betreuer] Beetz and Darius [Akademischer Betreuer] Burschka. "Multi-cue Perception for Robotic Object Manipulation : How Spatio-temporal Integration of Multi-modal Information Aids Task Execution / Zoltan-Csaba Marton. Gutachter: Michael Beetz ; Darius Burschka. Betreuer: Michael Beetz". München : Universitätsbibliothek der TU München, 2014. http://d-nb.info/1070639028/34.

3

Aboutalib, Sarah. "Multiple-Cue Object Recognition for Interactionable Objects". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/19.

Abstract:
Category-level object recognition is a fundamental capability for the potential use of robots in the assistance of humans in useful tasks. There have been numerous vision-based object recognition systems yielding fast and accurate results in constrained environments. However, by depending on visual cues, these techniques are susceptible to object variations in size, lighting, rotation, and pose, all of which cannot be avoided in real video data. Thus, the task of object recognition still remains very challenging. My thesis work builds upon the fact that robots can observe humans interacting with the objects in their environment. We refer to the set of objects, which can be involved in the interaction as 'interactionable' objects. The interaction of humans with the 'interactionable' objects provides numerous nonvisual cues to the identity of objects. In this thesis, I will introduce a flexible object recognition approach called Multiple-Cue Object Recognition (MCOR) that can use multiple cues of any predefined type, whether they are cues intrinsic to the object or provided by observation of a human. In pursuit of this goal, the thesis will provide several contributions: A representation for the multiple cues including an object definition that allows for the flexible addition of these cues; Weights that reflect the various strength of association between a particular cue and a particular object using a probabilistic relational model, as well as object displacement values for localizing the information in an image; Tools for defining visual features, segmentation, tracking, and the values for the non-visual cues; Lastly, an object recognition algorithm for the incremental discrimination of potential object categories.
We evaluate these contributions through a number of methods including simulation to demonstrate the learning of weights and recognition based on an analytical model, an analytical model that demonstrates the robustness of the MCOR framework, and recognition results on real video data using a number of datasets including video taken from a humanoid robot (Sony QRIO), video captured from a meeting setting, scripted scenarios from outside universities, and unscripted TV cooking data. Using the datasets, we demonstrate the basic features of the MCOR algorithm including its ability to use multiple cues of different types. We demonstrate the applicability of MCOR to an outside dataset. We show that MCOR has better recognition results over vision-only recognition systems, and show that performance only improves with the addition of more cue types.
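The weighted cue-to-object association at the heart of MCOR can be illustrated with a toy evidence-accumulation sketch. This is a hypothetical simplification of the thesis's probabilistic relational model: the cue names and weights below are invented for illustration, and each observed cue simply adds its association weight to every object category it supports.

```python
from collections import defaultdict

def mcor_score(observed_cues, cue_object_weights):
    """Accumulate weighted evidence for each object category from the
    observed cues, then return the best-scoring category (None if no
    observed cue is associated with any category)."""
    scores = defaultdict(float)
    for cue in observed_cues:
        for obj, w in cue_object_weights.get(cue, {}).items():
            scores[obj] += w
    return max(scores, key=scores.get) if scores else None
```

With hypothetical weights such as {"red": {"cup": 0.6, "book": 0.2}, "grasped-and-tilted": {"cup": 0.7}}, observing both a visual cue ("red") and an interaction cue ("grasped-and-tilted") accumulates more evidence for "cup" than either cue alone, the basic benefit of mixing visual and non-visual cues.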

Book chapters on the topic "MULTI-CUE OBJECT"

1

Kumar, Ashish, Gurjit Singh Walia and Kapil Sharma. "Real-Time Multi-cue Object Tracking: Benchmark". In Proceedings of International Conference on IoT Inclusive Life (ICIIL 2019), NITTTR Chandigarh, India, 317–23. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3020-3_29.

2

Wang, Jiangtao, Debao Chen, Suwen Li and Yijun Yang. "Feature-Scoring-Based Multi-cue Infrared Object Tracking". In Intelligent Science and Intelligent Data Engineering, 25–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36669-7_4.

3

Giebel, J., D. M. Gavrila and C. Schnörr. "A Bayesian Framework for Multi-cue 3D Object Tracking". In Lecture Notes in Computer Science, 241–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24673-2_20.

4

García, Germán Martín, Dominik Alexander Klein, Jörg Stückler, Simone Frintrop and Armin B. Cremers. "Adaptive Multi-cue 3D Tracking of Arbitrary Objects". In Lecture Notes in Computer Science, 357–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32717-9_36.

5

Su, Congyong, Hong Zhou and Li Huang. "Multiple Facial Feature Tracking Using Multi-cue Based Prediction Model". In Articulated Motion and Deformable Objects, 214–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30074-8_21.


Conference papers on the topic "MULTI-CUE OBJECT"

1

Aiping, Wang, Cheng Zhiquan and Li Sikun. "Multi-cue Based Discriminative Visual Object Contour Tracking". In 2011 International Conference on Virtual Reality and Visualization (ICVRV). IEEE, 2011. http://dx.doi.org/10.1109/icvrv.2011.52.

2

Leibe, B., K. Mikolajczyk and B. Schiele. "Segmentation Based Multi-Cue Integration for Object Detection". In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.119.

3

Snidaro, Lauro, Ingrid Visentini and Gian Luca Foresti. "Multi-sensor Multi-cue Fusion for Object Detection in Video Surveillance". In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2009. http://dx.doi.org/10.1109/avss.2009.67.

4

Gehrig, Stefan, Alexander Barth, Nicolai Schneider and Jan Siegemund. "A multi-cue approach for stereo-based object confidence estimation". In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012). IEEE, 2012. http://dx.doi.org/10.1109/iros.2012.6385455.

5

AL-Shakarji, Noor M., Filiz Bunyak, Hadi Aliakbarpour, Guna Seetharaman and Kannappan Palaniappan. "Performance Evaluation of Semantic Video Compression using Multi-cue Object Detection". In 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2019. http://dx.doi.org/10.1109/aipr47015.2019.9174601.

6

Ahsan, Unaiza, Sohail Abdul Sattar, Humera Noor and Munzir Zafar. "Multi-cue object detection and tracking for security in complex environments". In SPIE Defense, Security, and Sensing. SPIE, 2012. http://dx.doi.org/10.1117/12.918887.

7

Jianqin Yin, Guohui Tian and Yinghua Xue. "Family environmental service oriented multiple object tracking based on multi-cue method". In 2010 8th World Congress on Intelligent Control and Automation (WCICA 2010). IEEE, 2010. http://dx.doi.org/10.1109/wcica.2010.5554430.

8

Okada, Kei, Mitsuharu Kojima, Satoru Tokutsu, Toshiaki Maki, Yuto Mori and Masayuki Inaba. "Multi-cue 3D object recognition in knowledge-based vision-guided humanoid robot system". In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2007. http://dx.doi.org/10.1109/iros.2007.4399245.

9

Zhang, Guoliang, Zexu Du, Xiangquan Zhang, Liping Wang, Zhouqiang He and Xiaoyan Meng. "Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching". In 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2022. http://dx.doi.org/10.1109/icsp54964.2022.9778545.

10

Aboutalib, Sarah, and Manuela Veloso. "Cue-based equivalence classes and incremental discrimination for multi-cue recognition of “interactionable” objects". In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354330.
