Academic literature on the topic 'Multimodal object tracking'
Below are lists of relevant articles, books, theses, and conference papers on the topic 'Multimodal object tracking.'
Journal articles on the topic "Multimodal object tracking"
Zhang, Liwei, Jiahong Lai, Zenghui Zhang, Zhen Deng, Bingwei He, and Yucheng He. "Multimodal Multiobject Tracking by Fusing Deep Appearance Features and Motion Information." Complexity 2020 (September 25, 2020): 1–10. http://dx.doi.org/10.1155/2020/8810340.
Kota, John S., and Antonia Papandreou-Suppappola. "Joint Design of Transmit Waveforms for Object Tracking in Coexisting Multimodal Sensing Systems." Sensors 19, no. 8 (April 12, 2019): 1753. http://dx.doi.org/10.3390/s19081753.
Muresan, Mircea Paul, Ion Giosan, and Sergiu Nedevschi. "Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation." Sensors 20, no. 4 (February 18, 2020): 1110. http://dx.doi.org/10.3390/s20041110.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Monir, Islam A., Mohamed W. Fakhr, and Nashwa El-Bendary. "Multimodal deep learning model for human handover classification." Bulletin of Electrical Engineering and Informatics 11, no. 2 (April 1, 2022): 974–85. http://dx.doi.org/10.11591/eei.v11i2.3690.
Shibuya, Masaki, Kengo Ohnishi, and Isamu Kajitani. "Networked Multimodal Sensor Control of Powered 2-DOF Wrist and Hand." Journal of Robotics 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/7862178.
Kandylakis, Zacharias, Konstantinos Vasili, and Konstantinos Karantzalos. "Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes." Remote Sensing 11, no. 4 (February 21, 2019): 446. http://dx.doi.org/10.3390/rs11040446.
Kim, Jongwon, and Jeongho Cho. "RGDiNet: Efficient Onboard Object Detection with Faster R-CNN for Air-to-Ground Surveillance." Sensors 21, no. 5 (March 1, 2021): 1677. http://dx.doi.org/10.3390/s21051677.
Popp, Constantin, and Damian T. Murphy. "Creating Audio Object-Focused Acoustic Environments for Room-Scale Virtual Reality." Applied Sciences 12, no. 14 (July 20, 2022): 7306. http://dx.doi.org/10.3390/app12147306.
Birchfield, David, and Mina Johnson-Glenberg. "A Next Gen Interface for Embodied Learning." International Journal of Gaming and Computer-Mediated Simulations 2, no. 1 (January 2010): 49–58. http://dx.doi.org/10.4018/jgcms.2010010105.
Dissertations / Theses on the topic "Multimodal object tracking"
De Goussencourt, Timothée. "Système multimodal de prévisualisation “on set” pour le cinéma." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT106/document.
Previz on-set is a preview step that takes place directly during the shooting phase of a film with special effects. The aim of previz on-set is to show the film director an assembled view of the final shot in real time. The work presented in this thesis focuses on a specific step of the previz: the compositing. This step consists in mixing multiple images to compose a single, coherent one; in our case, mixing computer graphics with the image from the main camera. The objective of this thesis is to propose a system for automatic adjustment of the compositing. The method requires measuring the geometry of the filmed scene, so a depth sensor is added to the main camera. The data is sent to a computer that runs an algorithm to merge the data from the depth sensor and the main camera. Through a hardware demonstrator, we formalized an integrated solution in a video game engine. The experiments give encouraging results for real-time compositing. Improved results were observed with the introduction of a joint segmentation method using depth and color information. The main strength of this work lies in the development of a demonstrator that allowed us to obtain effective algorithms in the field of previz on-set.
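The depth-assisted compositing this abstract describes amounts to a per-pixel depth test between the rendered CG and the live camera image. A minimal illustrative sketch of that idea, not the thesis's actual implementation (all names hypothetical):

```python
def depth_composite(cam_rgb, cam_depth, cg_rgb, cg_depth):
    """Per-pixel depth test: keep whichever source is closer to the camera."""
    out = []
    for r in range(len(cam_rgb)):
        row = []
        for c in range(len(cam_rgb[r])):
            # The CG element wins the pixel only where it occludes the live image.
            if cg_depth[r][c] < cam_depth[r][c]:
                row.append(cg_rgb[r][c])
            else:
                row.append(cam_rgb[r][c])
        out.append(row)
    return out

# Toy 1x2 frame: the CG object is nearer than the set in the first pixel only.
cam = [[(10, 10, 10), (20, 20, 20)]]
cg = [[(200, 0, 0), (0, 200, 0)]]
frame = depth_composite(cam, [[2.0, 1.0]], cg, [[1.0, 3.0]])
```

A real previz pipeline would of course run this on the GPU and handle sensor noise and holes in the depth map; the joint depth/color segmentation mentioned in the abstract addresses exactly those boundary artifacts.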
Mozaffari, Maaref Mohammad Hamed. "A Real-Time and Automatic Ultrasound-Enhanced Multimodal Second Language Training System: A Deep Learning Approach." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40477.
Full textur, Réhman Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.
Tactile Video
Khalidov, Vasil. "Modèles de mélanges conjugués pour la modélisation de la perception visuelle et auditive." Grenoble, 2010. http://www.theses.fr/2010GRENM064.
In this thesis, the modelling of audio-visual perception with a head-like device is considered. The related problems, namely audio-visual calibration and audio-visual object detection, localization, and tracking, are addressed. A spatio-temporal approach to the head-like device calibration is proposed, based on probabilistic multimodal trajectory matching. The formalism of conjugate mixture models is introduced, along with a family of efficient optimization algorithms to perform multimodal clustering. One instance of this algorithm family, the conjugate expectation maximization (ConjEM) algorithm, is further improved to gain attractive theoretical properties. Multimodal object detection and object number estimation methods are developed, and their theoretical properties are discussed. Finally, the proposed multimodal clustering method is combined with the object detection and object number estimation strategies and known tracking techniques to perform multimodal multiobject tracking. The performance is demonstrated on simulated data and on a database of realistic audio-visual scenarios (the CAVA database).
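The multimodal clustering in this thesis builds on expectation-maximization. As a toy illustration of the EM skeleton only, fitting a two-component 1-D Gaussian mixture (the actual ConjEM algorithm couples separate auditory and visual observation spaces, which this sketch does not attempt):

```python
import math

def em_gmm_1d(xs, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = [min(xs), max(xs)]          # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the weighted points.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-6
            pi[k] = nk / len(xs)
    return mu, var, pi
```

Running it on two well-separated point clouds recovers means near the cluster centers; the conjugate-model machinery extends this kind of alternation to observations living in different sensor spaces.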
Rodríguez, Florez Sergio Alberto. "Contributions by vision systems to multi-sensor object localization and tracking for intelligent vehicles." Compiègne, 2010. http://www.theses.fr/2010COMP1910.
Advanced Driver Assistance Systems (ADAS) can improve road safety by supporting the driver through warnings in hazardous circumstances, or by triggering appropriate actions when facing imminent collision situations (e.g. airbags, emergency brake systems, etc.). In this context, knowledge of the location and speed of the surrounding mobile objects constitutes key information. Consequently, this work focuses on object detection, localization, and tracking in dynamic scenes. Noticing the increasing presence of embedded multi-camera systems on vehicles, and recognizing the effectiveness of lidar automotive systems at detecting obstacles, we investigate the contributions of stereo vision systems to multimodal perception of the environment geometry. In order to fuse geometrical information between the lidar and the vision system, we propose a calibration process which determines the extrinsic parameters between the exteroceptive sensors and quantifies the uncertainties of this estimation. We present a real-time visual odometry method which estimates the vehicle ego-motion and simplifies dynamic object motion analysis. The integrity of the lidar-based object detection and tracking is then increased by means of a visual confirmation method that exploits stereo-vision 3D dense reconstruction in focused areas. Finally, a complete full-scale automotive system integrating the considered perception modalities was implemented and tested experimentally in open-road situations with an experimental car.
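The extrinsic calibration this abstract mentions is what lets lidar returns be confirmed in the camera image: each lidar point is rigidly transformed into the camera frame and projected through the intrinsics. An illustrative sketch, assuming a known rotation R, translation t, and pinhole parameters (hypothetical names, not the thesis's code):

```python
def project_lidar_point(p_lidar, R, t, fx, fy, cx, cy):
    """Rigid transform into the camera frame, then pinhole projection to pixels."""
    # Camera-frame coordinates: p_cam = R @ p_lidar + t
    x = sum(R[0][i] * p_lidar[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * p_lidar[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * p_lidar[i] for i in range(3)) + t[2]
    # Pinhole model: u = fx * x / z + cx, v = fy * y / z + cy
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m straight ahead of an aligned camera lands at the principal point.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project_lidar_point((0.0, 0.0, 2.0), identity, (0.0, 0.0, 0.0),
                           500.0, 500.0, 320.0, 240.0)
```

The uncertainty quantification discussed in the thesis then tells you how far the projected point may drift in pixels, which sets the size of the image region to search for visual confirmation.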
Sattarov, Egor. "Etude et quantification de la contribution des systèmes de perception multimodale assistés par des informations de contexte pour la détection et le suivi d'objets dynamiques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS354.
This thesis project investigates and quantifies the contribution of context-aided multimodal perception for detecting and tracking moving objects. The study is applied to the detection and recognition of relevant objects in road-traffic environments for Intelligent Vehicles (IV). The results will allow us to transpose the proposed concept to a wide range of state-of-the-art sensors and object classes by means of an integrative system approach involving learning methods. In particular, these learning methods will investigate how embedding into an embodied system providing a multitude of different data sources can be harnessed to learn 1) without, or with reduced, explicit supervision, by exploiting correlations; 2) incrementally, by adding to existing knowledge instead of completely retraining every time new data arrive; and 3) collectively, with each learning instance in the system trained in a way that ensures approximately optimal fusion. Concretely, a tight coupling between object classifiers in multiple modalities, as well as geometric scene context extraction, will be studied, first in theory and then in the context of road traffic. The novelty of the envisioned integration approach lies in the tight coupling between system components such as object segmentation, object tracking, scene geometry estimation, and object categorization, based on a probabilistic inference strategy. Such a strategy characterizes systems in which all perception components broadcast and receive distributions over multiple possible results together with a probabilistic belief score. In this way, each processing component can take the results of other components into account at a much earlier stage (as compared to just combining final results), greatly increasing its inferential power, while the application of Bayesian inference techniques ensures that implausible inputs do not cause negative effects.
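In the probabilistic strategy described above, components exchange full distributions rather than hard decisions. The simplest fusion rule for two components reporting posteriors over the same classes is a normalized product, assuming conditional independence; a minimal illustrative sketch (not the thesis's actual formulation):

```python
def fuse_beliefs(p1, p2):
    """Fuse two posteriors over the same classes by normalized product."""
    prod = [a * b for a, b in zip(p1, p2)]   # pointwise product of beliefs
    s = sum(prod)                            # renormalize to a distribution
    return [p / s for p in prod]

# An uninformative second component (uniform belief) leaves the first unchanged,
# while a confident one sharpens the fused posterior.
unchanged = fuse_beliefs([0.6, 0.4], [0.5, 0.5])
sharpened = fuse_beliefs([0.6, 0.4], [0.9, 0.1])
```

This also shows why early exchange helps: a component receiving `sharpened` instead of a hard label can still weigh the residual probability of the minority class.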
Duarte, Diogo Ferreira. "The Multi-Object Tracking with Multimodal Information for Autonomous Surface Vehicles." Master's thesis, 2022. https://hdl.handle.net/10216/140667.
Book chapters on the topic "Multimodal object tracking"
Landabaso, José Luis, and Montse Pardàs. "Foreground Regions Extraction and Characterization Towards Real-Time Object Tracking." In Machine Learning for Multimodal Interaction, 241–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11677482_21.
"Software for Automatic Gaze and Face/Object Tracking and its Use for Early Diagnosis of Autism Spectrum Disorders." In Multimodal Interactive Systems Management, 147–62. EPFL Press, 2014. http://dx.doi.org/10.1201/b15535-14.
Diao, Qian, Jianye Lu, Wei Hu, Yimin Zhang, and Gary Bradski. "DBN Models for Visual Tracking and Prediction." In Bayesian Network Technologies, 176–93. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-141-4.ch009.
Tung, Tony, and Takashi Matsuyama. "Visual Tracking Using Multimodal Particle Filter." In Computer Vision, 1072–90. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch044.
Conference papers on the topic "Multimodal object tracking"
Muresan, Mircea Paul, and Sergiu Nedevschi. "Multimodal sparse LIDAR object tracking in clutter." In 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2018. http://dx.doi.org/10.1109/iccp.2018.8516646.
Morrison, Katelyn, Daniel Yates, Maya Roman, and William W. Clark. "Using Object Tracking Techniques to Non-Invasively Measure Thoracic Rotation Range of Motion." In ICMI '20: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3395035.3425189.
Vyawahare, Vikram S., and Richard T. Stone. "Asymmetric Interface and Interactions for Bimanual Virtual Assembly With Haptics." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71543.
Valverde, Francisco Rivera, Juana Valeria Hurtado, and Abhinav Valada. "There is More than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking with Sound by Distilling Multimodal Knowledge." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01144.
Moraffah, Bahman, Cesar Brito, Bindya Venkatesh, and Antonia Papandreou-Suppappola. "Tracking Multiple Objects with Multimodal Dependent Measurements: Bayesian Nonparametric Modeling." In 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019. http://dx.doi.org/10.1109/ieeeconf44664.2019.9048817.
Smirnova, Yana, Aleksandr Mudruk, and Anna Makashova. "Lack of joint attention in preschoolers with different forms of atypical development." In Safety psychology and psychological safety: problems of interaction between theorists and practitioners. «Publishing company «World of science», LLC, 2020. http://dx.doi.org/10.15862/53mnnpk20-29.
Catalán, José M., Jorge A. Díez, Arturo Bertomeu-Motos, Francisco J. Badesa, Rafael Puerto, José M. Sabater, and Nicolás García-Aracil. "Arquitectura de control multimodal para robótica asistencial." In Actas de las XXXVII Jornadas de Automática 7, 8 y 9 de septiembre de 2016, Madrid. Universidade da Coruña, Servizo de Publicacións, 2022. http://dx.doi.org/10.17979/spudc.9788497498081.1089.