Contents
A selection of scholarly literature on the topic "Multi-target multi-camera tracking"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Multi-target multi-camera tracking".
Next to every entry in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Multi-target multi-camera tracking"
He, Yuhang, Xing Wei, Xiaopeng Hong, Weiwei Shi, and Yihong Gong. "Multi-Target Multi-Camera Tracking by Tracklet-to-Target Assignment". IEEE Transactions on Image Processing 29 (2020): 5191–205. http://dx.doi.org/10.1109/tip.2020.2980070.
Yoon, Kwangjin, Young-min Song, and Moongu Jeon. "Multiple hypothesis tracking algorithm for multi-target multi-camera tracking with disjoint views". IET Image Processing 12, no. 7 (July 1, 2018): 1175–84. http://dx.doi.org/10.1049/iet-ipr.2017.1244.
He, Li, Guoliang Liu, Guohui Tian, Jianhua Zhang, and Ze Ji. "Efficient Multi-View Multi-Target Tracking Using a Distributed Camera Network". IEEE Sensors Journal 20, no. 4 (February 15, 2020): 2056–63. http://dx.doi.org/10.1109/jsen.2019.2949385.
Wen, Longyin, Zhen Lei, Ming-Ching Chang, Honggang Qi, and Siwei Lyu. "Multi-Camera Multi-Target Tracking with Space-Time-View Hyper-graph". International Journal of Computer Vision 122, no. 2 (September 6, 2016): 313–33. http://dx.doi.org/10.1007/s11263-016-0943-0.
Luo, Xiaohui, Fuqing Wang, and Mingli Luo. "Collaborative target tracking in lopor with multi-camera". Optik 127, no. 23 (December 2016): 11588–98. http://dx.doi.org/10.1016/j.ijleo.2016.09.043.
Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping". Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.
Jiang, Ming Xin, Hong Yu Wang, and Chao Lin. "A Multi-Object Tracking Algorithm Based on Multi-Camera". Applied Mechanics and Materials 135–136 (October 2011): 70–75. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.70.
Castaldo, Francesco, and Francesco A. N. Palmieri. "Target tracking using factor graphs and multi-camera systems". IEEE Transactions on Aerospace and Electronic Systems 51, no. 3 (July 2015): 1950–60. http://dx.doi.org/10.1109/taes.2015.140087.
Bamrungthai, Pongsakon, and Viboon Sangveraphunsiri. "CU-Track: A Multi-Camera Framework for Real-Time Multi-Object Tracking". Applied Mechanics and Materials 415 (September 2013): 325–32. http://dx.doi.org/10.4028/www.scientific.net/amm.415.325.
Liu, Jian, Kuangrong Hao, Yongsheng Ding, Shiyu Yang, and Lei Gao. "Multi-State Self-Learning Template Library Updating Approach for Multi-Camera Human Tracking in Complex Scenes". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1755016. http://dx.doi.org/10.1142/s0218001417550163.
Dissertations on the topic "Multi-target multi-camera tracking"
Turesson, Eric. „Multi-camera Computer Vision for Object Tracking: A comparative study“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.
Aykin, Murat Deniz. "Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609798/index.pdf.
… "state" of one or more real world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system, which are either static or slowly varying. For a single camera, this consists of the internal parameters of the camera device optics and construction, while for a multiple-camera system it also includes the geometric positioning of the individual cameras, namely the "external" parameters. Calibration is a necessary step before any actual state measurements can be made with the system. This thesis considers such a multi-camera state measurement system and, in particular, the problem of procedurally effective and high-performance calibration of such a system. It presents a novel calibration algorithm which uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters, which are assumed to be static or slowly varying with respect to the state. This results in a "super-state" vector. The EKF algorithm recursively estimates this super-state, thereby yielding estimates of the static camera parameters. Both simulation studies and actual experiments demonstrate that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions in which the same calibration images are used repeatedly. The Repeated EKF (R-EKF) provides convergence with a limited number of image frames when the initial target state is accurately provided, while its bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is a fast calibration procedure that needs no standard or custom-made calibration target plates covering the majority of the camera field of view. Human assistance is also minimized, since all frame data are processed automatically and assistance is limited to making the target throws. The speed of convergence and the accuracy of the results promise a field-applicable calibration procedure.
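The augmented-state ("super-state") EKF described in this abstract can be sketched briefly. The following Python fragment is an illustrative outline only, not the thesis's implementation: it assumes a simple ballistic motion model, a generic camera measurement function h_fn with Jacobian H_fn supplied by the caller, and arbitrarily chosen state dimensions.

import numpy as np

# Illustrative augmented-state ("super-state") EKF sketch: the ballistic target
# state [position p, velocity v] is stacked with static calibration parameters
# theta, so a single filter estimates both jointly.

def propagate(super_state, dt, g=9.81):
    """Ballistic motion for p and v; the calibration block theta stays constant."""
    p, v, theta = super_state[:3], super_state[3:6], super_state[6:]
    p_new = p + dt * v + 0.5 * dt**2 * np.array([0.0, 0.0, -g])
    v_new = v + dt * np.array([0.0, 0.0, -g])
    return np.concatenate([p_new, v_new, theta])

def transition_jacobian(dt, n_theta):
    """Jacobian of propagate(); constant because the motion model is linear."""
    F = np.eye(6 + n_theta)
    F[0:3, 3:6] = dt * np.eye(3)
    return F

def ekf_step(x, P, z, dt, h_fn, H_fn, Q, R):
    """One predict/update cycle of the EKF on the super-state."""
    F = transition_jacobian(dt, len(x) - 6)
    x_pred = propagate(x, dt)
    P_pred = F @ P @ F.T + Q
    H = H_fn(x_pred)                      # camera-model Jacobian at the prediction
    y = z - h_fn(x_pred)                  # innovation (e.g. pixel residual)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

Repeated (R-EKF) and bidirectional (RB-EKF) processing, as the abstract describes, would amount to running such predict/update cycles over the stored frames several times, forward and backward, until the calibration block of the super-state converges.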
Vestin, Albin, und Gustav Strandberg. „Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms“. Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.
Hsu, Shun-Hsiang (許舜翔). "Indoor Occupant Behavior Analysis with Multi-Target, Multi-Camera Tracking". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/26vnu7.
National Taiwan University (國立臺灣大學), Graduate Institute of Civil Engineering (土木工程學研究所), academic year 107.
During the development of society, buildings and people's daily activities have become inseparable. The indoor environment therefore has a great impact on quality of life, and research on building management systems has been devoted to enhancing energy efficiency and occupant comfort. With the increasing adoption of IoT, data analytics approaches have emerged that can be applied to better understand the indoor environment. In practice, however, adopting sensing technology is problematic because of the stochastic nature of occupant behavior and the large scale of the monitored area. A cost-effective and accurate method is therefore required to collect data on occupant behavior. This research implements a re-identification system for multi-target, multi-camera tracking with surveillance cameras to obtain more reliable occupancy data. In recent years, tracking combined with deep learning techniques has shown better performance and more robustness to visual obstacles, such as dim lighting or partial occlusion, than traditional approaches. This advance in tracking opens the opportunity to develop an application for behavior analysis of building occupants. This research proposes a distributed system for tracking across non-overlapping cameras. First, multiple object tracking is performed under each camera, and the probe images of occupants provide appearance and location information. Second, feature vectors extracted from the images by a convolutional neural network are used to concatenate trajectory data from different cameras. Finally, the concatenated data are analyzed for the usage rates of spaces and their distribution across building levels. Moreover, abnormal situations can be detected and tracked across multiple cameras. With this analysis, the building manager can not only validate and revise the energy strategy but also enhance public safety and better handle emergency conditions.
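The cross-camera association step sketched in this abstract (CNN feature vectors extracted from probe images, then trajectories concatenated across cameras) can be illustrated roughly as follows. This is a hypothetical sketch, not the thesis's code; the greedy matching rule, the cosine-similarity threshold, and the tracklet data layout are assumptions made here for illustration.

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two CNN feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def link_tracklets(tracklets_cam_a, tracklets_cam_b, threshold=0.7):
    """
    Greedily link tracklets across two non-overlapping cameras.
    Each tracklet is a dict with an 'embedding' (e.g. the mean CNN feature of
    its probe images); linked pairs are treated as the same occupant so their
    trajectory data can be concatenated.
    """
    candidates = []
    for i, ta in enumerate(tracklets_cam_a):
        for j, tb in enumerate(tracklets_cam_b):
            sim = cosine_similarity(ta["embedding"], tb["embedding"])
            candidates.append((sim, i, j))
    candidates.sort(reverse=True)          # best matches first

    used_a, used_b, links = set(), set(), []
    for sim, i, j in candidates:
        if sim < threshold:
            break                          # remaining pairs are too dissimilar
        if i in used_a or j in used_b:
            continue                       # each tracklet is linked at most once
        links.append((i, j))
        used_a.add(i)
        used_b.add(j)
    return links

In a full system, time and camera-topology constraints would typically gate which tracklet pairs are even considered before the appearance comparison.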
Lai, Chuan-Wen. "Multi-Target Visual Tracking by Bayesian Filtering with Occlusion Handling on an Active Camera Platform". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2407200604203700.
Lai, Chuan-Wen (賴傳文). "Multi-Target Visual Tracking by Bayesian Filtering with Occlusion Handling on an Active Camera Platform". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/32024337980128245706.
National Taiwan University (國立臺灣大學), Graduate Institute of Electrical Engineering (電機工程學研究所), academic year 94.
In visual tracking, multi-target tracking (MTT) systems face the problem that moving targets may unavoidably occlude each other, so that the measurement processes of the individual targets become dependent. We construct a tracking system that considers the joint image likelihood to recognize targets even when their appearances are identical. Multiple hypotheses about the targets' depth levels are also used for occlusion handling. To enhance system performance, we extend the sampling importance resampling (SIR) particle filter with separate importance functions for tracking each target and for detection. Furthermore, when targets occlude one another, their state vectors are merged into a joint state vector, and an MCMC (Markov chain Monte Carlo) based particle filter is proposed for efficient sampling in the high-dimensional joint state space during occlusion. In addition, a control strategy for the active camera is proposed to move the camera so that the surveillance area contains the most information. The overall performance is validated in experiments and shows robust real-time tracking.
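A generic sampling importance resampling (SIR) particle filter step, of the kind this abstract extends, can be sketched as follows. This is a textbook-style illustration under assumed models, not the thesis's occlusion-aware formulation; the random-walk motion model and the likelihood callback are placeholders.

import numpy as np

def sir_step(particles, measurement, motion_noise, likelihood_fn, rng):
    """
    One generic SIR particle filter cycle: propagate, weight, resample.
    particles: (N, d) array of target-state hypotheses.
    likelihood_fn(particle, measurement) -> non-negative image likelihood.
    """
    n = len(particles)
    # 1. Propagate each particle through a simple random-walk motion model.
    particles = particles + rng.normal(scale=motion_noise, size=particles.shape)
    # 2. Re-weight every hypothesis by how well it explains the measurement.
    weights = np.array([likelihood_fn(p, measurement) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # 3. Resample with replacement in proportion to the weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]

# Example usage with placeholder models:
# rng = np.random.default_rng(0)
# particles = rng.normal(size=(500, 2))
# particles = sir_step(particles, measurement=np.zeros(2), motion_noise=0.1,
#                      likelihood_fn=lambda p, z: np.exp(-np.sum((p - z) ** 2)), rng=rng)

For occluded targets, the MCMC-based variant mentioned in the abstract would instead sample the merged joint state with a Markov chain rather than weighting and resampling independent per-target particles, which is what makes sampling feasible in the higher-dimensional joint state space.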
Book chapters on the topic "Multi-target multi-camera tracking"
Benabdelkader, Chiraz, Philippe Burlina, and Larry Davis. "Single Camera Multiplexing for Multi-Target Tracking". In Multimedia Video-Based Surveillance Systems, 130–42. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4327-5_12.
Ristani, Ergys, Francesco Solera, Roger Zou, Rita Cucchiara, and Carlo Tomasi. "Performance Measures and a Data Set for Multi-target, Multi-camera Tracking". In Lecture Notes in Computer Science, 17–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48881-3_2.
Wang, Jingjing, and Nenghai Yu. "Multi-target Tracking via Max-Entropy Target Selection and Heterogeneous Camera Fusion". In Lecture Notes in Computer Science, 149–59. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24075-6_15.
Di Caterina, Gaetano, Trushali Doshi, John J. Soraghan, and Lykourgos Petropoulakis. "A Novel Decentralised System Architecture for Multi-camera Target Tracking". In Advanced Concepts for Intelligent Vision Systems, 92–104. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48680-2_9.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Multi-target multi-camera tracking"
Specker, Andreas, Daniel Stadler, Lucas Florin, and Jurgen Beyerer. "An Occlusion-aware Multi-target Multi-camera Tracking System". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00471.
Hussain, Muddsser, Rong Xie, Liang Zhang, Mehmood Nawaz, and Malik Asfandyar. "Multi-target tracking identification system under multi-camera surveillance system". In 2016 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2016. http://dx.doi.org/10.1109/pic.2016.7949516.
Ristani, Ergys, and Carlo Tomasi. "Features for Multi-target Multi-camera Tracking and Re-identification". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00632.
Chou, Yu-Sheng, Chien-Yao Wang, Ming-Chiao Chen, Shou-De Lin, and Hong-Yuan Mark Liao. "Dynamic Gallery for Real-Time Multi-Target Multi-Camera Tracking". In 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2019. http://dx.doi.org/10.1109/avss.2019.8909837.
"MULTI-CAMERA DETECTION AND MULTI-TARGET TRACKING - Traffic Surveillance Applications". In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2008. http://dx.doi.org/10.5220/0001085705850591.
Akinci, Umur, Ugur Halici, and Kemal Leblebicioglu. "Single camera multi-target tracking by fuzzy target-track association". In 2010 IEEE 18th Signal Processing and Communications Applications Conference (SIU). IEEE, 2010. http://dx.doi.org/10.1109/siu.2010.5653731.
Wang, Mingkun, Dianxi Shi, Naiyang Guan, Wei Yi, Tao Zhang, and Zunlin Fan. "Multi-Target Multi-Camera Tracking with Human Body Part Semantic Features". In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358029.
Zhang, Xindi, and Ebroul Izquierdo. "Real-Time Multi-Target Multi-Camera Tracking with Spatial-Temporal Information". In 2019 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2019. http://dx.doi.org/10.1109/vcip47243.2019.8965845.
Li, Peng, Jiabin Zhang, Zheng Zhu, Yanwei Li, Lu Jiang, and Guan Huang. "State-Aware Re-Identification Feature for Multi-Target Multi-Camera Tracking". In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00192.
Shim, Kyujin, Sungjoon Yoon, Kangwook Ko, and Changick Kim. "Multi-Target Multi-Camera Vehicle Tracking for City-Scale Traffic Management". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00473.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Multi-target multi-camera tracking"
Anderson, Robert J. Multi-target camera tracking, hand-off and display LDRD 158819 final report. Office of Scientific and Technical Information (OSTI), October 2014. http://dx.doi.org/10.2172/1323373.
Anderson, Robert J. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report. Office of Scientific and Technical Information (OSTI), October 2014. http://dx.doi.org/10.2172/1323947.