Selection of scholarly literature on the topic "Multi-target multi-camera tracking"

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multi-target multi-camera tracking".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Multi-target multi-camera tracking"

1

He, Yuhang, Xing Wei, Xiaopeng Hong, Weiwei Shi, and Yihong Gong. "Multi-Target Multi-Camera Tracking by Tracklet-to-Target Assignment". IEEE Transactions on Image Processing 29 (2020): 5191–205. http://dx.doi.org/10.1109/tip.2020.2980070.

2

Yoon, Kwangjin, Young-min Song, and Moongu Jeon. "Multiple hypothesis tracking algorithm for multi-target multi-camera tracking with disjoint views". IET Image Processing 12, no. 7 (July 1, 2018): 1175–84. http://dx.doi.org/10.1049/iet-ipr.2017.1244.

3

He, Li, Guoliang Liu, Guohui Tian, Jianhua Zhang, and Ze Ji. "Efficient Multi-View Multi-Target Tracking Using a Distributed Camera Network". IEEE Sensors Journal 20, no. 4 (February 15, 2020): 2056–63. http://dx.doi.org/10.1109/jsen.2019.2949385.

4

Wen, Longyin, Zhen Lei, Ming-Ching Chang, Honggang Qi, and Siwei Lyu. "Multi-Camera Multi-Target Tracking with Space-Time-View Hyper-graph". International Journal of Computer Vision 122, no. 2 (September 6, 2016): 313–33. http://dx.doi.org/10.1007/s11263-016-0943-0.

5

Luo, Xiaohui, Fuqing Wang, and Mingli Luo. "Collaborative target tracking in lopor with multi-camera". Optik 127, no. 23 (December 2016): 11588–98. http://dx.doi.org/10.1016/j.ijleo.2016.09.043.

6

Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping". Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.

7

Jiang, Ming Xin, Hong Yu Wang, and Chao Lin. "A Multi-Object Tracking Algorithm Based on Multi-Camera". Applied Mechanics and Materials 135–136 (October 2011): 70–75. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.70.

Annotation:
As a fundamental problem in computer vision, reliable tracking of multiple objects remains an open and challenging issue for both theoretical study and real applications. A novel multi-object tracking algorithm based on multiple cameras is proposed in this paper. We obtain foreground likelihood maps in each view by modeling the background with the codebook algorithm. The view-to-view homographies are computed from several landmarks on the chosen plane. We then recover the locations of the multiple targets on the chest-level plane and carry out the tracking task. The proposed algorithm does not require detecting the vanishing points of the cameras, which reduces its complexity and improves its accuracy. Experimental results show that the method is robust to occlusion and satisfies real-time tracking requirements.
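
An illustrative sketch of the plane-to-plane homography step described in this abstract, assuming OpenCV and NumPy; the landmark coordinates and the sample point are hypothetical placeholders, not values from the paper:

    import cv2
    import numpy as np

    # Corresponding landmarks on the chosen plane as seen in view A and view B (made-up values).
    landmarks_a = np.array([[100, 400], [520, 410], [110, 700], [530, 690]], dtype=np.float32)
    landmarks_b = np.array([[ 80, 380], [500, 395], [ 95, 680], [515, 675]], dtype=np.float32)

    # View-to-view homography induced by the common plane.
    H_ab, _ = cv2.findHomography(landmarks_a, landmarks_b, method=cv2.RANSAC)

    def project_points(points_a, H):
        """Map foreground points detected in view A onto view B via the plane homography."""
        pts = np.asarray(points_a, dtype=np.float32).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    # Example: express a target location observed in view A in view B's image coordinates.
    target_in_b = project_points([[300, 550]], H_ab)

Fusing such projections from several views onto a common reference plane is what allows the per-view foreground evidence to be combined into a single target location.
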
8

Castaldo, Francesco, and Francesco A. N. Palmieri. "Target tracking using factor graphs and multi-camera systems". IEEE Transactions on Aerospace and Electronic Systems 51, no. 3 (July 2015): 1950–60. http://dx.doi.org/10.1109/taes.2015.140087.

9

Bamrungthai, Pongsakon, and Viboon Sangveraphunsiri. "CU-Track: A Multi-Camera Framework for Real-Time Multi-Object Tracking". Applied Mechanics and Materials 415 (September 2013): 325–32. http://dx.doi.org/10.4028/www.scientific.net/amm.415.325.

Annotation:
This paper presents CU-Track, a multi-camera framework for real-time multi-object tracking. The developed framework comprises a processing unit, the target objects, and the multi-object tracking algorithm. A PC cluster has been developed as the processing unit of the framework to process data in real time. To set up the PC cluster, two PCs are connected using PCI interface cards so that memory can be shared between them, ensuring high-speed data transfer and low latency. A novel mechanism for PC-to-PC communication is proposed, realized by a dedicated software processing module called the Cluster Module. Six processing modules have been implemented to realize system operations such as camera calibration, camera synchronization, and 3D reconstruction of each target. Multiple spherical objects of the same size are used as the targets to be tracked; both active and passive configurations can be tracked by the system. An algorithm based on Kalman filtering and nearest-neighbor search is developed for multi-object tracking. Two applications have been implemented on the system, confirming the validity and effectiveness of the developed framework.
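
A rough illustration of a Kalman-filter-plus-nearest-neighbor tracking loop of the kind described above; this is not the CU-Track code, and the constant-velocity model, noise levels, and gating threshold are assumptions:

    import numpy as np

    dt = 1.0 / 30.0                                   # assumed frame interval
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)         # constant-velocity state transition
    H = np.hstack([np.eye(3), np.zeros((3, 3))])      # only 3D position is measured
    Q = 1e-3 * np.eye(6)                              # process noise (assumed)
    R = 1e-2 * np.eye(3)                              # measurement noise (assumed)

    def kf_predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def kf_update(x, P, z):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(6) - K @ H) @ P

    def associate(predicted_states, measurements, gate=0.5):
        """Greedy nearest-neighbor assignment of 3D measurements to predicted track positions."""
        pairs, free = [], set(range(len(measurements)))
        for i, x in enumerate(predicted_states):
            if not free:
                break
            j = min(free, key=lambda k: np.linalg.norm(measurements[k] - H @ x))
            if np.linalg.norm(measurements[j] - H @ x) < gate:
                pairs.append((i, j))
                free.discard(j)
        return pairs
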
10

Liu, Jian, Kuangrong Hao, Yongsheng Ding, Shiyu Yang, and Lei Gao. "Multi-State Self-Learning Template Library Updating Approach for Multi-Camera Human Tracking in Complex Scenes". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1755016. http://dx.doi.org/10.1142/s0218001417550163.

Annotation:
In multi-camera video tracking, the tracking scene and the appearance of the tracked targets can become complex, and current tracking methods use entirely different databases and evaluation criteria. Herein, for the first time to our knowledge, we present a universally applicable template library updating approach for multi-camera human tracking, called multi-state self-learning template library updating (RS-TLU), which can be applied in different multi-camera tracking algorithms. In RS-TLU, self-learning divides tracking results into three states, namely steady state, gradually changing state, and suddenly changing state, using the similarity of objects with historical templates and instantaneous templates, because every state requires a different decision strategy. Subsequently, the tracking results for each state are judged and learned using motion and occlusion information. Finally, the correct template is chosen from the robust template library. We investigate the effectiveness of the proposed method on three databases and 42 test videos, and count the numbers of false positives, false matches, and missed tracking targets. Experimental results demonstrate that, in comparison with state-of-the-art algorithms on 15 complex scenes, our RS-TLU approach effectively increases the number of correct target templates and reduces the numbers of similar templates and error templates in the template library.
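
A minimal sketch of the three-state decision rule this abstract describes; the similarity measure, thresholds, and update strategies below are illustrative assumptions, not the published RS-TLU parameters:

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def classify_state(feature, historical_template, instantaneous_template,
                       t_high=0.85, t_low=0.60):
        s_hist = cosine_similarity(feature, historical_template)
        s_inst = cosine_similarity(feature, instantaneous_template)
        if s_hist >= t_high and s_inst >= t_high:
            return "steady"            # appearance consistent with both templates
        if s_inst >= t_low:
            return "gradual_change"    # drifting slowly away from the history
        return "sudden_change"         # likely occlusion or an abrupt appearance change

    def update_library(library, feature, state):
        """Apply a different update strategy per state, in the spirit of the framework."""
        if state == "steady":
            library.append(feature)                          # safe to learn a new template
        elif state == "gradual_change":
            library[-1] = 0.5 * library[-1] + 0.5 * feature  # blend instead of replacing
        # "sudden_change": leave the library untouched until motion/occlusion checks pass
        return library
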

Dissertations on the topic "Multi-target multi-camera tracking"

1

Turesson, Eric. "Multi-camera Computer Vision for Object Tracking: A comparative study". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.

Annotation:
Background: Video surveillance is a growing area that can help deter crime, support investigations, or gather statistics, among other ways it can aid society. Its efficiency could be increased further by introducing tracking, more specifically tracking between cameras in a network. Automating this process could reduce the need for humans to monitor and review footage, since the system can track and inform the relevant people on its own. This has a wide array of uses, such as forensic investigation, crime alerting, or tracking down people who have disappeared. Objectives: We want to investigate the common setup of real-time multi-target multi-camera tracking (MTMCT) systems. Next, we want to investigate how the components in an MTMCT system affect each other and the complete system. Lastly, we want to see how image enhancement can affect the MTMCT. Methods: To achieve these objectives, we conducted a systematic literature review to gather information. Using that information, we implemented an MTMCT system and evaluated how the components interact in the complete system. Lastly, we implemented two image enhancement techniques to see how they affect the MTMCT. Results: MTMCT is most often constructed from a detector that discovers objects, a tracker that follows the objects within a single camera, and a re-identification method that ensures objects across cameras receive the same ID. The components have a considerable effect on each other and can both degrade and improve one another; for example, the quality of the bounding boxes affects the data that re-identification can extract. We found that the image enhancement we used did not introduce any significant improvement. Conclusions: The most common structure for MTMCT is detection, tracking, and re-identification. From our findings, all the components affect each other, but re-identification is the one most affected by the other components and by image enhancement. The two tested image enhancement techniques did not introduce enough improvement, but other image enhancement methods could be used to make the MTMCT perform better. The MTMCT system we constructed did not manage to reach real-time performance.
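
The detection, single-camera tracking, and re-identification structure identified in the thesis can be sketched as a pipeline skeleton; the component interfaces and method names below are hypothetical stand-ins, not the thesis implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        camera_id: int
        track_id: int
        features: list = field(default_factory=list)   # re-ID embeddings of the track's crops

    def run_mtmct(cameras, detector, tracker, reid_model, matcher):
        per_camera_tracks = []
        for cam in cameras:
            for frame in cam.frames():
                boxes = detector.detect(frame)                     # 1. detection
                tracks = tracker.update(cam.id, boxes)             # 2. single-camera tracking
                for t in tracks:
                    t.features.append(reid_model.embed(frame, t))  # 3. appearance embedding
            per_camera_tracks.extend(tracker.finished_tracks(cam.id))
        # 4. cross-camera association on the re-ID features assigns a global ID per person
        return matcher.link(per_camera_tracks)

As the thesis observes, the quality of the detector's bounding boxes directly limits what the re-identification stage can extract in step 3.
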
2

Aykin, Murat Deniz. "Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609798/index.pdf.

Annotation:
Multi-camera measurement systems are widely used to extract information about the 3D configuration or "state" of one or more real-world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system, which are either static or slowly varying. For a single camera, this consists of the internal parameters of the camera optics and construction, while for a multiple-camera system it also includes the geometric positioning of the individual cameras, namely the "external" parameters. Calibration is a necessary step before any actual state measurements can be made with the system. In this thesis, such a multi-camera state measurement system, and in particular the problem of procedurally effective and high-performance calibration of such a system, is considered. The thesis presents a novel calibration algorithm which uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters, which are assumed to be static or slowly varying with respect to the state. This results in a "super-state" vector. The EKF algorithm is used to recursively estimate this super-state, thus yielding estimates of the static camera parameters. It is demonstrated by both simulation studies and actual experiments that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions in which the same calibration images are reused. The repeated EKF (R-EKF) provides convergence with a limited number of image frames when the initial target state is accurately provided, while its repeated bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is that it provides a fast calibration procedure with no need for standard or custom-made calibration target plates covering the majority of the camera field of view. Human assistance is also minimized, since all frame data are processed automatically and assistance is limited to making the target throws. The speed of convergence and the accuracy of the results promise a field-applicable calibration procedure.
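
A sketch of the "super-state" augmentation described above, assuming a simple drag-free ballistic model and an illustrative size for the calibration parameter block; the dimensions and noise levels are not the thesis' actual values:

    import numpy as np

    n_target = 6           # target position and velocity
    n_calib = 12           # e.g. pose parameters of the cameras (assumed size)
    dt, g = 1.0 / 50.0, 9.81

    def predict_super_state(x):
        """Propagate the ballistic target part; the calibration part stays constant."""
        x = x.copy()
        pos, vel = x[:3], x[3:6]
        x[:3] = pos + dt * vel + 0.5 * dt**2 * np.array([0.0, 0.0, -g])
        x[3:6] = vel + dt * np.array([0.0, 0.0, -g])
        return x                                   # x[6:] (calibration) is left unchanged

    F = np.eye(n_target + n_calib)
    F[:3, 3:6] = dt * np.eye(3)                    # Jacobian of the prediction above
    Q = np.diag([1e-4] * n_target + [1e-9] * n_calib)   # near-zero noise keeps the parameters static

The EKF update then corrects both blocks at once from the camera measurements, which is how the static calibration parameters are estimated alongside the target state.
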
3

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Annotation:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from the more certain states obtained when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
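
One standard non-causal estimator of the kind compared against causal filtering here is the Rauch-Tung-Striebel smoother; the following compact sketch for a linear model is an illustration, not the thesis code:

    import numpy as np

    def rts_smoother(xs_filt, Ps_filt, xs_pred, Ps_pred, F):
        """Backward pass over Kalman-filter outputs: filtered and one-step-predicted moments."""
        n = len(xs_filt)
        xs_smooth, Ps_smooth = [None] * n, [None] * n
        xs_smooth[-1], Ps_smooth[-1] = xs_filt[-1], Ps_filt[-1]
        for k in range(n - 2, -1, -1):
            # Smoother gain uses the prediction made at time k for time k+1.
            C = Ps_filt[k] @ F.T @ np.linalg.inv(Ps_pred[k + 1])
            xs_smooth[k] = xs_filt[k] + C @ (xs_smooth[k + 1] - xs_pred[k + 1])
            Ps_smooth[k] = Ps_filt[k] + C @ (Ps_smooth[k + 1] - Ps_pred[k + 1]) @ C.T
        return xs_smooth, Ps_smooth

Because the backward pass starts from the (usually more certain) late states near the ego vehicle, the smoothed estimates improve most for targets at large distances, which matches the finding above.
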
4

Hsu, Shun-Hsiang, and 許舜翔. "Indoor Occupant Behavior Analysis with Multi-Target, Multi-Camera Tracking". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/26vnu7.

Annotation:
Master's thesis, National Taiwan University, Graduate Institute of Civil Engineering, ROC academic year 107.
As society develops, buildings and people's daily activities are inseparable, so the indoor environment has a great impact on quality of life, and research on building management systems has been devoted to enhancing energy efficiency and occupant comfort. With the increasing adoption of IoT, data analytics approaches have emerged that can be applied to better understand the indoor environment. In practice, however, adopting sensing technology is problematic due to the stochastic nature of occupant behavior and the large scale of the monitored area. A cost-effective and accurate method is therefore required to collect data on occupant behavior. This research aims to implement a re-identification system for multi-target, multi-camera tracking with surveillance cameras to obtain more reliable occupancy data. In recent years, tracking combined with deep learning techniques has shown better performance and more robustness to visual obstacles such as dim lighting or partial occlusion than traditional approaches. This advance in tracking creates the opportunity to develop an application for analyzing the behavior of building occupants. This research proposes a distributed system for tracking across non-overlapping cameras. First, multiple object tracking is performed under each camera, and the probe images of occupants provide appearance and location information. Second, feature vectors extracted from the images by a convolutional neural network are used to concatenate trajectory data from different cameras. Finally, the concatenated data are analyzed for the usage rate of spaces and their distribution across building levels. Moreover, abnormal situations can be detected and tracked across multiple cameras. With this analysis, the building manager can not only validate and revise the energy strategy but also enhance public safety and better handle emergency conditions.
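
A minimal sketch of the cross-camera concatenation step described above: tracklets from non-overlapping cameras are linked when their averaged CNN embeddings are close and the time gap between them is plausible. The field names, thresholds, and greedy matching are illustrative assumptions, not the thesis' implementation:

    import numpy as np

    def mean_embedding(features):
        v = np.mean(np.asarray(features), axis=0)
        return v / (np.linalg.norm(v) + 1e-12)

    def link_tracklets(tracklets_cam_a, tracklets_cam_b, max_gap_s=120.0, dist_threshold=0.35):
        """Greedy appearance matching with a temporal gate between two non-overlapping cameras."""
        links = []
        for ta in tracklets_cam_a:
            ea = mean_embedding(ta["features"])
            best, best_d = None, dist_threshold
            for tb in tracklets_cam_b:
                gap = tb["start_time"] - ta["end_time"]
                if not (0.0 <= gap <= max_gap_s):
                    continue                       # the walk between cameras must fit the gap
                d = 1.0 - float(ea @ mean_embedding(tb["features"]))   # cosine distance
                if d < best_d:
                    best, best_d = tb, d
            if best is not None:
                links.append((ta["id"], best["id"]))
        return links

The concatenated trajectories can then be aggregated per space and per floor to estimate the usage rates mentioned above.
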
5

Chuan-Wen, Lai. "Multi-Target Visual Tracking by Bayesian Filtering with Occlusion Handling on an Active Camera Platform". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2407200604203700.

6

Lai, Chuan-Wen, and 賴傳文. "Multi-Target Visual Tracking by Bayesian Filtering with Occlusion Handling on an Active Camera Platform". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/32024337980128245706.

Annotation:
Master's thesis, National Taiwan University, Graduate Institute of Electrical Engineering, ROC academic year 94.
In visual tracking, multi-target tracking (MTT) systems face the problem that moving targets may unavoidably occlude each other, making the measurement processes of the targets interdependent. We construct a tracking system that considers the joint image likelihood to recognize targets even when their appearances are identical. Multiple hypotheses over the targets' depth ordering are also used for occlusion handling. In order to improve system performance, we extend the sampling importance resampling (SIR) particle filter with separate importance functions for tracking each target and for detection. Furthermore, when targets occlude one another, their state vectors are merged into a joint state vector, and an MCMC (Markov Chain Monte Carlo) based particle filter is proposed for efficient sampling in the high-dimensional joint state during occlusion. Finally, a control strategy for the active camera is proposed to move the camera so that the surveillance area contains the most information. The overall performance is validated in experiments and shows robust, real-time tracking.
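
For context, a bare-bones sampling-importance-resampling (SIR) particle-filter step of the kind the thesis extends is sketched below; the motion and likelihood models are placeholders, and the joint-state and MCMC extensions used during occlusion are not shown:

    import numpy as np

    def sir_step(particles, weights, motion_model, likelihood, measurement, rng):
        # 1. Propagate every particle through the (stochastic) motion model.
        particles = np.array([motion_model(p, rng) for p in particles])
        # 2. Re-weight by the image likelihood of the new measurement.
        weights = weights * np.array([likelihood(p, measurement) for p in particles])
        weights = weights / (weights.sum() + 1e-300)
        # 3. Resample when the effective sample size collapses.
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < 0.5 * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights
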

Book chapters on the topic "Multi-target multi-camera tracking"

1

Benabdelkader, Chiraz, Philippe Burlina, and Larry Davis. "Single Camera Multiplexing for Multi-Target Tracking". In Multimedia Video-Based Surveillance Systems, 130–42. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4327-5_12.

2

Ristani, Ergys, Francesco Solera, Roger Zou, Rita Cucchiara, and Carlo Tomasi. "Performance Measures and a Data Set for Multi-target, Multi-camera Tracking". In Lecture Notes in Computer Science, 17–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48881-3_2.

3

Wang, Jingjing, and Nenghai Yu. "Multi-target Tracking via Max-Entropy Target Selection and Heterogeneous Camera Fusion". In Lecture Notes in Computer Science, 149–59. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24075-6_15.

4

Di Caterina, Gaetano, Trushali Doshi, John J. Soraghan, and Lykourgos Petropoulakis. "A Novel Decentralised System Architecture for Multi-camera Target Tracking". In Advanced Concepts for Intelligent Vision Systems, 92–104. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48680-2_9.


Conference papers on the topic "Multi-target multi-camera tracking"

1

Specker, Andreas, Daniel Stadler, Lucas Florin, and Jürgen Beyerer. "An Occlusion-aware Multi-target Multi-camera Tracking System". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00471.

2

Hussain, Muddsser, Rong Xie, Liang Zhang, Mehmood Nawaz, and Malik Asfandyar. "Multi-target tracking identification system under multi-camera surveillance system". In 2016 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2016. http://dx.doi.org/10.1109/pic.2016.7949516.

3

Ristani, Ergys, and Carlo Tomasi. "Features for Multi-target Multi-camera Tracking and Re-identification". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00632.

4

Chou, Yu-Sheng, Chien-Yao Wang, Ming-Chiao Chen, Shou-De Lin, and Hong-Yuan Mark Liao. "Dynamic Gallery for Real-Time Multi-Target Multi-Camera Tracking". In 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2019. http://dx.doi.org/10.1109/avss.2019.8909837.

5

"MULTI-CAMERA DETECTION AND MULTI-TARGET TRACKING - Traffic Surveillance Applications". In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2008. http://dx.doi.org/10.5220/0001085705850591.

6

Akinci, Umur, Ugur Halici, and Kemal Leblebicioglu. "Single camera multi-target tracking by fuzzy target-track association". In 2010 IEEE 18th Signal Processing and Communications Applications Conference (SIU). IEEE, 2010. http://dx.doi.org/10.1109/siu.2010.5653731.

7

Wang, Mingkun, Dianxi Shi, Naiyang Guan, Wei Yi, Tao Zhang, and Zunlin Fan. "Multi-Target Multi-Camera Tracking with Human Body Part Semantic Features". In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358029.

8

Zhang, Xindi, and Ebroul Izquierdo. "Real-Time Multi-Target Multi-Camera Tracking with Spatial-Temporal Information". In 2019 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2019. http://dx.doi.org/10.1109/vcip47243.2019.8965845.

9

Li, Peng, Jiabin Zhang, Zheng Zhu, Yanwei Li, Lu Jiang, and Guan Huang. "State-Aware Re-Identification Feature for Multi-Target Multi-Camera Tracking". In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00192.

10

Shim, Kyujin, Sungjoon Yoon, Kangwook Ko, and Changick Kim. "Multi-Target Multi-Camera Vehicle Tracking for City-Scale Traffic Management". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00473.


Reports by organizations on the topic "Multi-target multi-camera tracking"

1

Anderson, Robert J. Multi-target camera tracking, hand-off and display LDRD 158819 final report. Office of Scientific and Technical Information (OSTI), October 2014. http://dx.doi.org/10.2172/1323373.

2

Anderson, Robert J. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report. Office of Scientific and Technical Information (OSTI), October 2014. http://dx.doi.org/10.2172/1323947.
