To view the other types of publications on this topic, follow this link: Multi-target multi-camera tracking.

Journal articles on the topic "Multi-target multi-camera tracking"

Familiarize yourself with the top 50 journal articles for research on the topic "Multi-target multi-camera tracking".

Next to every work in the list, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

He, Yuhang, Xing Wei, Xiaopeng Hong, Weiwei Shi, and Yihong Gong. "Multi-Target Multi-Camera Tracking by Tracklet-to-Target Assignment". IEEE Transactions on Image Processing 29 (2020): 5191–205. http://dx.doi.org/10.1109/tip.2020.2980070.
2

Yoon, Kwangjin, Young-min Song, and Moongu Jeon. "Multiple hypothesis tracking algorithm for multi-target multi-camera tracking with disjoint views". IET Image Processing 12, no. 7 (July 1, 2018): 1175–84. http://dx.doi.org/10.1049/iet-ipr.2017.1244.
3

He, Li, Guoliang Liu, Guohui Tian, Jianhua Zhang, and Ze Ji. "Efficient Multi-View Multi-Target Tracking Using a Distributed Camera Network". IEEE Sensors Journal 20, no. 4 (February 15, 2020): 2056–63. http://dx.doi.org/10.1109/jsen.2019.2949385.
4

Wen, Longyin, Zhen Lei, Ming-Ching Chang, Honggang Qi, and Siwei Lyu. "Multi-Camera Multi-Target Tracking with Space-Time-View Hyper-graph". International Journal of Computer Vision 122, no. 2 (September 6, 2016): 313–33. http://dx.doi.org/10.1007/s11263-016-0943-0.
5

Luo, Xiaohui, Fuqing Wang, and Mingli Luo. "Collaborative target tracking in lopor with multi-camera". Optik 127, no. 23 (December 2016): 11588–98. http://dx.doi.org/10.1016/j.ijleo.2016.09.043.
6

Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping". Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.
7

Jiang, Ming Xin, Hong Yu Wang, and Chao Lin. "A Multi-Object Tracking Algorithm Based on Multi-Camera". Applied Mechanics and Materials 135–136 (October 2011): 70–75. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.70.

Abstract:
As a basic aspect of computer vision, reliable tracking of multiple objects is still an open and challenging issue for both theoretical studies and real applications. A novel multi-object tracking algorithm based on multiple cameras is proposed in this paper. We obtain the foreground likelihood maps in each view by modeling the background using the codebook algorithm. The view-to-view homographies are computed using several landmarks on the chosen plane. Then, we obtain the location information of the multiple targets at the chest layer and realize the tracking task. The proposed algorithm does not require detecting the vanishing points of the cameras, which reduces the complexity and improves the accuracy of the algorithm. The experimental results show that our method is robust to occlusion and can satisfy real-time tracking requirements.
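
The plane-induced homography transfer described above is straightforward to sketch. The following minimal Python fragment (an illustration, not the authors' code) assumes four coplanar landmark correspondences between two views, estimates the view-to-view homography with OpenCV, and transfers a target's ground-plane position from one camera to the other; all coordinates are made-up placeholders.

```python
import cv2
import numpy as np

# Pixel positions of four coplanar landmarks seen in both views (placeholders).
pts_a = np.array([[100, 400], [520, 390], [540, 120], [110, 130]], dtype=np.float32)
pts_b = np.array([[80, 420], [500, 410], [530, 100], [95, 115]], dtype=np.float32)

# 3x3 homography induced by the shared plane.
H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC)

# Transfer a detected target location from view A into view B.
target_a = np.array([[[300.0, 250.0]]], dtype=np.float32)  # shape (1, 1, 2)
target_b = cv2.perspectiveTransform(target_a, H)
print("target in view B:", target_b.ravel())
```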
8

Castaldo, Francesco, and Francesco A. N. Palmieri. "Target tracking using factor graphs and multi-camera systems". IEEE Transactions on Aerospace and Electronic Systems 51, no. 3 (July 2015): 1950–60. http://dx.doi.org/10.1109/taes.2015.140087.
9

Bamrungthai, Pongsakon, and Viboon Sangveraphunsiri. "CU-Track: A Multi-Camera Framework for Real-Time Multi-Object Tracking". Applied Mechanics and Materials 415 (September 2013): 325–32. http://dx.doi.org/10.4028/www.scientific.net/amm.415.325.

Abstract:
This paper presents CU-Track, a multi-camera framework for real-time multi-object tracking. The developed framework includes a processing unit, the target object, and the multi-object tracking algorithm. A PC cluster has been developed as the processing unit of the framework to process data in real time. To set up the PC cluster, two PCs are connected by PCI interface cards so that memory can be shared between them, ensuring high-speed data transfer and low latency. A novel mechanism for PC-to-PC communication is proposed, realized by a dedicated software processing module called the Cluster Module. Six processing modules have been implemented to realize system operations such as camera calibration, camera synchronization and 3D reconstruction of each target. Multiple spherical objects of the same size are used as the targets to be tracked. Two configurations, active and passive, can be used for tracking by the system. The algorithm, based on a Kalman filter and nearest neighbor searching, is developed for multi-object tracking. Two applications have been implemented on the system, which confirm the validity and effectiveness of the developed framework.
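
The algorithmic core named at the end of the abstract, a Kalman filter combined with nearest neighbor search, can be sketched generically as follows. This is not the CU-Track implementation; the constant-velocity model, noise levels and gating threshold are assumptions chosen for illustration.

```python
import numpy as np

DT = 1.0 / 30.0                             # assumed frame interval
F = np.eye(6); F[:3, 3:] = DT * np.eye(3)   # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])
Q = 1e-3 * np.eye(6)                        # process noise (assumed)
R = 1e-2 * np.eye(3)                        # measurement noise (assumed)

class Track:
    def __init__(self, z):
        self.x = np.hstack([z, np.zeros(3)])
        self.P = np.eye(6)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        return H @ self.x                   # predicted measurement

    def update(self, z):
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

def associate(tracks, detections, gate=0.5):
    """Greedy nearest-neighbor assignment of 3D detections to predicted tracks."""
    preds = [t.predict() for t in tracks]
    for z in detections:
        if not preds:
            break
        d = [np.linalg.norm(z - p) for p in preds]
        i = int(np.argmin(d))
        if d[i] < gate:
            tracks[i].update(z)
            preds[i] = np.full(3, np.inf)   # each track consumes one detection
```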
10

Liu, Jian, Kuangrong Hao, Yongsheng Ding, Shiyu Yang, and Lei Gao. "Multi-State Self-Learning Template Library Updating Approach for Multi-Camera Human Tracking in Complex Scenes". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1755016. http://dx.doi.org/10.1142/s0218001417550163.

Abstract:
In multi-camera video tracking, the tracking scene and tracking-target appearance can become complex, and current tracking methods use entirely different databases and evaluation criteria. Herein, for the first time to our knowledge, we present a universally applicable template library updating approach for multi-camera human tracking called multi-state self-learning template library updating (RS-TLU), which can be applied in different multi-camera tracking algorithms. In RS-TLU, self-learning divides tracking results into three states, namely steady state, gradually changing state, and suddenly changing state, by using the similarity of objects with historical templates and instantaneous templates because every state requires a different decision strategy. Subsequently, the tracking results for each state are judged and learned with motion and occlusion information. Finally, the correct template is chosen in the robust template library. We investigate the effectiveness of the proposed method using three databases and 42 test videos, and calculate the number of false positives, false matches, and missing tracking targets. Experimental results demonstrate that, in comparison with the state-of-the-art algorithms for 15 complex scenes, our RS-TLU approach effectively improves the number of correct target templates and reduces the number of similar templates and error templates in the template library.
11

Li, Jing, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, and Yizhou Wang. "Pose-Assisted Multi-Camera Collaboration for Active Object Tracking". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 759–66. http://dx.doi.org/10.1609/aaai.v34i01.5419.

Abstract:
Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, there are a number of challenges when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images; the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which of the two controllers' actions to take according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available on our website https://sites.google.com/view/pose-assisted-collaboration.
12

Liu Hean, and Pablor. "Pedestrian Target Detection and Counting Tracking Based on Multi-camera". International Journal of Advancements in Computing Technology 5, no. 7 (April 15, 2013): 1194–202. http://dx.doi.org/10.4156/ijact.vol5.issue7.146.
13

Abdullah, Hasanen S., and Sana A. Jabber. "Faces Tracking from Multi-Surveillance Camera (FTMSC)". Muthanna Journal of Pure Science 4, no. 2 (October 19, 2017): 23–32. http://dx.doi.org/10.52113/2/04.02.2017/23-32.

Abstract:
"The development of a robust and integrated multi-camera surveillance device is an important requirement to ensure public safety and security. Being able to re-identify and track one or more targets in different scenes with surveillance cameras. That remains an important and difficult problem due to clogging, significant change of views, and lighting across cameras. In this paper, traditional surveillance systems developed and supported by intelligent techniques. That system have ability to performance the parallel processing of all cameras to track peoples in the different scenes (places). In addition, show information about authorized people appearing on surveillance cameras and issue a warning whistle to alert security men if any unauthorized person appears. We used Viola and Jones approach to detected face, and then classifying the target face as one of the authorized faces or not by using Local Binary Patterns (LBP)."
14

Hsu, Hung-Min, Jiarui Cai, Yizhou Wang, Jenq-Neng Hwang, and Kwang-Ju Kim. "Multi-Target Multi-Camera Tracking of Vehicles Using Metadata-Aided Re-ID and Trajectory-Based Camera Link Model". IEEE Transactions on Image Processing 30 (2021): 5198–210. http://dx.doi.org/10.1109/tip.2021.3078124.
15

Yi, Chunlei, Kunfan Zhang, and Nengling Peng. "A multi-sensor fusion and object tracking algorithm for self-driving vehicles". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 9 (August 2019): 2293–300. http://dx.doi.org/10.1177/0954407019867492.

Abstract:
Vehicles need to detect threats on the road, anticipate emerging dangerous driving situations and take proactive actions for collision avoidance. Therefore, studies on methods of target detection and recognition are of practical value to a self-driving system. However, a single sensor has its weaknesses, such as the poor weather adaptability of lidar and camera. In this article, we propose a novel spatial calibration method for multi-sensor systems that utilizes rotation and translation of the coordinate system. The validity of the proposed spatial calibration method is tested through comparisons with calibrated data. In addition, a target-level multi-sensor fusion and object tracking algorithm to detect and recognize targets is tested. The sensors comprise lidar, radar and camera. The multi-sensor fusion and object tracking algorithm takes advantage of the various sensors, such as target location from lidar, target velocity from radar and target type from camera. Moreover, multi-sensor fusion and object tracking can achieve information redundancy and increase environmental adaptability. Compared with the results of a single sensor, this new approach is verified to be accurate in location, velocity and recognition on real data.
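
The spatial calibration the authors describe boils down to expressing one sensor's points in another sensor's frame through a rotation and a translation. A tiny sketch, with made-up extrinsic values rather than the paper's calibration:

```python
import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(np.deg2rad(2.0))             # assumed lidar-to-camera rotation
t = np.array([0.10, 0.0, -0.25])       # assumed lever arm in metres

p_lidar = np.array([12.4, -1.8, 0.6])  # target position in the lidar frame
p_camera = R @ p_lidar + t             # the same target in the camera frame
print(p_camera)
```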
16

Amudha, J., and P. Arpita. "Multi-Camera Activation Scheme for Target Tracking with Dynamic Active Camera Group and Virtual Grid-Based Target Recovery". Procedia Computer Science 58 (2015): 241–48. http://dx.doi.org/10.1016/j.procs.2015.08.065.
17

Nikodem, Maciej, Mariusz Słabicki, Tomasz Surmacz, Paweł Mrówka, and Cezary Dołęga. "Multi-Camera Vehicle Tracking Using Edge Computing and Low-Power Communication". Sensors 20, no. 11 (June 11, 2020): 3334. http://dx.doi.org/10.3390/s20113334.

Abstract:
Typical approaches to visual vehicle tracking across a large area require several cameras and complex algorithms to detect, identify and track the vehicle route. Due to memory requirements, computational complexity and hardware constraints, the video images are transmitted to a dedicated workstation equipped with powerful graphics processing units. However, this requires large volumes of data to be transmitted and may raise privacy issues. This paper presents dedicated deep-learning detection and tracking algorithms that can be run directly on the camera's embedded system. This method significantly reduces the stream of data from the cameras, reduces the required communication bandwidth and expands the range of usable communication technologies. Consequently, it allows short-range radio communication to transmit vehicle-related information directly between the cameras, and implements the multi-camera tracking directly in the cameras. The proposed solution includes detection and tracking algorithms, and a dedicated low-power short-range communication for multi-target multi-camera tracking systems that can be applied in parking and intersection scenarios. System components were evaluated in various scenarios including different environmental and weather conditions.
18

Wang Xiaojun. "The Application of Multi-camera Multi-target Tracking System in Sports Venues Monitoring based on Intelligent Method". International Journal of Digital Content Technology and its Applications 6, no. 14 (August 31, 2012): 274–81. http://dx.doi.org/10.4156/jdcta.vol6.issue14.34.
19

Kemsaram, Narsimlu, Venkata Rajini Kanth Thatiparti, Devendra Rao Guntupalli, and Anil Kuvvarapu. "Design and development of an on-board autonomous visual tracking system for unmanned aerial vehicles". Aviation 21, no. 3 (October 5, 2017): 83–91. http://dx.doi.org/10.3846/16487788.2017.1378265.

Abstract:
This paper proposes the design and development of an on-board autonomous visual tracking system (AVTS) for unmanned aerial vehicles (UAV). A prototype of the proposed system has been implemented in MATLAB/ Simulink for simulation purposes. The proposed system contains GPS/INS sensors, a gimbaled camera, a multi-level autonomous visual tracking algorithm, a ground stationary target (GST) or ground moving target (GMT) state estimator, a camera control algorithm, a UAV guidance algorithm, and an autopilot. The on-board multi-level autonomous visual tracking algorithm acquires the video frames from the on-board camera and calculates the GMT pixel position in the video frame. The on-board GMT state estimator receives the GMT pixel position from the multi-level autonomous visual tracking algorithm and estimates the current position and velocity of the GMT with respect to the UAV. The on-board non-linear UAV guidance law computes the UAV heading velocity rates and sends them to the autopilot to steer the UAV in the desired path. The on-board camera control law computes the control command and sends it to the camera's gimbal controller to keep the GMT in the camera's field of view. The UAV guidance law and camera control law have been integrated for continuous tracking of the GMT. The on-board autopilot is used for controlling the UAV trajectory. The simulation of the proposed system was tested with a flight simulator and the UAV's reaction to the GMT was observed. The simulated results prove that the proposed system tracks a GST or GMT effectively.
20

Chen, Yanming, Qingjie Zhao, Zhulin An, Peng Lv, and Liujun Zhao. "Distributed Multi-Target Tracking Based on the K-MTSCF Algorithm in Camera Networks". IEEE Sensors Journal 16, no. 13 (July 2016): 5481–90. http://dx.doi.org/10.1109/jsen.2016.2565263.
21

Guo, Xiaoxiao, Yuansheng Liu, Qixue Zhong, and Mengna Chai. "Research on Moving Target Tracking Algorithm Based on Lidar and Visual Fusion". Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 5 (September 20, 2018): 593–601. http://dx.doi.org/10.20965/jaciii.2018.p0593.

Abstract:
Multi-sensor fusion and target tracking are two key technologies for the environmental awareness system of autonomous vehicles. In this paper, a moving target tracking method based on the fusion of lidar and a binocular camera is proposed. Firstly, the position information obtained by the two types of sensors is fused at the decision level using an adaptive weighting algorithm, and then the Joint Probabilistic Data Association (JPDA) algorithm is applied to the fusion result to achieve multi-target tracking. Tested on a curve on campus and compared with the Extended Kalman Filter (EKF) algorithm, the experimental results show that this algorithm can effectively overcome the limitations of a single sensor and track more accurately.
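
One common reading of decision-level adaptive weighting is to weight each sensor's position estimate by the inverse of its measurement variance; the sketch below follows that assumption and is not taken from the paper.

```python
import numpy as np

def fuse(z_lidar, var_lidar, z_camera, var_camera):
    """Inverse-variance weighted fusion of two position estimates."""
    w_l = 1.0 / var_lidar
    w_c = 1.0 / var_camera
    z = (w_l * z_lidar + w_c * z_camera) / (w_l + w_c)
    var = 1.0 / (w_l + w_c)   # fused variance is smaller than either input
    return z, var

z, var = fuse(np.array([10.2, 3.1]), 0.04, np.array([10.5, 3.0]), 0.25)
print(z, var)   # the fused estimate sits closer to the lower-variance lidar
```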
22

Wang, Jiajia, and Guangming Li. "Study on Bridge Displacement Monitoring Algorithms Based on Multi-Targets Tracking". Future Internet 12, no. 1 (January 8, 2020): 9. http://dx.doi.org/10.3390/fi12010009.

Abstract:
Bridge displacement measurement is an important area of bridge health monitoring, as it can directly reflect whether the deformation of the bridge structure exceeds its safety allowance. Target tracking and Digital Image Correlation (DIC) are two fast-developing and well-known Digital Image Processing (DIP) methods for non-contact bridge displacement monitoring. For the former, the cost of erecting detection equipment is too large for long-span bridges that need to locate many targets, because it tracks only one target per camera; the latter is not suitable for remote detection because it imposes very demanding detection conditions. After reviewing the evolution of bridge displacement monitoring, this paper proposes a bridge displacement monitoring algorithm based on multi-target tracking. The algorithm takes full account of practical application and achieves accuracy, robustness, real-time operation, low cost, simplicity, and self-adaptability, which makes it well suited in theory to bridge displacement monitoring.
23

Yan, Mi, Yuejin Zhao, Ming Liu, Lingqin Kong, and Liquan Dong. "High-speed moving target tracking of multi-camera system with overlapped field of view". Signal, Image and Video Processing 15, no. 7 (February 14, 2021): 1369–77. http://dx.doi.org/10.1007/s11760-021-01867-9.
24

Straw, Andrew D., Kristin Branson, Titus R. Neumann, and Michael H. Dickinson. "Multi-camera real-time three-dimensional tracking of multiple flying animals". Journal of The Royal Society Interface 8, no. 56 (July 14, 2010): 395–409. http://dx.doi.org/10.1098/rsif.2010.0230.

Abstract:
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals.
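
The geometric core of such a multi-camera 3D tracker is triangulation of matched observations from calibrated views. A minimal two-view sketch with toy projection matrices (a real rig such as the 11-camera system would use its calibrated intrinsics and extrinsics):

```python
import cv2
import numpy as np

# 3x4 projection matrices P = K [R | t] for two cameras (toy calibration).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# Matched observations of the same animal in both views (toy normalized coords).
x1 = np.array([[0.31], [0.12]], dtype=np.float64)
x2 = np.array([[0.11], [0.12]], dtype=np.float64)

X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                # Euclidean 3D position
print("triangulated point:", X)
```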
25

Lourakis, M., M. Pateraki, I. A. Karolos, C. Pikridas, and P. Patias. "POSE ESTIMATION OF A MOVING CAMERA WITH LOW-COST, MULTI-GNSS DEVICES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 55–62. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-55-2020.

Abstract:
Without additional prior information, the pose of a camera estimated with computer vision techniques is expressed in a local coordinate frame attached to the camera’s initial location. Albeit sufficient in many cases, such an arbitrary representation is not convenient for employment in certain applications and has to be transformed to a coordinate system external to the camera before further use. Assuming a camera that is firmly mounted on a moving platform, this paper describes a method for continuously tracking the pose of that camera in a projected coordinate system. By combining exterior orientation from a known target with incremental pose changes inferred from accurate multi-GNSS positioning, the full 6 DoF pose of the camera is updated with low processing overhead and without requiring the continuous visual tracking of ground control points. Experimental results of applying the proposed method to a moving vehicle and a mobile port crane are reported, demonstrating its efficacy and potential.
26

Gu, Yuantao, Yilun Chen, Zhengwei Jiang, and Kun Tang. "PARTICLE FILTER BASED MULTI-CAMERA INTEGRATION FOR FACE 3D-POSE TRACKING". International Journal of Wavelets, Multiresolution and Information Processing 04, no. 04 (December 2006): 677–90. http://dx.doi.org/10.1142/s0219691306001531.

Abstract:
Face tracking has many visual applications such as human-computer interfaces, video communications and surveillance. Color-based particle trackers have been proved robust and versatile for a modest computational cost. In this paper, a probabilistic method for integrating multi-camera information is introduced to track human face 3D-pose variations. The proposed method fuses information coming from several calibrated cameras via one color-based particle filter. The algorithm relies on the following novelties. First, the human head other than face is defined as the target of our algorithm. To distinguish the face region and hair region, a dual-color-ball is utilized to model the human head in 3D space. Second, to enhance the robustness to illumination variety, the Fisher criterion is applied to measure the separability of the face region and the hair region on the color histogram. Consequently, the color distribution template can be adapted at the proper time. Finally, the algorithm is performed based on the distributed framework, therefore the computation is implemented equally by all client processors. To demonstrate the performance of the proposed algorithm, several scenarios of visual tracking are tested in an office environment with three to four calibrated cameras. Experiments show that accurate tracking results are achieved, even in some difficult scenarios, such as the complete occlusion and the temptation of anything with skin color. Furthermore, the additional information of our track results, including the head posture and the face orientation schemes, can be used for further work such as face recognition and eye gaze estimation, which is also explained by elaborated designed experiments.
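
For orientation, here is a deliberately simplified, single-camera, 2D colour-histogram particle filter; the paper's tracker extends this idea to several calibrated cameras and a dual-colour-ball 3D head model. Everything below (random-walk dynamics, patch size, weighting constant) is an assumption for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def hist(patch, bins=16):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    return h

def bhattacharyya(p, q):
    return np.sqrt(np.clip(p * q, 0, None)).sum()

def track_step(frame, particles, weights, ref_hist, patch=15, sigma=4.0):
    # 1) propagate particles with random-walk dynamics
    particles = particles + rng.normal(0, sigma, particles.shape)
    # 2) weight each particle by colour similarity around its position
    for i, (x, y) in enumerate(particles.astype(int)):
        x = np.clip(x, patch, frame.shape[1] - patch - 1)
        y = np.clip(y, patch, frame.shape[0] - patch - 1)
        region = frame[y - patch:y + patch, x - patch:x + patch]
        weights[i] = np.exp(5.0 * bhattacharyya(hist(region), ref_hist))
    weights /= weights.sum()
    # 3) systematic resampling, then report the mean state as the estimate
    u = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), u)
    particles = particles[idx]
    weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights, particles.mean(axis=0)
```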
27

Chen, Ting, Andrea Pennisi, Zhi Li, Yanning Zhang, and Hichem Sahli. "A Hierarchical Association Framework for Multi-Object Tracking in Airborne Videos". Remote Sensing 10, no. 9 (August 23, 2018): 1347. http://dx.doi.org/10.3390/rs10091347.

Abstract:
Multi-Object Tracking (MOT) in airborne videos is a challenging problem due to the uncertain airborne vehicle motion, vibrations of the mounted camera, unreliable detections, changes of size, appearance and motion of the moving objects and occlusions caused by the interaction between moving and static objects in the scene. To deal with these problems, this work proposes a four-stage hierarchical association framework for multiple object tracking in airborne video. The proposed framework combines Data Association-based Tracking (DAT) methods and target tracking using a compressive tracking approach, to robustly track objects in complex airborne surveillance scenes. In each association stage, different sets of tracklets and detections are associated to efficiently handle local tracklet generation, local trajectory construction, global drifting tracklet correction and global fragmented tracklet linking. Experiments with challenging airborne videos show significant tracking improvement compared to existing state-of-the-art methods.
28

Robson, Stuart, Lindsay MacDonald, Stephen Kyle, Jan Boehm, and Mark Shortis. "Optimised multi-camera systems for dimensional control in factory environments". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 232, no. 10 (August 5, 2016): 1707–18. http://dx.doi.org/10.1177/0954405416654936.

Abstract:
As part of the United Kingdom’s Light Controlled Factory project, University College London aims to develop a large-scale multi-camera system for dimensional control tasks in manufacturing, such as part assembly and tracking. Accuracy requirements in manufacturing are demanding, and improvements in the modelling and analysis of both camera imaging and the measurement environment are essential. A major aspect to improved camera modelling is the use of monochromatic imaging of retro-reflective target points, together with a camera model designed for a particular illumination wavelength. A small-scale system for laboratory testing has been constructed using eight low-cost monochrome cameras with C-mount lenses on a rigid metal framework. Red, green and blue monochromatic light-emitting diode ring illumination has been tested, with a broadband white illumination for comparison. Potentially, accuracy may be further enhanced by the reduction in refraction errors caused by a non-homogeneous factory environment, typically manifest in varying temperatures in the workspace. A refraction modelling tool under development in the parallel European Union LUMINAR project is being used to simulate refraction in order to test methods which may be able to reduce or eliminate this effect in practice.
29

Chen, Bin, Xiaofei Pei, and Zhenfu Chen. "Research on Target Detection Based on Distributed Track Fusion for Intelligent Vehicles". Sensors 20, no. 1 (December 20, 2019): 56. http://dx.doi.org/10.3390/s20010056.

Abstract:
Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection have various defects at the perception level, which can be compensated for by sensor fusion technology. In this paper, the application of sensor fusion technology to intelligent vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. The target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurement information output by the two sensors enters the tracking processing modules and, after processing by a multi-target tracking algorithm, local tracks are generated and transmitted to the fusion center module. In the fusion center module, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors' local tracks is completed, and a non-reset federated filter is used to estimate the state of the fused tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and that the fused track state estimation method performs excellently.
30

Sim, T. P., G. S. Hong, and K. B. Lim. "Modified Smith Predictor with DeMenthon-Horaud pose estimation algorithm for 3D dynamic visual servoing". Robotica 20, no. 6 (November 2002): 615–24. http://dx.doi.org/10.1017/s0263574702004356.

Abstract:
This paper presents an attractive position-based visual servoing approach for camera-in-hand robotic systems. The major contribution of this work is in devising an elegant and pragmatic approach for 3D visual tracking, which yielded good target tracking performance. It differs from the other known techniques in its approach to image interpretation and the introduction of a Smith-like predictor control structure to overcome the inherently multi-rate, time-delayed nature of the visual servoing system. A complete description is given of the proposed MSP-DH visual servoing system. Experiments on target tracking performance on XY planar motion using an AdeptOne robotic system are presented to illustrate the controller performance. In addition, the experimental results have clearly shown the capability of the MSP-DH visual servoing system in performing 3D dynamic visual servoing.
31

Chen, Yanming, and Qingjie Zhao. "A Novel Square-Root Cubature Information Weighted Consensus Filter Algorithm for Multi-Target Tracking in Distributed Camera Networks". Sensors 15, no. 5 (May 5, 2015): 10526–46. http://dx.doi.org/10.3390/s150510526.
32

Guo, Ling. "Ground-Constrained Motion Target Tracking for Monocular Sequence Images". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 12 (August 27, 2018): 1850044. http://dx.doi.org/10.1142/s0218001418500441.

Abstract:
For the detection of a moving target position in video monitoring images, the existing locating tracking systems mainly adopt binocular or structured light stereoscopic technology, which has drawbacks such as system design complexity and slow detection speed. In light of these limitations, a tracking method for monocular sequence moving targets is presented, with the introduction of ground constraints into monocular visual monitoring; the principle and process of the method are introduced in detail in this paper. This method uses camera installation information and geometric imaging principles combined with nonlinear compensation to derive the calculation formula for the actual position of the ground moving target in monocular asymmetric nonlinear imaging. The footprint location of a walker is searched in the sequence imaging of a monitoring test platform that is built indoors. Because of the shadow of the walker in the image, the multi-threshold OTSU method based on test target background subtraction is used here to segment the images. The experimental results verify the effectiveness of the proposed method.
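
The ground constraint exploited above can be written in a few lines: with known camera height and tilt, a pixel's viewing ray is intersected with the floor plane z = 0 to recover the walker's footprint. The intrinsics and mounting values below are assumed for illustration.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed pinhole intrinsics
cam_pos = np.array([0.0, 0.0, 2.5])        # camera 2.5 m above the floor (assumed)

R0 = np.array([[1.0, 0.0, 0.0],            # level camera: x right, y down, z forward
               [0.0, 0.0, 1.0],
               [0.0, -1.0, 0.0]])
th = np.deg2rad(-30.0)                     # tilted 30 degrees towards the floor
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(th), -np.sin(th)],
               [0.0, np.sin(th), np.cos(th)]])
R = Rx @ R0                                # world-from-camera rotation

def pixel_to_ground(u, v):
    ray = R @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # viewing ray, world frame
    lam = -cam_pos[2] / ray[2]             # stretch the ray until it hits z = 0
    return cam_pos + lam * ray             # footprint position on the floor

print(pixel_to_ground(320.0, 400.0))       # a pixel below the horizon -> ground point
```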
33

Verma, Kamlesh, Debashis Ghosh, Rajeev Marathe, and Avnish Kumar. "Efficient Embedded Hardware Architecture for Stabilised Tracking Sighting System of Armoured Fighting Vehicles". Defence Science Journal 69, no. 3 (April 30, 2019): 208–16. http://dx.doi.org/10.14429/dsj.69.14414.

Abstract:
A line-of-sight stabilised sighting system, capable of target tracking and video stabilisation is a prime requirement of any armoured fighting tank vehicle for military surveillance and weapon firing. Typically, such sighting systems have three prime electro-optical sensors i.e. day camera for viewing in day conditions, thermal camera for night viewing and eye-safe laser range finder for obtaining the target range. For laser guided missile firing, additional laser target designator may be a part of sighting system. This sighting system provides necessary parameters for the fire control computer to compute ballistic offsets to fire conventional ammunition or fire missile. System demands simultaneous interactions with electro-optical sensors, servo sensors, actuators, multi-function display for man-machine interface, fire control computer, logic controller and other sub-systems of tank. Therefore, a complex embedded electronics hardware is needed to respond in real time for such system. An efficient electronics embedded hardware architecture is presented here for the development of this type of sighting system. This hardware has been developed around SHARC 21369 processor and FPGA. A performance evaluation scheme is also presented for this sighting system based on the developed hardware.
34

Butail, Sachit, Nicholas Manoukis, Moussa Diallo, José M. Ribeiro, Tovi Lehmann, and Derek A. Paley. "Reconstructing the flight kinematics of swarming and mating in wild mosquitoes". Journal of The Royal Society Interface 9, no. 75 (May 23, 2012): 2624–38. http://dx.doi.org/10.1098/rsif.2012.0150.

Abstract:
We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s⁻¹, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented.
35

He, Leping, Jie Tan, Qijun Hu, Songsheng He, Qijie Cai, Yutong Fu, and Shuang Tang. "Non-Contact Measurement of the Surface Displacement of a Slope Based on a Smart Binocular Vision System". Sensors 18, no. 9 (August 31, 2018): 2890. http://dx.doi.org/10.3390/s18092890.

Abstract:
The paper presents an intelligent real-time slope surface deformation monitoring system based on binocular stereo-vision. To adapt the system to field slope monitoring, a design scheme of concentric marking point is proposed. Techniques including Zernike moment edge extraction, the least squares method, and k-means clustering are used to design a sub-pixel precision localization method for marker images. This study is mostly focused on the tracking accuracy of objects in multi-frame images obtained from a binocular camera. For this purpose, the Upsampled Cross Correlation (UCC) sub-pixel template matching technique is employed to improve the spatial-temporal contextual (STC) target-tracking algorithm. As a result, the tracking accuracy is improved to the sub-pixel level while keeping the STC tracking algorithm at high speed. The performance of the proposed vision monitoring system has been well verified through laboratory tests.
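
Upsampled cross-correlation (UCC) sub-pixel matching is available off the shelf, for example in scikit-image's phase_cross_correlation. The snippet below demonstrates the idea on a synthetic 2D shift; it is a usage illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
template = rng.random((64, 64))
moved = nd_shift(template, (3.0, -1.25))   # known sub-pixel displacement

# upsample_factor=100 resolves the displacement to 1/100 of a pixel.
shift, error, _ = phase_cross_correlation(template, moved, upsample_factor=100)
print(shift)   # approximately [-3.    1.25]
```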
36

Chen, Taicong, and Zhou Zhou. "An Improved Vision Method for Robust Monitoring of Multi-Point Dynamic Displacements with Smartphones in an Interference Environment". Sensors 20, no. 20 (October 20, 2020): 5929. http://dx.doi.org/10.3390/s20205929.

Abstract:
Current research on dynamic displacement measurement based on computer vision mostly requires professional high-speed cameras and an ideal shooting environment to ensure the performance and accuracy of the analysis. However, the high cost of the camera and strict requirements of sharp image contrast and stable environment during the shooting process limit the broad application of the technology. This paper proposes an improved vision method to implement multi-point dynamic displacement measurements with smartphones in an interference environment. A motion-enhanced spatio-temporal context (MSTC) algorithm is developed and applied together with the optical flow (OF) algorithm to realize a simultaneous tracking and dynamic displacement extraction of multiple points on a vibrating structure in the interference environment. Finally, a sine-sweep vibration experiment on a cantilever sphere model is presented to validate the feasibility of the proposed method in a wide-band frequency range. In the test, a smartphone was used to shoot the vibration process of the sine-sweep-excited sphere, and illumination change, fog interference, and camera jitter were artificially simulated to represent the interference environment. The results of the proposed method are compared to conventional displacement sensor data and current vision method results. It is demonstrated that, in an interference environment, (1) the OF method is prone to mismatch the feature points and leads to data deviated or lost; (2) the conventional STC method is sensitive to target selection and can effectively track those targets having a large proportion of pixels in the context with motion tendency similar to the target center; (3) the proposed MSTC method, however, can ease the sensitivity to target selection through in-depth processing of the information in the context and finally enhance the robustness of the target tracking. In addition, the MSTC method takes less than one second to track each target between adjacent frame images, implying a potential for online measurement.
37

Kiswanto, Gandjar, Mohamad Safhire, Reza Afrianto, and Rifkie Nurcahya. "Visual Servoing of Mobile Microrobot with Centralized Camera". MATEC Web of Conferences 153 (2018): 02001. http://dx.doi.org/10.1051/matecconf/201815302001.

Abstract:
In this paper, a mechanism of visual servoing for mobile microrobots with a centralized camera is developed, especially for the development of swarm-AI applications. In the field of microrobots, the size of the robots is minimal and the amount of movement is also small. By replacing the various sensors that are needed with a single centralized vision sensor, we can eliminate many components and the need for calibration on every robot. A study and design for a visual-servoing mobile microrobot has been developed. The system uses multi-object tracking and the Hough transform to identify the positions of the robots, and can control multiple robots at once with an accuracy of 5–6 pixels from the desired target.
38

Liu, Liou, Liping Wang, and Zhenwen Xu. "Design and implementation of badminton robot perception and control system". International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142091260. http://dx.doi.org/10.1177/1729881420912606.

Abstract:
With the rapid development of computer technology, target tracking has become an indispensable technology in the field of image processing. Outline-based matching algorithms are among the most representative methods in the field of computer vision. The idea is to extract several characteristic vectors from the image and compare them with the characteristic vectors in the corresponding image template. The difference between the image and template characteristic vectors is calculated, and the category is determined by the minimum-distance method. The badminton robot collects a depth image of the scene through a depth camera and then uses machine-vision theory to process the acquired depth image. The image depth information is combined to obtain the shuttlecock's position in the camera coordinate system in three-dimensional space, from which its position in the site coordinate system is derived. Finally, the position information of the shuttlecock in multi-frame images is used to predict its landing point; shuttlecock localization and landing-point analysis are thus completed. The badminton robot quickly runs to the predicted position of the shuttlecock and completes a hitting task. To realize high-speed, continuous and smooth badminton strokes by the robot manipulator, a new multi-objective manipulator trajectory optimization model is proposed. The experimental results show that the new trajectory optimization model can effectively reduce the energy consumption of the motor and improve rotational efficiency, thus ensuring the response speed of the arm.
39

Liu, Yu, and Xiaoyan Wang. "Mean Shift Fusion Color Histogram Algorithm for Nonrigid Complex Target Tracking in Sports Video". Complexity 2021 (April 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/5569637.

Abstract:
We analyze and study the tracking of nonrigid complex targets of sports video based on mean shift fusion color histogram algorithm. A simple and controllable 3D template generation method based on monocular video sequences is constructed, which is used as a preprocessing stage of dynamic target 3D reconstruction algorithm to achieve the construction of templates for a variety of complex objects, such as human faces and human hands, broadening the use of the reconstruction method. This stage requires video sequences of rigid moving target objects or sets of target images taken from different angles as input. First, the standard rigid body method of Visuals is used to obtain the external camera parameters of the sequence frames as well as the sparse feature point reconstruction data, and the algorithm has high accuracy and robustness. Then, a dense depth map is computed for each input image frame by the Multi-View Stereo algorithm. The depth reconstruction with a too high resolution not only increases the processing time significantly but also generates more noise, so the resolution of the depth map is controlled by parameters. The multiple hypothesis target tracking algorithms are used to track multiple targets, while the chunking feature is used to solve the problem of mutual occlusion and adhesion between targets. After finishing the matching, the target and background models are updated online separately to ensure the validity of the target and background models. Our results of nonrigid complex target tracking by mean shift fusion color histogram algorithm for sports video improve the accuracy by about 8% compared to other studies. The proposed tracking method based on the mean shift algorithm and color histogram algorithm can not only estimate the position of the target effectively but also depict the shape of the target well, which solves the problem that the nonrigid targets in sports video have complicated shapes and are not easy to track. An example is given to demonstrate the effectiveness and adaptiveness of the applied method.
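
The mean-shift-plus-colour-histogram building block that this paper extends is the classic OpenCV recipe: back-project a hue histogram of the target region, then let mean shift climb to the density mode in each frame. The file name and initial window below are placeholders.

```python
import cv2

cap = cv2.VideoCapture("sports_clip.mp4")      # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 300, 200, 60, 80                  # initial target window (assumed)

roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, window, term_crit)  # shift to density mode
```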
40

Jeong, Jinhan, Yook Hyun Yoon, and Jahng Hyon Park. "Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation". Sensors 20, no. 9 (April 26, 2020): 2457. http://dx.doi.org/10.3390/s20092457.

Abstract:
Lane detection and tracking in a complex road environment is one of the most important research areas in highly automated driving systems. Studies on lane detection cover a variety of difficulties, such as shadowy situations, dimmed lane painting, and obstacles that prohibit lane feature detection. There are several hard cases in which lane candidate features are not easily extracted from image frames captured by a driving vehicle. We have carefully selected typical scenarios in which the extraction of lane candidate features can be easily corrupted by road vehicles and road markers that lead to degradations in the understanding of road scenes, resulting in difficult decision making. We have introduced two main contributions to the interpretation of road scenes in dense traffic environments. First, to obtain robust road scene understanding, we have designed a novel framework combining a lane tracker method integrated with a camera and a radar forward vehicle tracker system, which is especially useful in dense traffic situations. We have introduced an image template occupancy matching method with the integrated vehicle tracker that makes it possible to avoid extracting irrelevant lane features caused by forward target vehicles and road markers. Second, we present a robust multi-lane detection-by-tracking algorithm that includes adjacent lanes as well as ego lanes. We verify a comprehensive experimental evaluation with a real dataset comprised of problematic road scenarios. Experimental results show that the proposed method is very reliable for multi-lane detection in the presented difficult situations.
41

Robson, Stuart, Lindsay MacDonald, Stephen Kyle, and Mark R. Shortis. "CLOSE RANGE CALIBRATION OF LONG FOCAL LENGTH LENSES IN A CHANGING ENVIRONMENT". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 115–22. http://dx.doi.org/10.5194/isprsarchives-xli-b5-115-2016.

Abstract:
University College London is currently developing a large-scale multi-camera system for dimensional control tasks in manufacturing, including part machining, assembly and tracking, as part of the Light Controlled Factory project funded by the UK Engineering and Physical Science Research Council. In parallel, as part of the EU LUMINAR project funded by the European Association of National Metrology Institutes, refraction models of the atmosphere in factory environments are being developed with the intent of modelling and eliminating the effects of temperature and other variations. The accuracy requirements for both projects are extremely demanding, so accordingly improvements in the modelling of both camera imaging and the measurement environment are essential. At the junction of these two projects lies close range camera calibration. The accurate and reliable calibration of cameras across a realistic range of atmospheric conditions in the factory environment is vital in order to eliminate systematic errors. This paper demonstrates the challenge of experimentally isolating environmental effects at the level of a few tens of microns. Longer lines of sight promote the use and calibration of a near perfect perspective projection from a Kern 75mm lens with maximum radial distortion of the order of 0.5m. Coordination of a reference target array, representing a manufactured part, is achieved to better than 0.1mm at a standoff of 8m. More widely, results contribute to better sensor understanding, improved mathematical modelling of factory environments and more reliable coordination of targets to 0.1mm and better over large volumes.
42

Li, Jing, Shuo Chen, Fangbing Zhang, Erkang Li, Tao Yang, and Zhaoyang Lu. "An Adaptive Framework for Multi-Vehicle Ground Speed Estimation in Airborne Videos". Remote Sensing 11, no. 10 (May 24, 2019): 1241. http://dx.doi.org/10.3390/rs11101241.

Abstract:
With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems represented by real-time ground vehicle speed estimation have attracted wide attention from researchers. However, there are still many challenges in extracting speed information from UAV videos, including the dynamic moving background, small target size, complicated environment, and diverse scenes. In this paper, we propose a novel adaptive framework for multi-vehicle ground speed estimation in airborne videos. Firstly, we build a traffic dataset based on UAV. Then, we use the deep learning detection algorithm to detect the vehicle in the UAV field of view and obtain the trajectory in the image through the tracking-by-detection algorithm. Thereafter, we present a motion compensation method based on homography. This method obtains matching feature points by an optical flow method and eliminates the influence of the detected target to accurately calculate the homography matrix to determine the real motion trajectory in the current frame. Finally, vehicle speed is estimated based on the mapping relationship between the pixel distance and the actual distance. The method regards the actual size of the car as prior information and adaptively recovers the pixel scale by estimating the vehicle size in the image; it then calculates the vehicle speed. In order to evaluate the performance of the proposed system, we carry out a large number of experiments on the AirSim Simulation platform as well as real UAV aerial surveillance experiments. Through quantitative and qualitative analysis of the simulation results and real experiments, we verify that the proposed system has a unique ability to detect, track, and estimate the speed of ground vehicles simultaneously even with a single downward-looking camera. Additionally, the system can obtain effective and accurate speed estimation results, even in various complex scenes.
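
The scale-recovery arithmetic described above is compact enough to show directly: a known real-world vehicle length serves as the prior that converts pixel displacement into metres and then into speed. All numbers here are illustrative.

```python
FPS = 30.0                  # video frame rate
CAR_LENGTH_M = 4.5          # assumed prior: typical sedan length
car_length_px = 90.0        # vehicle length measured in the image
metres_per_px = CAR_LENGTH_M / car_length_px          # 0.05 m per pixel

displacement_px = 40.0      # centroid motion over 10 frames
frames = 10
speed = displacement_px * metres_per_px * FPS / frames  # 6.0 m/s (21.6 km/h)
print(f"{speed:.1f} m/s")
```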
43

Jiang, Shenlu, Wei Yao, Zhonghua Hong, Ling Li, Cheng Su, and Tae-Yong Kuc. "A Classification-Lock Tracking Strategy Allowing a Person-Following Robot to Operate in a Complicated Indoor Environment". Sensors 18, no. 11 (November 12, 2018): 3903. http://dx.doi.org/10.3390/s18113903.

Abstract:
Person-following technology is an important robot service. The major trend in person-following is to utilize computer vision to localize the target person, owing to the wide view and rich information obtained from the real world through a camera. However, most existing approaches employ a detecting-by-tracking strategy, which suffers from low speed, accompanied by more complicated detection models and unstable region-of-interest (ROI) outputs in unexpected situations. In this paper, we propose a novel classification-lock strategy to localize the target person, which incorporates visual tracking technology with object detection technology to adapt the localization model to different environments online and to keep a high frame rate (FPS) on the mobile platform. This person-following approach consists of three key parts. In the first step, a pairwise cluster tracker is employed to localize the person. A positive-and-negative classifier is then utilized to verify the tracker's result and to update the tracking model. In addition, a detector pre-trained by a CPU-optimized convolutional neural network is used to further improve the tracking result. In the experiment, our approach is compared with other state-of-the-art approaches on the Vojir tracking dataset, using its three human sequences to assess the quality of person localization. Moreover, the common challenges during the following task are evaluated with several image sequences in a static scene, and a dynamic scene is used to evaluate the improvement from the classification-lock strategy. Finally, our approach is deployed on a mobile robot to test its person-following performance. Compared with other state-of-the-art methods, our approach achieves the highest score (0.91 recall rate). In the static and dynamic scenes, the ROI output based on the classification-lock strategy is significantly better than without it. Our approach also succeeds in a long-term following task in an indoor multi-floor scenario.
APA, Harvard, Vancouver, ISO, and other citation styles
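The tracker-verify-detect loop in the abstract above lends itself to a compact skeleton. The sketch below is a minimal Python rendering under assumed interfaces; `tracker`, `verifier`, and `detector` are hypothetical callables standing in for the pairwise cluster tracker, the positive/negative classifier, and the CPU-optimized CNN detector.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) region of interest

@dataclass
class ClassificationLockTracker:
    """Skeleton of the classification-lock strategy (assumed interfaces)."""
    tracker: Callable[[object], Box]             # fast ROI proposal per frame
    verifier: Callable[[object, Box], float]     # pos/neg classifier, conf in [0, 1]
    detector: Callable[[object], Optional[Box]]  # slower CNN detector fallback
    threshold: float = 0.5                       # illustrative lock threshold

    def step(self, frame) -> Optional[Box]:
        roi = self.tracker(frame)                # track first: cheap and fast
        if self.verifier(frame, roi) >= self.threshold:
            return roi                           # lock confirmed; keep tracking
        # Verification failed: re-acquire the person with the detector and
        # (in a full system) re-initialize the tracker on its output.
        return self.detector(frame)
```

Running the detector only on verification failure is what keeps the frame rate high on a mobile platform.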
45

GERMA, T., F. LERASLE and T. SIMON. "VIDEO-BASED FACE RECOGNITION AND TRACKING FROM A ROBOT COMPANION". International Journal of Pattern Recognition and Artificial Intelligence 23, No. 03 (May 2009): 591–616. http://dx.doi.org/10.1142/s0218001409007223.

Full text of the source
Abstract:
This paper deals with video-based face recognition and tracking from a camera mounted on a mobile robot companion. Every person must be identified before being authorized to interact with the robot, while continuous tracking is required to estimate the person's approximate position. A first contribution concerns experiments with still-image-based face recognition methods to determine which combinations of image projection and classifier perform best on the face database acquired from our robot. Our approach, based on Principal Component Analysis (PCA) and Support Vector Machines (SVM) with free parameters optimized by a genetic algorithm, is found to outperform conventional appearance-based holistic classifiers (eigenface and Fisherface), which serve as benchmarks. Relative performance is analyzed by means of Receiver Operating Characteristic curves, which systematically provide optimized classifier free-parameter settings. Finally, for the SVM-based classifier, we propose a non-dominated sorting genetic algorithm to obtain optimized free-parameter settings. The second and central contribution is the design of a complete still-to-video face recognition system, dedicated to the previously identified person, which integrates face verification as an intermittent feature and shape and clothing color as persistent cues in a robust and probabilistically motivated way. The particle filtering framework is well suited to this context, as it facilitates the fusion of different measurement sources. Automatic target recovery after full occlusion or temporary disappearance from the field of view is provided by positioning the particles according to the face classification probabilities in the importance function (both probabilistic steps are sketched in code after this entry). Moreover, the multi-cue fusion in the measurement function proves more reliable than any individual cue. Evaluations on key sequences acquired by the robot during long-term operation in crowded, continuously changing indoor environments demonstrate the robustness of the tracker in such natural settings. Mixing all these cues makes our video-based face recognition system work under the wide range of conditions the robot encounters during its movements. The paper concludes with a discussion of possible extensions.
APA, Harvard, Vancouver, ISO, and other citation styles
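The two probabilistic steps in this abstract, multiplicative multi-cue weighting and occlusion recovery through the importance function, can be sketched in a few lines of NumPy. The cue interfaces and the jitter scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_cue_weights(particles, frame, cues):
    """Measurement step: a particle's weight is the product of per-cue
    likelihoods (face verification, shape, clothing color). `cues` is a list
    of hypothetical functions cue(frame, particle) -> likelihood."""
    weights = np.ones(len(particles))
    for cue in cues:
        weights *= np.array([cue(frame, p) for p in particles])
    total = weights.sum()
    if total == 0:  # all cues vetoed everything: fall back to uniform weights
        return np.full(len(particles), 1.0 / len(particles))
    return weights / total

def recover_from_faces(n_particles, face_centers, face_probs, rng):
    """Recovery after occlusion: sample particle positions from detected face
    locations in proportion to their face-classification probability."""
    probs = np.asarray(face_probs, dtype=float)
    probs /= probs.sum()
    idx = rng.choice(len(face_centers), size=n_particles, p=probs)
    centers = np.asarray(face_centers, dtype=float)[idx]
    return centers + rng.normal(scale=5.0, size=centers.shape)  # pixel jitter
```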
46

YAO, YI, CHUNG-HAO CHEN, BESMA ABIDI, DAVID PAGE, ANDREAS KOSCHAN and MONGI ABIDI. "MULTI-CAMERA POSITIONING FOR AUTOMATED TRACKING SYSTEMS IN DYNAMIC ENVIRONMENTS". International Journal of Information Acquisition 07, No. 03 (September 2010): 225–42. http://dx.doi.org/10.1142/s0219878910002208.

Full text of the source
Abstract:
Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). In recent literature, a handoff safety margin has been introduced into sensor planning so that sufficient FOV overlap among adjacent cameras is reserved for successful, smooth target transitions. In this paper, we investigate the sensor planning problem while considering the dynamic interactions between moving targets and observing cameras. The probability of camera overload is used to model these interactions; it also accounts for the limitation that a given camera can simultaneously monitor or track only a fixed number of targets, and it incorporates the targets' dynamics into sensor planning (a toy rendering of such an objective is sketched after this entry). The resulting camera placement not only achieves the optimal balance between coverage and handoff success rate but also maintains that balance in environments with varying target densities. The proposed camera placement method is compared with a reference algorithm by Erdem and Sclaroff. A consistently improved handoff success rate is demonstrated in experiments using typical office floor plans with various target densities.
APA, Harvard, Vancouver, ISO, and other citation styles
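As a rough illustration of how a probability of camera overload might enter a placement objective, the sketch below assumes a Poisson model of simultaneous targets in a camera's FOV and a linear trade-off between the criteria; both modelling choices are our assumptions, not the paper's.

```python
from math import exp, factorial

def overload_probability(mean_targets: float, capacity: int) -> float:
    """P(overload) = P(N > capacity) with N ~ Poisson(mean_targets):
    the chance that more targets fall in the FOV than the camera can track."""
    p_at_most = sum(exp(-mean_targets) * mean_targets ** k / factorial(k)
                    for k in range(capacity + 1))
    return 1.0 - p_at_most

def placement_score(coverage: float, handoff_rate: float, p_overload: float,
                    weights=(1.0, 1.0, 1.0)) -> float:
    """Toy objective: reward coverage and handoff success, penalize overload."""
    return (weights[0] * coverage + weights[1] * handoff_rate
            - weights[2] * p_overload)
```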
47

Wu, Hong Qi, Xiao Bin Li, Zhan Jun Yuan and Jin Wang. "Research on Key Techniques of Multi-Sensor Intelligent Video Surveillance System". Applied Mechanics and Materials 599-601 (August 2014): 1040–43. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1040.

Full text of the source
Abstract:
To improve the autonomous traceability and tracking precision of fast-moving targets, an intelligent video monitoring solution is proposed. Sound source localization is adopted to quickly position a dynamic target. A three-frame-difference method is used to detect the moving object (sketched in code after this entry), and the resulting target center parameters serve as control signals for the PTZ system, driving the camera to follow the moving target for real-time tracking and monitoring. Experiments and simulations show that the control scheme fully satisfies the monitoring requirements.
APA, Harvard, Vancouver, ISO, and other citation styles
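The three-frame-difference step can be reproduced almost directly with OpenCV; the binarization threshold and the centroid extraction below are illustrative choices, not the paper's parameters.

```python
import cv2

def three_frame_difference(f1, f2, f3, thresh=25):
    """Detect motion by AND-ing two successive difference images, so only
    pixels that changed in both intervals survive (suppresses ghosting)."""
    g1, g2, g3 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (f1, f2, f3))
    _, b12 = cv2.threshold(cv2.absdiff(g2, g1), thresh, 255, cv2.THRESH_BINARY)
    _, b23 = cv2.threshold(cv2.absdiff(g3, g2), thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(b12, b23)

    # The largest blob's centroid is the target center handed to PTZ control.
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return motion, None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    center = (int(m["m10"] / m["m00"]),
              int(m["m01"] / m["m00"])) if m["m00"] else None
    return motion, center
```

The returned center would be converted into pan/tilt error signals for the PTZ controller.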
48

You, Sisi, Hantao Yao and Changsheng Xu. "Multi-Target Multi-Camera Tracking with Optical-based Pose Association". IEEE Transactions on Circuits and Systems for Video Technology, 2020, 1. http://dx.doi.org/10.1109/tcsvt.2020.3036467.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

"Multi Target Tracking Access with Data Association in Distributed Camera Networks". International Journal of Recent Technology and Engineering 8, No. 2S11 (2 November 2019): 412–17. http://dx.doi.org/10.35940/ijrte.b1063.0982s1119.

Full text of the source
Abstract:
Data association in a distributed camera network is a new method for analyzing the large volume of video information in camera networks, and it is an important step in multi-camera multi-target tracking. Distributed processing is a new paradigm for analyzing videos in a camera network: each camera acts on its own, while all cameras cooperate toward a common goal. In this paper, we address the problem of distributed data association (DDA). Each camera obtains the feet positions of the objects, shares these positions with its immediate neighbours, and finds local matches using homography (see the sketch after this entry); propagating these local matches across the network yields the global associations. The proposed DDA method is less complex and achieves higher accuracy than centralized methods (STSPIE, EMTIC, JPDAEKCF, CSPIF, and CEIF).
APA, Harvard, Vancouver, ISO, and other citation styles
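A minimal sketch of the per-camera steps, projecting feet positions to the common ground plane and gating nearest-neighbour matches with an immediate neighbour, is given below. The calibration homographies, the 0.5 m gate, and the greedy matching are assumptions for illustration; global associations would then follow by propagating the local pairs across the network (e.g., with a union-find over (camera, detection) nodes).

```python
import numpy as np

def ground_positions(feet_px, H):
    """Project image feet positions (Nx2 pixels) onto the common ground plane
    using the camera's 3x3 calibration homography H."""
    feet_px = np.asarray(feet_px, dtype=float)
    pts = np.hstack([feet_px, np.ones((len(feet_px), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]          # dehomogenize

def local_matches(pos_a, pos_b, gate=0.5):
    """Greedy nearest-neighbour association between the ground-plane positions
    of two neighbouring cameras; pairs farther than `gate` metres are rejected."""
    if len(pos_a) == 0 or len(pos_b) == 0:
        return []
    pairs, used = [], set()
    for i, p in enumerate(pos_a):
        dists = np.linalg.norm(pos_b - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```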
50

"Distinctly Trained Multi-Source CNN for Multi Camera based Vehicle Tracking System". International Journal of Recent Technology and Engineering 8, No. 2 (30 July 2019): 624–34. http://dx.doi.org/10.35940/ijrte.b1639.078219.

Full text of the source
Abstract:
In recent years, the exponential rise in demand for robust surveillance systems has driven academia and industry to pursue more efficient vision-based computing systems. Vision-based methods have shown potential for various surveillance purposes, such as Intelligent Transport Systems (ITS), civil surveillance, defense, and the security of other public and private establishments. However, the computation becomes more complicated for ITS under occlusion, where multiple cameras must be synchronized to track a target vehicle. Classical texture- and color-based approaches are limited and often produce false positives, impairing decision-making efficiency. Motivated by this, this paper develops a highly robust and novel Distinctly Trained Multi-Source Convolutional Neural Network (DCNN) that is pre-trained on real-time traffic videos from multiple cameras to track a targeted vehicle. Our proposed DCNN vehicle tracking model comprises multiple shared layers with multiple branches of source-specific layers. In other words, the DCNN is implemented on each camera (source), where it performs feature learning and yields a set of features shared by all cameras, which is then learned to identify the region of interest (ROI) signifying the targeted vehicle. The DCNN model trains on each source input iteratively to build ROI representations in the shared layers. To perform tracking in a new sequence, the DCNN forms a new network by combining the shared layers of the pre-trained DCNN with a new binary classification layer, which is updated online (see the sketch after this entry). Tracking then proceeds online by retrieving ROI windows sampled near the previous ROI state. This enables real-time vehicle tracking even under occlusion and dynamic background conditions.
APA, Harvard, Vancouver, ISO, and other citation styles
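The shared-plus-source-specific architecture and the online binary head described above can be sketched in PyTorch; the layer sizes, the pooling, and the SGD update below are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiSourceDCNN(nn.Module):
    """Convolutional layers shared by all cameras, plus one source-specific
    branch per camera (assumed sizes, for illustration only)."""

    def __init__(self, num_sources: int):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 64 * 4 * 4 features
        )
        # One branch per camera, trained on that source's sequences.
        self.branches = nn.ModuleList(
            nn.Linear(64 * 16, 2) for _ in range(num_sources))

    def forward(self, x: torch.Tensor, source: int) -> torch.Tensor:
        return self.branches[source](self.shared(x))

# For a new sequence: keep the pre-trained shared layers, attach a fresh
# binary (target vs. background) head, and update only the head online on
# ROI windows sampled near the previous target state.
model = MultiSourceDCNN(num_sources=4)
online_head = nn.Linear(64 * 16, 2)
optimizer = torch.optim.SGD(online_head.parameters(), lr=1e-3)
```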