Selection of scholarly literature on the topic "Multi-Camera network"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multi-Camera network".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic entry for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Multi-Camera network"

1

Wu, Yi-Chang, Ching-Han Chen, Yao-Te Chiu, and Pi-Wei Chen. "Cooperative People Tracking by Distributed Cameras Network". Electronics 10, no. 15 (July 25, 2021): 1780. http://dx.doi.org/10.3390/electronics10151780.

Abstract:
In the application of video surveillance, reliable people detection and tracking are always challenging tasks. A conventional single-camera surveillance system may encounter difficulties such as a narrow angle of view and dead space. In this paper, we propose a multi-camera network architecture with an inter-camera hand-off protocol for cooperative people tracking. We use the YOLO model to detect multiple people in the video scene and incorporate the particle swarm optimization algorithm to track each person's movement. When a person leaves the area covered by one camera and enters an area covered by another, these cameras can exchange relevant information for uninterrupted tracking. A motion smoothness (MS) metric is proposed for evaluating the tracking quality of the multi-camera network system. We used a three-camera system tracking two persons in an overlapping scene for experimental evaluation. Most per-person tracking offsets across frames were below 30 pixels, and only 0.15% of frames showed abrupt increases in offset. The experimental results reveal that our multi-camera system achieves robust, smooth tracking performance.
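
The evaluation described above can be roughly illustrated in code. The sketch below (Python; the array shape and the 30-pixel threshold follow the abstract, while the paper's exact motion smoothness definition may differ) computes frame-to-frame tracking offsets and the share of frames with abrupt jumps:

```python
# Illustrative offset/smoothness evaluation for one tracked person.
import numpy as np

def offset_statistics(track, threshold_px=30.0):
    """track: (N, 2) array of per-frame (x, y) image positions."""
    track = np.asarray(track, dtype=float)
    offsets = np.linalg.norm(np.diff(track, axis=0), axis=1)  # frame-to-frame offsets
    abrupt_ratio = float(np.mean(offsets > threshold_px))     # share of abrupt jumps
    return offsets.mean(), abrupt_ratio

# Example: a smooth synthetic track with one simulated hand-off glitch
track = np.cumsum(np.random.randn(300, 2) * 2.0, axis=0)
track[150] += 80.0
mean_offset, abrupt = offset_statistics(track)
print(f"mean offset: {mean_offset:.1f} px, abrupt frames: {abrupt:.2%}")
```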
2

Kennady, R., et al. "A Nonoverlapping Vision Field Multi-Camera Network for Tracking Human Build Targets". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3 (March 31, 2023): 366–69. http://dx.doi.org/10.17762/ijritcc.v11i3.9871.

Abstract:
This research presents a procedure for tracking human build targets in a multi-camera network with nonoverlapping vision fields. The proposed approach consists of three main steps: single-camera target detection, single-camera target tracking, and multi-camera target association and continuous tracking. The multi-camera target association includes target characteristic extraction and the establishment of topological relations. Target characteristics are extracted based on the HSV (Hue, Saturation, and Value) values of each human build movement target, and the space-time topological relations of the multi-camera network are established using the obtained target associations. This procedure enables the continuous tracking of human build movement targets in large scenes, overcoming the limitations of monitoring within the narrow field of view of a single camera.
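
The HSV appearance cue at the core of the association step can be sketched with OpenCV; the bin counts, normalization, and Bhattacharyya threshold below are illustrative assumptions, not the authors' settings:

```python
# Sketch: HSV histogram signatures for cross-camera target association.
import cv2
import numpy as np

def hsv_signature(bgr_patch, bins=(16, 8, 8)):
    """Compute a normalized 3D HSV histogram for a person patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_target(sig_a, sig_b, threshold=0.5):
    """Bhattacharyya distance: lower means more similar appearance."""
    d = cv2.compareHist(sig_a.astype(np.float32),
                        sig_b.astype(np.float32),
                        cv2.HISTCMP_BHATTACHARYYA)
    return d < threshold, d
```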
3

Zhao, Guoliang, Yuxun Zhou, Zhanbo Xu, Yadong Zhou, and Jiang Wu. "Hierarchical Multi-Supervision Multi-Interaction Graph Attention Network for Multi-Camera Pedestrian Trajectory Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4698–706. http://dx.doi.org/10.1609/aaai.v36i4.20395.

Abstract:
Pedestrian trajectory prediction has become an essential underpinning in various human-centric applications including but not limited to autonomous vehicles, intelligent surveillance systems and social robotics. Previous research endeavors mainly focus on single-camera trajectory prediction (SCTP), while the problem of multi-camera trajectory prediction (MCTP) is often overly simplified into predicting presence in the next camera. This paper addresses MCTP from a more realistic yet challenging perspective, by redefining the task as a joint estimation of both future destination and possible trajectory. As such, two major efforts are devoted to facilitating related research and advancing modeling techniques. Firstly, we establish a comprehensive multi-camera Scenes Pedestrian Trajectory Dataset (mcScenes), which is collected from a real-world multi-camera space combined with thorough human interaction annotations and carefully designed evaluation metrics. Secondly, we propose a novel joint prediction framework, namely HM3GAT, for the MCTP task by building a tailored network architecture. The core idea behind HM3GAT is a fusion of topological and trajectory information that are mutually beneficial to the prediction of each task, achieved by deeply customized networks. The proposed framework is comprehensively evaluated on the mcScenes dataset with multiple ablation experiments. State-of-the-art SCTP models are adopted as baselines to further validate the advantages of our method in terms of both information fusion and technical improvement. The mcScenes dataset, HM3GAT, and alternative models are made publicly available for interested readers.
4

Sharma, Anil, Saket Anand, and Sanjit K. Kaul. "Reinforcement Learning Based Querying in Camera Networks for Efficient Target Tracking". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 555–63. http://dx.doi.org/10.1609/icaps.v29i1.3522.

Abstract:
Surveillance camera networks are a useful monitoring infrastructure that can be used for various visual analytics applications, where high-level inferences and predictions could be made based on target tracking across the network. Most multi-camera tracking works focus on re-identification problems and trajectory association problems. However, as camera networks grow in size, the volume of data generated is enormous, and scalable processing of this data is imperative for deploying practical solutions. In this paper, we address the largely overlooked problem of scheduling cameras for processing by selecting the one where the target is most likely to appear next. The inter-camera handover can then be performed on the selected cameras via re-identification or another target association technique. We model this scheduling problem using reinforcement learning and learn the camera selection policy using Q-learning. We do not assume knowledge of the camera network topology, but we observe that the resulting policy implicitly learns it. We evaluate our approach using the NLPR MCT dataset, a real multi-camera multi-target tracking benchmark, and show that the proposed policy substantially reduces the number of frames required to be processed at the cost of a small reduction in recall.
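
The camera-selection policy can be sketched as plain tabular Q-learning; the state encoding (last camera where the target was seen) and the reward below are simplifications of the paper's formulation:

```python
# Minimal tabular Q-learning skeleton for scheduling which camera to process.
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)                       # Q[(state, camera)] -> value

def select_camera(state, cameras):
    if random.random() < EPS:                # epsilon-greedy exploration
        return random.choice(cameras)
    return max(cameras, key=lambda c: Q[(state, c)])

def update(state, camera, reward, next_state, cameras):
    best_next = max(Q[(next_state, c)] for c in cameras)
    Q[(state, camera)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, camera)])

# Training loop idea: state = camera of the last sighting; reward = +1 if the
# queried camera re-acquires the target in the next frames, otherwise -1.
```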
5

Li, Xiaolin, Wenhui Dong, Faliang Chang, and Peishu Qu. "Topology Learning of Non-overlapping Multi-camera Network". International Journal of Signal Processing, Image Processing and Pattern Recognition 8, no. 11 (November 30, 2015): 243–54. http://dx.doi.org/10.14257/ijsip.2015.8.11.22.

6

Liu, Xin, Herman G. J. Groot, Egor Bondarev, and Peter H. N. de With. "Introducing Scene Understanding to Person Re-Identification using a Spatio-Temporal Multi-Camera Model". Electronic Imaging 2020, no. 10 (January 26, 2020): 95–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.10.ipas-095.

Abstract:
In this paper, we investigate person re-identification (re-ID) in a multi-camera network for surveillance applications. To this end, we create a Spatio-Temporal Multi-Camera model (ST-MC model), which exploits statistical data on a person’s entry/exit points in the multi-camera network to predict in which camera view a person will re-appear. The created ST-MC model is used as a novel extension to the Multiple Granularity Network (MGN) [1], which is the current state of the art in person re-ID. Compared to existing approaches that are solely based on Convolutional Neural Networks (CNNs), our approach helps to improve re-ID performance by considering not only appearance-based features of a person from a CNN, but also contextual information. The latter serves as scene understanding information complementary to person re-ID. Experimental results show that for the DukeMTMC-reID dataset [2][3], the introduction of our ST-MC model substantially increases the mean Average Precision (mAP) and Rank-1 score from 77.2% to 84.1%, and from 88.6% to 96.2%, respectively.
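
The entry/exit statistics behind the ST-MC model can be pictured as a transition table estimated from observed hand-overs; this sketch illustrates the idea only and is not the authors' implementation:

```python
# Sketch: estimate where a person will re-appear from entry/exit statistics.
from collections import Counter

class TransitionModel:
    def __init__(self):
        self.counts = Counter()              # (exit_cam, entry_cam) -> count

    def observe(self, exit_cam, entry_cam):
        self.counts[(exit_cam, entry_cam)] += 1

    def next_camera_probs(self, exit_cam):
        row = {e: c for (x, e), c in self.counts.items() if x == exit_cam}
        total = sum(row.values())
        return {cam: c / total for cam, c in row.items()} if total else {}

m = TransitionModel()
for exit_cam, entry_cam in [("cam1", "cam2"), ("cam1", "cam2"), ("cam1", "cam3")]:
    m.observe(exit_cam, entry_cam)
print(m.next_camera_probs("cam1"))           # ~{'cam2': 0.67, 'cam3': 0.33}
```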
7

He, Li, Guoliang Liu, Guohui Tian, Jianhua Zhang, and Ze Ji. "Efficient Multi-View Multi-Target Tracking Using a Distributed Camera Network". IEEE Sensors Journal 20, no. 4 (February 15, 2020): 2056–63. http://dx.doi.org/10.1109/jsen.2019.2949385.

8

Li, Yun-Lun, Hao-Ting Li, and Chen-Kuo Chiang. "Multi-Camera Vehicle Tracking Based on Deep Tracklet Similarity Network". Electronics 11, no. 7 (March 24, 2022): 1008. http://dx.doi.org/10.3390/electronics11071008.

Abstract:
Multi-camera vehicle tracking at the city scale has received much attention in the last few years. It is quite challenging due to large scale differences, frequent occlusions, and appearance changes caused by differences in viewing angle. In this research, we propose the Tracklet Similarity Network (TSN) for a multi-target multi-camera (MTMC) vehicle tracking system based on evaluating the similarity between vehicle tracklets. In addition, a novel component, the Candidates Intersection Ratio (CIR), is proposed to refine the similarity. It provides an association scheme that builds the multi-camera tracking results as a tree structure. Based on these components, an end-to-end vehicle tracking system is proposed. The experimental results demonstrate an 11% improvement in the evaluation score compared to the conventional similarity baseline.
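
The tracklet-level association step can be sketched as below; the TSN itself is a learned network, so the averaged embeddings and cosine similarity here are generic stand-ins rather than the paper's model:

```python
# Sketch: associate tracklets across cameras by embedding similarity.
import numpy as np

def tracklet_embedding(frame_features):
    """Average per-frame appearance features into one unit-norm descriptor."""
    v = np.mean(np.asarray(frame_features, dtype=float), axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def tracklet_similarity(track_a, track_b):
    return float(tracklet_embedding(track_a) @ tracklet_embedding(track_b))

def associate(tracks_cam1, tracks_cam2, threshold=0.8):
    """Greedy cross-camera matching on the similarity matrix."""
    pairs = []
    for i, ta in enumerate(tracks_cam1):
        sims = [tracklet_similarity(ta, tb) for tb in tracks_cam2]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            pairs.append((i, j))
    return pairs
```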
9

Truong, Philips, Deligiannis, Abrahamyan, and Guan. "Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors". Sensors 19, no. 22 (November 15, 2019): 4989. http://dx.doi.org/10.3390/s19224989.

Abstract:
Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and bottom of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of the person when the bottom of the person is not available in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, it requires less than one minute of observing the walking people to reach this accuracy in controlled environments, and just a few minutes to collect enough data for calibration in uncontrolled environments. Our proposed method performs well in various situations such as multiple persons, occlusions, or even real intersections on the street.
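
The geometric core of the method, triangulating matched top/bottom points of a walking person from two views, can be sketched with OpenCV, assuming the projection matrices are already known:

```python
# Sketch: triangulate matched pedestrian points from two calibrated views.
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices; pts1, pts2: 2xN matched image points."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                                pts2.astype(np.float64))
    return (X_h[:3] / X_h[3]).T              # Nx3 points in world coordinates

# Vertical-stick check: the triangulated head and foot of the same person
# should differ (almost) only in height, which helps reject bad matches.
```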
10

Sumathy, R. "Face Recognition in Multi Camera Network with Sh Feature". International Journal of Modern Education and Computer Science 7, no. 5 (May 8, 2015): 59–64. http://dx.doi.org/10.5815/ijmecs.2015.05.08.


Dissertations on the topic "Multi-Camera network"

1

Zhao, Jian. "Camera Planning and Fusion in a Heterogeneous Camera Network". UKnowledge, 2011. http://uknowledge.uky.edu/ece_etds/2.

Abstract:
Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion describes the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application. I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters and the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard problems, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework comparing the entire spectrum of approximation algorithms, from Greedy and Markov Chain Monte Carlo (MCMC) to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency and scalability of each technique are analyzed and compared in depth, and extensive experimental results are provided to illustrate the strengths and weaknesses of each method. The second problem, heterogeneous camera fusion, is very complex: information can be fused at different levels, from pixels or voxels to semantic objects, with large variation in accuracy, communication and computation costs. My focus is on the geometric transformation of shapes between objects observed on different camera planes. This so-called geometric fusion approach usually provides the most reliable fusion at the expense of high computation and communication costs. To tackle the complexity, a hierarchy of camera models with different levels of complexity is proposed to balance the effectiveness and efficiency of the camera network operation, and different calibration and registration methods are proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system that improves the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
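
The BIP formulation can be illustrated with a toy set-cover model, written here with the PuLP solver (a tool choice for this sketch, not the dissertation's): choose the fewest candidate cameras such that every sample point is covered.

```python
# Toy binary integer program: minimal camera set covering all sample points.
import pulp

def place_cameras(coverage):
    """coverage[j] = set of point indices seen by candidate camera j."""
    points = set().union(*coverage.values())
    prob = pulp.LpProblem("camera_placement", pulp.LpMinimize)
    x = {j: pulp.LpVariable(f"cam_{j}", cat="Binary") for j in coverage}
    prob += pulp.lpSum(x.values())                        # minimize camera count
    for p in points:                                      # each point must be seen
        prob += pulp.lpSum(x[j] for j in coverage if p in coverage[j]) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in coverage if x[j].value() == 1]

print(place_cameras({0: {1, 2}, 1: {2, 3}, 2: {1, 3, 4}}))  # e.g. [0, 2]
```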
2

Guillén, Alejandro. "Implementation of a Distributed Algorithm for Multi-camera Visual Feature Extraction in a Visual Sensor Network Testbed". Thesis, KTH, Kommunikationsnät, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167415.

Abstract:
Visual analysis tasks, like detection, recognition and tracking, are computationally intensive, and it is therefore challenging to perform such tasks in visual sensor networks, where nodes may be equipped with low-power CPUs. A promising solution is to augment the sensor network with processing nodes, and to distribute the processing tasks among the processing nodes of the visual sensor network. The objective of this project is to enable a visual sensor network testbed to operate with multiple camera sensors, and to implement an algorithm that computes the allocation of the visual feature tasks to the processing nodes. In the implemented system, the processing nodes can receive and process data from different camera sensors simultaneously. The acquired images are divided into sub-images, and the sizes of the sub-images are computed by solving a linear programming problem. The implemented algorithm performs local optimization in each camera sensor without data exchange with the other cameras, in order to minimize the communication overhead and the computational load of the camera sensors. The implementation work is performed on a testbed that consists of BeagleBone Black computers with IEEE 802.15.4 or IEEE 802.11 USB modules, and the existing code base is written in C++. The implementation is used to assess the performance of the distributed algorithm in terms of completion time. The results show good performance, providing lower average completion times.
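
The sub-image sizing can be illustrated as a small linear program, written here with SciPy (an assumed tool): minimize the completion time T subject to per-node load constraints and a fixed total width.

```python
# Toy LP: split a frame into vertical slices so the slowest node finishes first.
import numpy as np
from scipy.optimize import linprog

def split_frame(width, cost_per_column):
    """cost_per_column[i]: processing cost of node i per image column."""
    n = len(cost_per_column)
    c = np.zeros(n + 1); c[-1] = 1.0                  # minimize T (last variable)
    A_ub = np.hstack([np.diag(cost_per_column), -np.ones((n, 1))])
    b_ub = np.zeros(n)                                # cost_i * w_i - T <= 0
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0     # widths sum to frame width
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[width])
    return res.x[:n]                                  # sub-image widths per node

print(split_frame(640, [1.0, 2.0, 4.0]))              # faster nodes get wider slices
```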
3

Jeong, Kideog. "OBJECT MATCHING IN DISJOINT CAMERAS USING A COLOR TRANSFER APPROACH". UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/434.

Abstract:
Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat this appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that best minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs are generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
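
The general-affine fitting step can be sketched as a least-squares estimate from corresponding chromaticity samples; the thesis aligns Gaussian-mixture appearance models via an entropic dissimilarity, which this stand-in does not reproduce:

```python
# Sketch: fit and apply an affine color-transfer map in the UV plane.
import numpy as np

def fit_affine_transfer(src_uv, dst_uv):
    """src_uv, dst_uv: (N, 2) matched chromaticity samples from two cameras."""
    X = np.hstack([src_uv, np.ones((len(src_uv), 1))])   # homogeneous samples
    A, *_ = np.linalg.lstsq(X, dst_uv, rcond=None)       # (3, 2) affine map
    return A

def apply_transfer(A, uv):
    return np.hstack([uv, np.ones((len(uv), 1))]) @ A
```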
4

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one dedicated to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurements accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor in between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180 degrees coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
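
Registering every Kinect to a reference unit amounts to composing rigid transforms, as in this small sketch (the 4x4 matrices are assumed to have been estimated by the calibration procedure):

```python
# Sketch: express any Kinect's point cloud in the reference Kinect's frame.
import numpy as np

def compose(T_ref_from_a, T_a_from_b):
    """Chain extrinsics: frame b -> frame a -> reference frame."""
    return T_ref_from_a @ T_a_from_b

def to_reference(T_ref_from_cam, points_cam):
    """points_cam: (N, 3) cloud from one Kinect -> reference coordinates."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_ref_from_cam @ homo.T).T[:, :3]
```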
5

Chen, Huiqin. "Registration of egocentric views for collaborative localization in security applications". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG031.

Abstract:
This work focuses on collaborative localization between a mobile camera and a static camera for video surveillance. In crowd scenes and sensitive events, surveillance involves locating the wearer of the camera (typically a security officer) and also the events observed in the images (e.g., to guide emergency services). However, the different points of view between the mobile camera (at ground level) and the video surveillance camera (located high up), along with repetitive patterns and occlusions, make the tasks of relative calibration and localization difficult. We first studied how low-cost positioning and orientation sensors (GPS-IMU) could help refine the estimate of the relative pose between cameras. We then proposed to locate the mobile camera using its epipole in the image of the static camera. To make this estimate robust with respect to outlier keypoint matches, we developed two algorithms: one based on a cumulative approach to derive an uncertainty map, the other exploiting the belief function framework. Faced with the issue of a large number of elementary sources, some of which are incompatible, we provide a solution based on clustering the belief functions, with a view to further combination with other sources (such as pedestrian detectors and/or GPS data for our application). Finally, geolocating individuals in the scene led us to the problem of data association between views. We proposed to use geometric descriptors and constraints, in addition to the usual appearance descriptors, in the association cost function. We showed the relevance of this geometric information whether it is explicit or learned using a neural network.
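
The epipole at the heart of the localization step is a standard computation, the null vector of the fundamental matrix F; the robust estimation the thesis builds on top of it is not reproduced in this sketch:

```python
# Sketch: epipole (image of the mobile camera's center) from F via SVD.
import numpy as np

def epipole(F):
    """Right null vector of F, i.e. the e satisfying F @ e = 0."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e[:2] / e[2]          # inhomogeneous pixel coordinates
```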
6

Konda, Krishna Reddy. "Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks". Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367752.

Abstract:
The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to the widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety by detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure and deploy ad-hoc solutions based on current needs by choosing the type and number of sensors, as well as by adjusting the different camera parameters, such as field of view, resolution and, in the case of active PTZ cameras, pan, tilt and zoom. Furthermore, there is also the possibility of event-driven automatic realignment of the camera network to better observe an occurring event. Given the above possibilities, this doctoral study has two objectives. First, we propose a state-of-the-art camera placement and static reconfiguration algorithm; second, we present a distributed, cooperative and dynamic camera reconfiguration algorithm for a network of cameras. The camera placement and user-driven reconfiguration algorithm is based on realistic virtual modelling of a given environment using particle swarm optimization. A real-time camera reconfiguration algorithm that relies on a motion entropy metric extracted from the H.264 compressed stream acquired by the camera is also presented.
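
The motion-entropy cue can be sketched as the entropy of the motion-vector direction histogram; extracting the vectors from the H.264 stream is outside this snippet, and the binning is an assumption:

```python
# Sketch: entropy of motion-vector directions as a scene-activity measure.
import numpy as np

def motion_entropy(motion_vectors, bins=16):
    """motion_vectors: (N, 2) array of per-macroblock (dx, dy) vectors."""
    mv = np.asarray(motion_vectors, dtype=float)
    angles = np.arctan2(mv[:, 1], mv[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())    # high entropy = disordered motion
```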
7

Konda, Krishna Reddy. "Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1386/1/PhD-Thesis.pdf.

Abstract:
The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to the widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety by detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure and deploy ad-hoc solutions based on current needs by choosing the type and number of sensors, as well as by adjusting the different camera parameters, such as field of view, resolution and, in the case of active PTZ cameras, pan, tilt and zoom. Furthermore, there is also the possibility of event-driven automatic realignment of the camera network to better observe an occurring event. Given the above possibilities, this doctoral study has two objectives. First, we propose a state-of-the-art camera placement and static reconfiguration algorithm; second, we present a distributed, cooperative and dynamic camera reconfiguration algorithm for a network of cameras. The camera placement and user-driven reconfiguration algorithm is based on realistic virtual modelling of a given environment using particle swarm optimization. A real-time camera reconfiguration algorithm that relies on a motion entropy metric extracted from the H.264 compressed stream acquired by the camera is also presented.
8

Dziri, Aziz. "Suivi visuel d'objets dans un réseau de caméras intelligentes embarquées". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22610/document.

Abstract:
Multi-object tracking constitutes a major step in several computer vision applications. The requirements of these applications in terms of performance, processing time, energy consumption and ease of deployment make the use of low-power embedded platforms essential. In this thesis, we designed a multi-object tracking system that achieves real-time processing on a low-cost, low-power embedded smart camera. The tracking pipeline was extended to work in a network of cameras with non-overlapping fields of view. The pipeline is composed of a detection module based on a background subtraction method and a tracker using the probabilistic Gaussian Mixture Probability Hypothesis Density (GMPHD) filter. The background subtraction we developed combines the segmentation produced by the Zipfian Sigma-Delta method with the gradient of the input image, which allows reliable detection with low computational complexity. The output of the background subtraction is processed by a connected-components analysis algorithm to extract the features of moving objects. These features are used as input to an improved version of the GMPHD filter. Indeed, the original GMPHD filter does not manage occlusion problems, so we integrated two new modules into it to handle occlusions between objects. If there is no occlusion, the motion features of objects are used for tracking. When an occlusion is detected, the appearance features of the objects, represented by gray-level histograms, are saved to be used for re-identification at the end of the occlusion. The proposed tracking pipeline was optimized and implemented on an embedded smart camera composed of a Raspberry Pi version 1 board and the RaspiCam camera module. The results show that, besides the low complexity of the pipeline, the tracking quality of our method is close to state-of-the-art methods, and a frame rate of 15–30 fps was achieved on the smart camera depending on the image resolution. In the second part of the thesis, we designed a distributed approach for multi-object tracking in a network of non-overlapping cameras, based on each camera in the network running a GMPHD filter as a tracker. Our approach rests on a probabilistic formulation that models the correspondences between objects as an appearance probability and a space-time probability. The appearance of an object is represented by a vector of m dimensions, which can be considered a histogram. The space-time features are represented by the transition time between two input-output regions in the network and the transition probability from one region to another. Transition time is modeled as a Gaussian distribution with known mean and covariance. The distributed aspect of the proposed approach allows tracking over the network with little communication between the cameras. Several simulations were performed to validate the approach, and the obtained results are promising for its use in a real network of smart cameras.
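
A plain Sigma-Delta background estimator looks as follows; the Zipfian variant and the gradient combination described above are omitted for brevity:

```python
# Sketch: basic Sigma-Delta background subtraction on grayscale frames.
import numpy as np

class SigmaDelta:
    def __init__(self, first_frame, n=2):
        self.bg = first_frame.astype(np.int16)       # background estimate
        self.var = np.full_like(self.bg, 1)          # activity estimate
        self.n = n

    def apply(self, frame):
        f = frame.astype(np.int16)
        self.bg += np.sign(f - self.bg)              # background follows input
        diff = np.abs(f - self.bg)
        self.var += np.sign(self.n * diff - self.var)
        np.clip(self.var, 1, 255, out=self.var)
        return (diff > self.var).astype(np.uint8) * 255   # foreground mask
```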
9

Tahir, Syed Fahad. "Resource-constrained re-identification in camera networks". Thesis, Queen Mary, University of London, 2016. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36123.

Abstract:
In multi-camera surveillance, association of people detected in different camera views over time, known as person re-identification, is a fundamental task. Re-identification is a challenging problem because of changes in the appearance of people under varying camera conditions. Existing approaches focus on improving re-identification accuracy, while no specific effort has yet been put into efficiently utilising the available resources that are normally limited in a camera network, such as storage, computation and communication capabilities. In this thesis, we aim to perform and improve the task of re-identification under constrained resources. More specifically, we reduce the data needed to represent the appearance of an object through a proposed feature-selection method and a difference-vector representation method. The proposed feature-selection method considers the computational cost of feature extraction and the cost of storing the feature descriptor jointly with the feature's re-identification performance to select the most cost-effective and well-performing features. This selection allows us to improve inter-camera re-identification while reducing storage and computation requirements within each camera. The selected features are ranked in order of effectiveness, which enables a further reduction by dropping the least effective features when application constraints require this conformity. We also reduce the communication overhead in the camera network by transferring only a difference vector, obtained from the extracted features of an object and the reference features within a camera, as an object representation for the association. In order to reduce the number of possible matches per association, we group the objects appearing within a defined time interval in uncalibrated camera pairs. Such a grouping improves re-identification, since only those objects that appear within the same time interval in a camera pair need to be associated. For temporal alignment of cameras, we exploit differences between the frame numbers of the detected objects in a camera pair. Finally, in contrast to the pairwise camera associations used in the literature, we propose a many-to-one camera association method for re-identification, where multiple cameras can be candidates for having generated the previous detections of an object. We obtain camera-invariant matching scores from the scores obtained using the pairwise re-identification approaches. These scores measure the chances of a correct match between the objects detected in a group of cameras. Experimental results on publicly available and in-lab multi-camera image and video datasets show that the proposed methods successfully reduce storage, computation and communication requirements while improving the re-identification rate compared to existing re-identification approaches.
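
The cost-aware selection idea can be pictured as ranking features by re-identification gain per unit cost under a budget; the greedy scoring below is an illustrative assumption, whereas the thesis optimizes these criteria jointly:

```python
# Sketch: greedy cost-aware feature selection under a resource budget.
def select_features(features, budget):
    """features: list of (name, reid_gain, cost); returns names within budget."""
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0.0
    for name, gain, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(select_features([("hsv_hist", 0.30, 1.0),
                       ("hog", 0.25, 2.0),
                       ("deep_emb", 0.45, 5.0)], budget=3.0))
# -> ['hsv_hist', 'hog']
```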
10

Yildiz, Enes. "PROVIDING MULTI-PERSPECTIVE COVERAGE IN WIRELESS MULTIMEDIA SENSOR NETWORKS". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/717.

Abstract:
Deployment of cameras in Wireless Multimedia Sensor Networks (WMSNs) is crucial in achieving good coverage, accuracy and fault tolerance. With the decreased costs of wireless cameras, WMSNs provide opportunities for redundant camera deployment in order to obtain multiple disparate views of events, referred to as multi-perspective coverage (MPC). This thesis proposes an optimal camera deployment solution that can achieve full MPC for a given region. The solution is based on a bi-level mixed integer program (MIP) which works by solving two sub-problems, named the master problem and the sub-problem. The master problem identifies a solution based on an initial set of points and then calls the sub-problem to cover the uncovered points iteratively. The bi-level algorithm is then revised to provide MPC at minimum cost in heterogeneous Visual Sensor Networks (VSNs), where cameras may differ in price, resolution, field of view (FoV) and depth of field (DoF). For a given average resolution, area, and variety of camera sensors, we propose a deployment algorithm that minimizes the total cost while guaranteeing 100% MPC of the area and a minimum resolution. Furthermore, the revised bi-level algorithm provides the flexibility of achieving a required resolution on sub-regions of a given region. The numerical results show the superiority of our approach with respect to existing approaches.
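
The master/sub-problem iteration can be sketched as a cutting-plane-style loop; master_solve below stands in for the thesis' MIP and is a placeholder:

```python
# Sketch: bi-level loop, growing the point set until the placement covers all.
def bilevel_cover(candidate_fovs, all_points, master_solve):
    """candidate_fovs[c]: points camera c covers; master_solve: placement MIP."""
    active = {next(iter(all_points))}            # seed the master problem
    while True:
        placement = master_solve(candidate_fovs, active)
        covered = set().union(set(), *(candidate_fovs[c] for c in placement))
        uncovered = all_points - covered         # sub-problem: find a gap
        if not uncovered:
            return placement
        active.add(next(iter(uncovered)))        # add it and re-solve the master
```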

Books on the topic "Multi-Camera network"

1

Aghajan, Hamid K., Andrea Cavallaro, and ScienceDirect (Online service), eds. Multi-camera networks: Principles and applications. Amsterdam: Elsevier, AP, 2009.

2

Multi-Camera Networks. Elsevier, 2009. http://dx.doi.org/10.1016/b978-0-12-374633-7.x0001-8.

3

Cavallaro, Andrea, and Hamid Aghajan. Multi-Camera Networks: Principles and Applications. Elsevier Science & Technology Books, 2009.

4

Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2004.

5

Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2005.


Book chapters on the topic "Multi-Camera network"

1

Koyama, Takashi, and Yusuke Gotoh. "Multi-camera Live Video Streaming over Wireless Network". In Advances in Mobile Computing and Multimedia Intelligence, 144–58. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48348-6_12.

2

Xing, Chang, Sichen Bai, Yi Zhou, Zhong Zhou, and Wei Wu. "Coarse-to-Fine Multi-camera Network Topology Estimation". In Advances in Multimedia Information Processing – PCM 2017, 981–90. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_96.

3

Bo Bo, Nyan, Maarten Slembrouck, Peter Veelaert, and Wilfried Philips. "Distributed Multi-class Road User Tracking in Multi-camera Network For Smart Traffic Applications". In Advanced Concepts for Intelligent Vision Systems, 517–28. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40605-9_44.

4

Mavrinac, Aaron, Jose Luis Alarcon Herrera, and Xiang Chen. "Evaluating the Fuzzy Coverage Model for 3D Multi-camera Network Applications". In Intelligent Robotics and Applications, 692–701. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16584-9_66.

5

Choudhary, Ayesha, Shubham Sharma, Indu Sreedevi, and Santanu Chaudhury. "Real-Time Distributed Multi-object Tracking in a PTZ Camera Network". In Lecture Notes in Computer Science, 183–92. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19941-2_18.

6

Chen, Xiaotang, Kaiqi Huang, and Tieniu Tan. "Learning the Three Factors of a Non-overlapping Multi-camera Network Topology". In Communications in Computer and Information Science, 104–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33506-8_14.

7

Shang, Yang, Qifeng Yu, Yong Xu, Guangwen Jiang, Xiaolin Liu, Sihua Fu, Xianwei Zhu, and Xiaochun Liu. "An Innovative Multi-headed Camera Network: A Displacement-Relay Videometrics Method in Unstable Areas". In Fringe 2013, 871–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-36359-7_161.

8

Tran, Quang Khoi, Khanh Hieu Ngo, Anh Huy Le Dinh, Lu Tien Truong, Hai Quan Tran, and Anh Tuan Trinh. "Development of a Real-Time Obstacle Detection System on Embedded Computer Based on Neural Network and Multi-camera Streaming". In Intelligent Systems and Networks, 298–308. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3394-3_35.

9

Çaldıran, Bekir Eren, and Tankut Acarman. "Multi-network for Joint Detection of Dynamic and Static Objects in a Road Scene Captured by an RGB Camera". In Lecture Notes in Networks and Systems, 837–51. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4960-9_63.

10

Sankaranarayanan, Aswin C., Rama Chellappa, and Richard G. Baraniuk. "Distributed Sensing and Processing for Multi-Camera Networks". In Distributed Video Sensor Networks, 85–101. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-127-1_6.


Conference papers on the topic "Multi-Camera network"

1

Chang, I.-Cheng, Jia-Hong Yang, and Yi-Hsiang Liao. "Multi-Camera Based Social Network Analysis". In 2012 Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2012. http://dx.doi.org/10.1109/iih-msp.2012.48.

2

Gao, Yi, Wanneng Wu, Ao Liu, Qiaokang Liang, and Jianwen Hu. "Multi-Target Multi-Camera Tracking with Spatial-Temporal Network". In 2023 7th International Symposium on Computer Science and Intelligent Control (ISCSIC). IEEE, 2023. http://dx.doi.org/10.1109/iscsic60498.2023.00048.

3

Ganti, Raghu, Mudhakar Srivatsa, and B. S. Manjunath. "Entity reconciliation in a multi-camera network". In ICDCN '16: 17th International Conference on Distributed Computing and Networking. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2833312.2849566.

4

Mhiri, Rawia, Hichem Maiza, Stephane Mousset, Khaled Taouil, Pascal Vasseur, and Abdelaziz Bensrhair. "Obstacle detection using unsynchronized multi-camera network". In 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). IEEE, 2015. http://dx.doi.org/10.1109/urai.2015.7358917.

5

Peng, Peixi, Yonghong Tian, Yaowei Wang, and Tiejun Huang. "Multi-camera Pedestrian Detection with Multi-view Bayesian Network Model". In British Machine Vision Conference 2012. British Machine Vision Association, 2012. http://dx.doi.org/10.5244/c.26.69.

6

Schriebl, Wolfgang, Thomas Winkler, Andreas Starzacher, and Bernhard Rinner. "A pervasive smart camera network architecture applied for multi-camera object classification". In 2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC). IEEE, 2009. http://dx.doi.org/10.1109/icdsc.2009.5289377.

7

Panda, Rameswar, Abir Dasy, and Amit K. Roy-Chowdhury. "Video summarization in a multi-view camera network". In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900089.

8

Shen, Edward, G. Peter K. Carr, Paul Thomas, and Richard Hornsey. "Non-planar target for multi-camera network calibration". In 2009 IEEE Sensors. IEEE, 2009. http://dx.doi.org/10.1109/icsens.2009.5398433.

9

Dai, Xiaochen, and Shahram Payandeh. "Tracked Object Association in Multi-camera Surveillance Network". In 2013 IEEE International Conference on Systems, Man and Cybernetics (SMC 2013). IEEE, 2013. http://dx.doi.org/10.1109/smc.2013.724.

10

Junejo, Imran, Xiaochun Cao, and Hassan Foroosh. "Geometry of a Non-Overlapping Multi-Camera Network". In 2006 IEEE International Conference on Video and Signal Based Surveillance. IEEE, 2006. http://dx.doi.org/10.1109/avss.2006.53.
