
Dissertations / Theses on the topic 'Multi-Camera network'


Consult the top 21 dissertations / theses for your research on the topic 'Multi-Camera network.'


1

Zhao, Jian. "Camera Planning and Fusion in a Heterogeneous Camera Network." UKnowledge, 2011. http://uknowledge.uky.edu/ece_etds/2.

Full text
Abstract:
Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion describes the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application. I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters and the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework for comparing the entire spectrum of approximation algorithms, from greedy and Markov Chain Monte Carlo (MCMC) to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes, including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency and scalability of each technique are analyzed and compared in depth. Extensive experimental results are provided to illustrate the strengths and weaknesses of each method. The second problem, heterogeneous camera fusion, is very complex. Information can be fused at different levels, from pixels or voxels to semantic objects, with large variation in accuracy, communication and computation costs. My focus is on the geometric transformation of shapes between objects observed on different camera planes. This so-called geometric fusion approach usually provides the most reliable fusion at the expense of high computation and communication costs. To tackle the complexity, a hierarchy of camera models with different levels of complexity is proposed to balance the effectiveness and efficiency of the camera network operation. Different calibration and registration methods are then proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system that improves the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
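The covering structure behind the BIP planning problem can be illustrated with a toy greedy approximation, one of the schemes the dissertation compares. Everything below (the candidate placements and their coverage sets) is hypothetical; this is a sketch of the problem shape, not the dissertation's formulation:

```python
# Toy sketch of camera planning as a covering problem (hypothetical data).
# Each candidate placement covers a subset of control points; we seek a small
# set of placements covering all points. The exact problem is a binary
# integer program (NP-hard); greedy is one approximation scheme the
# dissertation compares against MCMC and relaxation techniques.

def greedy_camera_planning(coverage, n_points):
    """coverage: dict placement -> set of covered point indices."""
    uncovered = set(range(n_points))
    chosen = []
    while uncovered:
        # pick the placement covering the most still-uncovered points
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("remaining points cannot be covered")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical candidate placements and their fields of view over 6 points.
candidates = {
    "cam_A": {0, 1, 2},
    "cam_B": {2, 3},
    "cam_C": {3, 4, 5},
    "cam_D": {1, 4},
}
plan = greedy_camera_planning(candidates, 6)
print(plan)  # cam_A then cam_C cover all 6 points
```

Greedy gives a logarithmic approximation guarantee for set cover, which is one reason it serves as the baseline in such comparisons.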
2

Guillén, Alejandro. "Implementation of a Distributed Algorithm for Multi-camera Visual Feature Extraction in a Visual Sensor Network Testbed." Thesis, KTH, Kommunikationsnät, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167415.

Full text
Abstract:
Visual analysis tasks, like detection, recognition and tracking, are computationally intensive, and it is therefore challenging to perform such tasks in visual sensor networks, where nodes may be equipped with low-power CPUs. A promising solution is to augment the sensor network with processing nodes, and to distribute the processing tasks among the processing nodes of the visual sensor network. The objective of this project is to enable a visual sensor network testbed to operate with multiple camera sensors, and to implement an algorithm that computes the allocation of the visual feature tasks to the processing nodes. In the implemented system, the processing nodes can receive and process data from different camera sensors simultaneously. The acquired images are divided into sub-images, whose sizes are computed by solving a linear programming problem. The implemented algorithm performs local optimization in each camera sensor without data exchange with the other cameras, in order to minimize the communication overhead and the computational load of the camera sensors. The implementation work is performed on a testbed that consists of BeagleBone Black computers with IEEE 802.15.4 or IEEE 802.11 USB modules, and the existing code base is written in C++. The implementation is used to assess the performance of the distributed algorithm in terms of completion time. The results show good performance, with a lower average completion time.
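As a rough illustration of the sub-image allocation idea, here is a deliberately simplified model (hypothetical node speeds, not the thesis' actual LP): if node i needs t_i seconds per image row, the makespan-minimizing split assigns rows in proportion to speed 1/t_i, which is the closed-form optimum of the corresponding linear program:

```python
# Minimal sketch of sub-image allocation (simplified model, hypothetical
# per-row times): split an image's rows among processing nodes so that all
# nodes finish at the same time; the LP optimum assigns rows proportionally
# to each node's speed 1/t_i.

def allocate_rows(total_rows, per_row_time):
    speeds = [1.0 / t for t in per_row_time]
    total_speed = sum(speeds)
    # ideal (fractional) shares, then round while preserving the total
    shares = [total_rows * s / total_speed for s in speeds]
    rows = [int(x) for x in shares]
    for i in sorted(range(len(rows)), key=lambda j: shares[j] - rows[j],
                    reverse=True):
        if sum(rows) < total_rows:
            rows[i] += 1
    return rows

# Hypothetical testbed: three BeagleBone-class nodes, one twice as fast.
rows = allocate_rows(480, per_row_time=[2.0, 2.0, 1.0])
print(rows)  # the fast node gets twice the rows of each slow node
```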
3

Jeong, Kideog. "OBJECT MATCHING IN DISJOINT CAMERAS USING A COLOR TRANSFER APPROACH." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/434.

Full text
Abstract:
Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat this appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that best minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs are generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
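The color-transfer idea can be sketched in one dimension (the thesis works with general affine maps over the UV chromaticity plane and Gaussian-mixture appearance models; the component means below are hypothetical): matched component means from two views give point pairs, and a least-squares fit recovers the affine distortion:

```python
# Sketch of the color-transfer idea, reduced to one channel (hypothetical
# values): matched Gaussian-mixture component means from source and
# destination views give pairs (x, y); a least-squares affine map a*x + b
# between them approximates the inter-camera color distortion.

def fit_affine_1d(src, dst):
    n = len(src)
    mx = sum(src) / n
    my = sum(dst) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(src, dst))
    var = sum((x - mx) ** 2 for x in src)
    a = cov / var
    b = my - a * mx
    return a, b

# Matched component means from two disjoint views; these numbers were
# generated by an exact affine map, so the fit recovers it.
src_means = [0.2, 0.4, 0.6, 0.8]
dst_means = [0.5 * x + 0.1 for x in src_means]
a, b = fit_affine_1d(src_means, dst_means)
print(round(a, 3), round(b, 3))  # recovers 0.5 and 0.1
```

In the thesis the per-object-pair fits are further generalized to a single camera-pair transfer function via robust fitting; this sketch shows only one noise-free fit.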
4

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Full text
Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors: one dedicated to operating with the Xbox 360 video game console, and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds within the range of depth measurement accuracy permitted by the Kinect technology. The method calibrates all Kinect units with respect to a reference Kinect. The internal calibration of each sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robot system defines the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle's geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
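The calibration chaining described above can be sketched with homogeneous transforms (the transforms below are hypothetical placeholders, not measured calibration results): once each Kinect is registered to the reference Kinect and the reference to the robot base, composing the 4x4 matrices maps any sensor's points into the robot frame:

```python
# Sketch of calibration chaining with homogeneous transforms (hypothetical
# transforms): T_ref_k registers Kinect k to the reference Kinect, and
# T_base_ref registers the reference to the robot base; their product maps
# points seen by Kinect k into the robot frame.

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Hypothetical calibration results: Kinect 2 sits 1 m to the right of the
# reference Kinect, which sits 2 m in front of the robot base.
T_ref_k2 = translation(1.0, 0.0, 0.0)
T_base_ref = translation(0.0, 0.0, 2.0)
T_base_k2 = matmul4(T_base_ref, T_ref_k2)

print(apply(T_base_k2, (0.0, 0.0, 0.0)))  # Kinect-2 origin in the robot frame
```

Real calibration transforms carry rotations as well; pure translations keep the composition easy to check by hand.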
5

Chen, Huiqin. "Registration of egocentric views for collaborative localization in security applications." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG031.

Full text
Abstract:
This work focuses on collaborative localization between a mobile camera and a static camera for video surveillance. In crowd scenes and sensitive events, surveillance involves locating the wearer of the camera (typically a security officer) as well as the events observed in the images (e.g., to guide emergency services). However, the different points of view of the mobile camera (at ground level) and the video surveillance camera (located high up), along with repetitive patterns and occlusions, make the tasks of relative calibration and localization difficult. We first studied how low-cost positioning and orientation sensors (GPS-IMU) could help refine the estimate of the relative pose between cameras. We then proposed to locate the mobile camera using its epipole in the image of the static camera. To make this estimate robust to outlier keypoint matches, we developed two algorithms: one based on a cumulative approach to derive an uncertainty map of the epipole, the other exploiting the belief function framework. Facing the issue of a large number of elementary sources, some of which are incompatible, we provide a solution based on clustering the belief functions, with a view to further combination with other sources (such as pedestrian detectors and/or GPS data in our application). Finally, geolocating individuals in the scene led us to the problem of data association between views. We proposed to use geometric descriptors and constraints, in addition to the usual appearance descriptors, in the association cost function. We showed the relevance of this geometric information, whether explicit or learned using a neural network.
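The epipole-based localization rests on a simple piece of algebra: the epipole e satisfying Fe = 0 is the image, in one view, of the other camera's centre, so for a rank-2 fundamental matrix F it is the cross product of two independent rows. The sketch below shows only this noise-free algebra with a made-up F; the thesis' contribution is making the estimate robust to outlier matches:

```python
# Sketch of the epipole algebra (hypothetical fundamental matrix): the
# epipole e satisfies F e = 0, so for a rank-2 F it can be read off as the
# cross product of two linearly independent rows, up to scale, in
# homogeneous coordinates.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Build a rank-2 F whose rows are all orthogonal to e = (1, 2, 1).
r1 = (2.0, -1.0, 0.0)
r2 = (1.0, 0.0, -1.0)
F = [r1, r2, tuple(x + y for x, y in zip(r1, r2))]

e = cross(F[0], F[1])
print(e)  # proportional to (1, 2, 1): F @ e == 0 row by row
```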
6

Konda, Krishna Reddy. "Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367752.

Full text
Abstract:
The wide availability of different types of cameras and lenses, together with the falling price of video sensors, has contributed to the widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety by detecting and preventing crimes and dangerous events. Such systems are highly customizable: the user can tailor the sensing infrastructure and deploy ad-hoc solutions based on current needs by choosing the type and number of sensors, and by adjusting camera parameters such as field of view, resolution and, in the case of active PTZ cameras, pan, tilt and zoom. There is also the possibility of event-driven automatic realignment of the camera network to better observe an occurring event. Given these possibilities, this doctoral study has two objectives. The first is to propose a state-of-the-art camera placement and static reconfiguration algorithm; the second is a distributed, cooperative and dynamic camera reconfiguration algorithm for a network of cameras. The camera placement and user-driven reconfiguration algorithm is based on realistic virtual modelling of a given environment using particle swarm optimization. A real-time camera reconfiguration algorithm that relies on a motion-entropy metric extracted from the H.264 compressed stream acquired by the camera is also presented.
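One plausible reading of a motion-entropy metric of the kind mentioned above is the Shannon entropy of the macroblock motion-vector distribution (the vectors below are made up, and real H.264 stream parsing is omitted): a static scene yields low entropy, dispersed motion yields high entropy, which could trigger reconfiguration:

```python
# Sketch of a motion-entropy measure over motion vectors (hypothetical
# vectors; this is an illustrative reading, not necessarily the thesis'
# exact definition): Shannon entropy of the empirical distribution of
# macroblock motion vectors.
import math
from collections import Counter

def motion_entropy(motion_vectors):
    counts = Counter(motion_vectors)
    n = len(motion_vectors)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

still = [(0, 0)] * 16                        # static scene: a single symbol
busy = [(0, 0), (1, 0), (0, 1), (1, 1)] * 4  # four equally likely vectors
print(motion_entropy(still))  # 0.0 bits
print(motion_entropy(busy))   # 2.0 bits
```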
8

Dziri, Aziz. "Suivi visuel d'objets dans un réseau de caméras intelligentes embarquées." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22610/document.

Full text
Abstract:
Multi-object tracking constitutes a major step in several computer vision applications. The requirements of these applications in terms of performance, processing time, energy consumption and ease of deployment make the use of low-power embedded platforms essential. In this thesis, we designed a multi-object tracking system that achieves real-time processing on a low-cost, low-power embedded smart camera. The tracking pipeline was extended to work in a network of cameras with non-overlapping fields of view. The pipeline is composed of a detection module based on background subtraction and a tracker using the probabilistic Gaussian Mixture Probability Hypothesis Density (GMPHD) filter. The background subtraction method we developed combines the segmentation produced by the Zipfian Sigma-Delta method with the gradient of the input image; this combination allows reliable detection with low computational complexity. The output of the background subtraction is processed using a connected-components analysis algorithm to extract the features of moving objects. These features are used as input to an improved version of the GMPHD filter. Indeed, the original GMPHD filter does not manage occlusions, so we integrated two new modules into it to handle occlusions between objects. When there is no occlusion, the motion features of objects are used for tracking. When an occlusion is detected, the appearance features of the objects are saved to be used for re-identification at the end of the occlusion. The proposed tracking pipeline was optimized and implemented on an embedded smart camera composed of the Raspberry Pi version 1 board and the RaspiCam camera module. The results show that, besides the low complexity of the pipeline, the tracking quality of our method is close to state-of-the-art methods. A frame rate of 15-30 fps was achieved on the smart camera, depending on the image resolution. In the second part of the thesis, we designed a distributed approach for multi-object tracking in a network of non-overlapping cameras, built on the fact that each camera in the network runs a GMPHD filter as its tracker. Our approach is based on a probabilistic formulation that models the correspondences between objects as an appearance probability and a space-time probability. The appearance of an object is represented by a vector of dimension m, which can be considered a histogram. The space-time features are represented by the transition time between two entry-exit regions in the network and the transition probability from one region to another. Transition time is modeled as a Gaussian distribution with known mean and covariance. The distributed nature of the proposed approach allows tracking over the network with little communication between the cameras. Several simulations were performed to validate the approach and analyze its complexity. The results are promising for the use of this approach in a real network of smart cameras.
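The Sigma-Delta background estimator at the heart of the detection stage can be sketched per pixel as follows. This is the basic Sigma-Delta recursion, not the Zipfian variant the thesis uses (which additionally subsamples the updates), and the pixel values are hypothetical:

```python
# Per-pixel sketch of basic Sigma-Delta background estimation (hypothetical
# pixel values; the thesis uses the Zipfian variant with subsampled
# updates). The background estimate steps by +/-1 toward each new frame, a
# time-variance estimate tracks activity, and a pixel is foreground when
# its deviation exceeds the variance estimate.

def sigma_delta_step(pixel, bg, var, n=2):
    # background follows the signal by at most 1 level per frame
    if bg < pixel:
        bg += 1
    elif bg > pixel:
        bg -= 1
    diff = abs(pixel - bg)
    # the variance estimate follows n * |difference| the same way
    target = n * diff
    if var < target:
        var += 1
    elif var > target:
        var -= 1
    foreground = diff > var
    return bg, var, foreground

bg, var = 100, 2
for pixel in [100, 100, 160, 160]:  # a bright object suddenly appears
    bg, var, fg = sigma_delta_step(pixel, bg, var)
print(fg)  # the abrupt change is flagged as foreground
```

The appeal on embedded hardware is that every update is an increment, decrement or comparison; there are no multiplications per pixel beyond the small constant n.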
9

Tahir, Syed Fahad. "Resource-constrained re-identification in camera networks." Thesis, Queen Mary, University of London, 2016. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36123.

Full text
Abstract:
In multi-camera surveillance, the association of people detected in different camera views over time, known as person re-identification, is a fundamental task. Re-identification is challenging because people's appearance changes under varying camera conditions. Existing approaches focus on improving re-identification accuracy, while little effort has yet gone into efficiently utilising the resources that are normally limited in a camera network, such as storage, computation and communication capabilities. In this thesis, we aim to perform and improve re-identification under constrained resources. More specifically, we reduce the data needed to represent the appearance of an object through a proposed feature-selection method and a difference-vector representation. The feature-selection method jointly considers the computational cost of feature extraction, the cost of storing the feature descriptor and the feature's re-identification performance, in order to select the most cost-effective and best-performing features. This selection improves inter-camera re-identification while reducing storage and computation requirements within each camera. The selected features are ranked in order of effectiveness, which enables a further reduction: the least effective features can be dropped when application constraints require it. We also reduce the communication overhead in the camera network by transferring only a difference vector, obtained from the extracted features of an object and the reference features within a camera, as the object representation for association. To reduce the number of possible matches per association, we group the objects appearing within a defined time interval in uncalibrated camera pairs. Such grouping improves re-identification, since only objects that appear within the same time interval in a camera pair need to be associated. For temporal alignment of cameras, we exploit the differences between the frame numbers of objects detected in a camera pair. Finally, in contrast to the pairwise camera associations used in the literature, we propose a many-to-one camera association method for re-identification, where multiple cameras can be candidates for having generated the previous detections of an object. We obtain camera-invariant matching scores from the scores produced by the pairwise re-identification approaches; these scores measure the likelihood of a correct match between objects detected in a group of cameras. Experimental results on publicly available and in-lab multi-camera image and video datasets show that the proposed methods successfully reduce storage, computation and communication requirements while improving the re-identification rate compared to existing approaches.
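The cost-aware selection can be caricatured as a greedy budgeted ranking. The feature names, performance numbers and costs below are all hypothetical; the thesis' criterion weighs extraction and storage costs jointly with re-identification performance, which this toy collapses into a single cost per feature:

```python
# Toy sketch of cost-aware feature selection (hypothetical scores and
# costs): rank descriptors by re-identification performance per unit cost,
# then greedily keep features while a per-camera budget allows.

def select_features(features, budget):
    # features: list of (name, performance, cost)
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0.0
    for name, perf, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical descriptor catalog: (name, re-id performance, cost).
catalog = [
    ("color_hist", 0.30, 1.0),
    ("hog",        0.25, 2.0),
    ("gabor",      0.10, 3.0),
    ("lbp",        0.20, 1.0),
]
print(select_features(catalog, budget=4.0))
```

Because the output is ranked by effectiveness, tightening the budget simply truncates the list, mirroring the abstract's point that the least effective features can be dropped under stricter constraints.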
10

Yildiz, Enes. "PROVIDING MULTI-PERSPECTIVE COVERAGE IN WIRELESS MULTIMEDIA SENSOR NETWORKS." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/717.

Full text
Abstract:
The deployment of cameras in Wireless Multimedia Sensor Networks (WMSNs) is crucial to achieving good coverage, accuracy and fault tolerance. With the decreasing cost of wireless cameras, WMSNs provide opportunities for redundant camera deployment in order to obtain multiple disparate views of events, referred to as multi-perspective coverage (MPC). This thesis proposes an optimal solution for camera deployment that can achieve full MPC for a given region. The solution is based on a Bi-Level mixed integer program (MIP) that iterates between two sub-problems: a master problem identifies a solution based on an initial set of points, then calls a sub-problem to cover the remaining uncovered points. The Bi-Level algorithm is then revised to provide MPC at minimum cost in heterogeneous Visual Sensor Networks (VSNs), where cameras may differ in price, resolution, field of view (FoV) and depth of field (DoF). For a given average resolution, area and variety of camera sensors, we propose a deployment algorithm that minimizes the total cost while guaranteeing 100% MPC of the area and a minimum resolution. Furthermore, the revised Bi-Level algorithm provides the flexibility of achieving a required resolution on sub-regions of a given region. The numerical results show the superiority of our approach with respect to existing approaches.
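The k-view coverage requirement behind MPC can be sketched with a greedy loop. The candidate cameras and points below are hypothetical, and this toy ignores view disparity and the Bi-Level master/sub-problem structure of the actual solution; it only shows the "every point needs at least k cameras" constraint:

```python
# Toy sketch of the multi-perspective coverage requirement (hypothetical
# candidates): each point must be seen by at least k cameras; a greedy loop
# adds the unused camera that fills the most missing view-slots.

def greedy_mpc(candidates, n_points, k=2):
    need = {p: k for p in range(n_points)}
    chosen = []
    while any(need.values()):
        def gain(cam):
            if cam in chosen:
                return 0
            return sum(1 for p in candidates[cam] if need[p] > 0)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            raise ValueError("k-coverage infeasible with these candidates")
        chosen.append(best)
        for p in candidates[best]:
            if need[p] > 0:
                need[p] -= 1
    return chosen

# Hypothetical candidate positions and the points each would cover.
candidates = {"c1": {0, 1, 2}, "c2": {0, 1}, "c3": {1, 2}, "c4": {2}}
print(greedy_mpc(candidates, 3, k=2))  # three cameras give 2-coverage
```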
11

Michieletto, Giulia. "Multi-Agent Systems in Smart Environments - from sensor networks to aerial platform formations." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427273.

Full text
Abstract:
In the last twenty years, advancements in pervasive computing and ambient intelligence have led to the rapid development of smart environments, where various cyber-physical systems are required to interact for the purpose of improving human life. The effectiveness of a smart environment thus rests upon the cooperation of multiple entities under the constraints of real-time, high-level performance. In this perspective, the role of multi-agent systems is evident, due to the capability of these architectures, involving large sets of interacting devices, to solve complex tasks by exploiting local computation and communication. Although all multi-agent systems aim for scalability, robustness and autonomy, these networked architectures can be distinguished according to the characteristics of their composing elements. In this thesis, three kinds of multi-agent systems are considered, and for each of them innovative distributed solutions are proposed to solve typical issues related to smart environments. Wireless Sensor Networks - The first part of the thesis focuses on the development of effective clustering strategies for wireless sensor networks deployed in industrial environments. Accounting for both data clustering and network decomposition, a centralized and a distributed algorithm are proposed for grouping nodes into local non-overlapping clusters in order to enhance the network's self-organization capabilities. Multi-Camera Systems - The second part of the thesis deals with the surveillance task for networks of interoperating smart visual sensors. First, the attitude estimation step is handled, determining the orientation of each device in the group with respect to a global inertial frame. Afterwards, the perimeter patrolling problem is addressed, in which the border of a certain area must be repeatedly monitored by a set of planar cameras. Both issues are recast in the distributed optimization framework and solved through the iterative minimization of a suitable cost function. Aerial Platform Formations - The third part of the thesis is devoted to autonomous aerial platforms. Focusing on a single vehicle, two desirable properties are investigated, namely the possibility of independently controlling position and attitude, and robustness to the loss of a motor. Two non-linear controllers are then designed to maintain a platform in static hovering, keeping a constant reference position with constant attitude. Finally, interest moves to swarms of aerial platforms, aiming at both stabilizing a given formation and steering it along predefined directions. For this purpose, bearing rigidity theory is studied for frameworks embedded in the three-dimensional Special Euclidean space. The thesis thus evolves from fixed to fully actuated multi-agent systems, covering smart-environment applications with an increasing number of DoFs.
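The distributed-optimization flavour running through the thesis can be illustrated with the simplest such iteration, average consensus over a graph. The ring topology, step size and initial measurements below are hypothetical; the thesis' schemes minimize richer cost functions, but the pattern of local neighbour-only updates is the same:

```python
# Minimal sketch of a distributed iteration (hypothetical graph and
# measurements): agents repeatedly average with their neighbours, a basic
# consensus update of the kind underlying distributed clustering, attitude
# estimation and patrolling schemes.

def consensus_step(values, neighbours, eps=0.3):
    new = values[:]
    for i, v in enumerate(values):
        # each agent moves toward its neighbours' values, no global data
        new[i] = v + eps * sum(values[j] - v for j in neighbours[i])
    return new

# A 4-agent ring; each agent starts with a different local measurement.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [0.0, 4.0, 8.0, 4.0]
for _ in range(50):
    values = consensus_step(values, neighbours)
print([round(v, 3) for v in values])  # all agents converge to the mean, 4.0
```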
APA, Harvard, Vancouver, ISO, and other styles
12

Howard, Shaun Michael. "Deep Learning for Sensor Fusion." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Po-Yen, and 陳柏諺. "Software Defined Multi-Camera Network." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/b4656h.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year (ROC) 105
The widespread popularity of OpenFlow has led to a significant increase in the number of applications developed for Software-Defined Networking (SDN). We propose the Software-Defined Multi-Camera Network, a flexible network platform built from an SDN controller, OpenVirteX, an OpenFlow switch, and software-defined cameras. We use the Raspberry Pi as our software-defined camera, since it is small, cheap, and flexible, unlike a traditional network camera. We implement a motion detection function on the Raspberry Pi that cooperates with our SDN controller module. We design an authentication mechanism to manage the cameras and users in the network and divide them into different virtual networks. We modify a module in OpenVirteX to provide different QoS (Quality of Service) levels to different virtual networks according to their priority. Simulation results show that the QoS settings work properly and that the network delay overhead is less than 6%. Because the SDN controller has a view of the network layer, we use a Web user interface, a smart campus application, and a backend database running in the application layer to improve performance in the network layer. Simulation results also show that the video stream seen by the user switches between cameras faster in the Software-Defined Multi-Camera Network than in a legacy network.
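The abstract does not detail the camera-side motion detection; a minimal frame-differencing sketch (the thresholds and the function name are my own assumptions, not the thesis's implementation) might look like:

```python
def motion_detected(prev, curr, pixel_thresh=25, ratio_thresh=0.01):
    """Flag motion when enough pixels change between two grayscale frames.

    prev, curr: flat lists of 0-255 grayscale values of equal length.
    A pixel counts as 'changed' when its absolute difference exceeds
    pixel_thresh; motion is declared when the changed fraction of the
    frame reaches ratio_thresh.
    """
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_thresh)
    return changed / len(prev) >= ratio_thresh
```

In a system like the one described, such a flag could be what the camera reports to the controller before requesting bandwidth for its stream.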
APA, Harvard, Vancouver, ISO, and other styles
14

Kulkarni, Purushottam. "SensEye: A multi-tier heterogeneous camera sensor network." 2007. https://scholarworks.umass.edu/dissertations/AAI3254905.

Full text
Abstract:
Rapid technological developments in sensing devices, embedded platforms and wireless communication technologies have enabled and led to a large research focus on sensor networks. Traditional sensor networks have been designed as networks of homogeneous sensor nodes. Single-tier networks consisting of homogeneous nodes achieve only a subset of application requirements and often sacrifice others. In this thesis, I propose the notion of multi-tier heterogeneous sensor networks: sensors organized hierarchically into multiple tiers. With intelligent use of resources across tiers, multi-tier heterogeneous sensor networks have the potential to simultaneously achieve the conflicting goals of network lifetime, sensing reliability and functionality. I consider a class of sensor networks---camera sensor networks---wireless networks with image sensors. I address the issues of automatic configuration, initialization and design of camera sensor networks. Like any sensor network, initialization of cameras is an important prerequisite for camera sensor network applications. Since camera sensor networks have varying degrees of infrastructure support and resource constraints, a single initialization procedure is not appropriate. I have proposed the notions of accurate and approximate initialization to initialize cameras with varying capabilities and resource constraints. I have developed and empirically evaluated Snapshot, an accurate calibration protocol tailored for sensor network deployments. I have also developed approximate initialization techniques that estimate the degree and region of overlap at each camera. Further, I demonstrate the use of these estimates to instantiate camera sensor network applications. 
Compared to manual calibration, which is inefficient, error prone, and can take hours to calibrate several cameras, the automated calibration protocol is accurate and greatly reduces calibration time: tens of seconds to calibrate a single camera, scaling easily to several cameras in minutes. The approximate techniques demonstrate the feasibility of initializing low-power, resource-constrained cameras with little or no infrastructure support. With regard to the design of camera sensor networks, I present the design and implementation of SensEye, a multi-tier heterogeneous camera sensor network, and address the energy-reliability tradeoff. Multi-tier networks provide several levels of reliability and energy usage based on the type of sensor used for application tasks. Using SensEye, I demonstrate how multi-tier networks can simultaneously achieve the system goals of energy efficiency and reliability.
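As a hedged illustration of the "degree of overlap" estimates this abstract refers to, the sketch below computes the fraction of one camera's viewed region covered by another's, using axis-aligned 2D footprints. The rectangle model and the function name are assumptions for illustration, not Snapshot's actual method.

```python
def overlap_fraction(a, b):
    """Fraction of region a covered by region b.

    a, b: axis-aligned ground-plane footprints (xmin, ymin, xmax, ymax),
    a crude stand-in for a camera's viewed region.
    """
    # width and height of the intersection rectangle (0 if disjoint)
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (w * h) / area_a
```

An estimate of this kind per camera pair is enough to decide, for instance, which low-power camera can wake which high-fidelity camera in a multi-tier deployment.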
APA, Harvard, Vancouver, ISO, and other styles
15

Yang, Jia-Hong, and 楊家泓. "Multi-Camera Based Social Network and Personality Analysis." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/60056296390991208160.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year (ROC) 99
Social network analysis has long been a popular topic; a social network is defined as the collection of social interactions among the members of a group. Through social network analysis we can understand these interactions, and the analysis can be applied to any field involving people. In schools, teachers care about sub-groups, leaders, and isolates among students. In group psychotherapy, social interactions between members are one indicator of treatment outcomes. In online communities, discovering the social network helps developers build user-friendly products. Personality analysis is also a popular topic; a person's behavioral style is called his personality. Personality analysis can likewise be applied to any field involving people. In schools, students with different behavioral styles need different education. In interviews, companies hire people whose personality fits their needs. In criminology, analyzing a criminal's behavioral style helps solve cases. However, psychological social network analysis and personality analysis rely on written tests and substantial labor to obtain information about friendly social interactions, and they represent social interactions as directional relations. Social network analysis based on computer vision has used only the frequency with which people appear together as the feature of a social relation, representing social interactions as non-directional relations. Therefore, we employ a multi-camera system with computer vision techniques to analyze people's social behaviors. A social behavior consists of a target, a body sign, and emotion information. By analyzing people's social interactions, we can discover their social attitudes toward other members, and these attitudes construct the social network. Besides friendly relations, we also consider hostile relations, and we represent social interactions as directional relations. These choices make our social network analysis closer to reality. 
By analyzing all of a person's social behaviors, we can discover his behavioral tendencies, which constitute his personality. Finally, experiments show that we can discover the social network and personalities by analyzing people's social interactions, and the analysis results are similar to ground truth obtained by human observation. Moreover, our approach saves considerable labor compared with psychological social network analysis and personality analysis.
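Purely as an illustrative sketch (the thesis's actual representation is not given here), the directional friendly/hostile attitudes described above could be accumulated into a weighted directed graph, from which isolates are read off; the +1/-1 attitude encoding is an assumption.

```python
def build_social_graph(interactions):
    """Accumulate observed social behaviors into a directed, signed graph.

    interactions: list of (actor, target, attitude) tuples, with attitude
    +1 for a friendly behavior and -1 for a hostile one (an assumption).
    Edge weights sum the attitudes actor has shown toward target.
    """
    graph = {}
    for actor, target, attitude in interactions:
        edges = graph.setdefault(actor, {})
        edges[target] = edges.get(target, 0) + attitude
    return graph

def isolates(graph, members):
    """Members with no positive incoming or outgoing relation."""
    linked = set()
    for actor, edges in graph.items():
        for target, w in edges.items():
            if w > 0:
                linked.add(actor)
                linked.add(target)
    return [m for m in members if m not in linked]
```

Sub-groups and leaders would then correspond to dense positive components and high in-degree nodes of the same graph.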
APA, Harvard, Vancouver, ISO, and other styles
16

Peng, Yi-Hong, and 彭依弘. "The Design of Multi-Object Tracking System in a Multi-Camera Network." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/zpmg55.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electrical and Control Engineering
Academic year (ROC) 104
Nowadays, in public security surveillance, cameras are widely used to record societal security and criminal events. However, as the number of surveillance cameras grows, browsing video after an event consumes considerable time and human resources. To address this problem, this thesis designs a system that tracks multiple objects in a multi-camera network. Users can select objects from video clips, and the system tracks them across different cameras. The thesis makes three contributions. First, it proposes a feature modulation mechanism that helps the system track different objects accurately. Second, it proposes a camera-switching mechanism: using the architecture of the multi-camera network, the system determines the next camera in which the objects will appear, improving tracking efficiency. Third, it completes a prototype of multi-object tracking in a multi-camera network, integrating object and camera information into the monitoring system and reducing the burden on supervisors who investigate video afterwards.
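One common way to realize the appearance matching that cross-camera tracking needs is color-histogram intersection; the sketch below is purely illustrative (bin count, threshold, and function names are my assumptions, not the thesis's feature modulation mechanism).

```python
def histogram(pixels, bins=8):
    """Normalized grayscale histogram of a cropped object image."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    return [v / len(pixels) for v in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def reidentify(query, candidates, threshold=0.6):
    """Index of the best-matching candidate crop, or None if too dissimilar."""
    hq = histogram(query)
    scores = [intersection(hq, histogram(c)) for c in candidates]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```

A tracker would run such a match only against detections in the camera predicted by the network topology, which is what makes the camera-switching mechanism save work.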
APA, Harvard, Vancouver, ISO, and other styles
17

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thesis, 2013. http://hdl.handle.net/10393/23976.

Full text
Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor, one designed to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the depth measurement accuracy permitted by the Kinect technology. The method calibrates all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot. 
The latter can then be used to interact, under visual guidance, with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometric properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
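The pairwise registration of each sensor to a reference can be illustrated, in a much-reduced 2D form, by a closed-form least-squares rigid fit between corresponding points seen by both sensors. This is only a sketch of the idea; the thesis's actual calibration works on 3D data and the function names are invented.

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst.

    src, dst: equal-length lists of corresponding (x, y) points, e.g.
    a calibration target seen by two sensors. Returns (theta, (tx, ty)).
    """
    n = len(src)
    # centroids of both point sets
    cax = sum(p[0] for p in src) / n; cay = sum(p[1] for p in src) / n
    cbx = sum(p[0] for p in dst) / n; cby = sum(p[1] for p in dst) / n
    # accumulate dot and cross products of the centered points
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    th = math.atan2(s_sin, s_cos)
    c, s = math.cos(th), math.sin(th)
    # translation maps the rotated src centroid onto the dst centroid
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return th, (tx, ty)
```

Chaining such fits, each sensor's frame is expressed in the reference sensor's frame, which is the essence of calibrating the network to one reference Kinect.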
APA, Harvard, Vancouver, ISO, and other styles
18

Beach, David Michael. "Multi-camera benchmark localization for mobile robot networks." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=362308&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Beach, David Michael. "Multi-camera benchmark localization for mobile robot networks." 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=369735&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Raman, R. "Study on models for smart surveillance through multi-camera networks." Thesis, 2013. http://ethesis.nitrkl.ac.in/5643/1/Final_Thesis.pdf.

Full text
Abstract:
In an ever-changing world, visual surveillance, once a niche concern, has become an indispensable component of security systems, and multi-camera networks are the best-suited way to realize it. Even though a multi-camera network has manifold advantages over single-camera surveillance, it adds overhead in processing, memory, energy consumption, and installation cost, and makes the system more complex to manage. This thesis explores different challenges in the domain of multi-camera networks and surveys the issues of camera calibration and localization. The survey presents an in-depth study of the evolution of camera localization over time, which helps in appreciating both the complexity and the necessity of camera localization in multi-camera networks. The thesis proposes a smart visual surveillance model that studies the phases of multi-camera network development and proposes algorithms at the levels of camera placement and camera control. It proposes a camera placement technique for gait pattern recognition and a smart camera control scheme governed by an occlusion determination algorithm, which reduces the number of active cameras, eliminating much overhead without compromising surveillance quality. The proposed camera placement technique has been tested on data acquired from a corridor of Vikram Sarabhai Hall of Residence, NIT Rourkela; the algorithm outputs probable camera placements as a 3D plot depicting their suitability for gait pattern recognition. The control flow between cameras is governed by a three-step algorithm that estimates the direction and apparent speed of moving subjects to determine the chances of occlusion between them. The algorithms are tested on self-acquired data as well as the existing CASIA Dataset A gait database for both direction determination and occlusion estimation.
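The occlusion-determination idea (direction and apparent speed of moving subjects) can be caricatured by linearly extrapolating two tracks and checking whether they come close. The horizon and threshold values below are invented for illustration and are not the thesis's three-step algorithm.

```python
def occlusion_risk(p1, v1, p2, v2, horizon=10, thresh=1.0):
    """Predict whether two subjects will come within thresh of each other.

    p1, p2: current image-plane positions (x, y); v1, v2: per-frame
    velocities estimated from the tracks. Positions are extrapolated
    linearly over the next `horizon` frames.
    """
    for t in range(horizon + 1):
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if (dx * dx + dy * dy) ** 0.5 < thresh:
            return True
    return False
```

A controller could use such a flag to hand the subjects over to a second camera only when an occlusion is predicted, keeping the number of active cameras low otherwise.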
APA, Harvard, Vancouver, ISO, and other styles
21

Song, Chang-Yu, and 宋長諭. "Exploiting Inter-View Correlation for Bandwidth-Efficient Data Gathering in Wireless Multi-Camera Networks." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/59220995283443303629.

Full text
Abstract:
碩士
國立臺灣大學
電信工程學研究所
103
In this thesis, we investigate the problem of correlated data gathering in wireless multi-camera networks by considering the I-frame selection problem and the P-frame association problem. Since multiple cameras may be deployed in a neighborhood with overlapping perspectives of the street views, we exploit the capability of transmission overhearing among cameras. If a camera can overhear transmissions from previously scheduled nearby cameras, it can reference their images and reduce the number of bits that must be delivered to the aggregator by applying multiview encoding. Unlike related works, which often use geometric information to predict correlation among cameras, we use a multiview video encoder to measure realistic camera correlation, so no performance is lost to prediction error. We further propose three I-frame selection algorithms based on branch-and-bound, simulated annealing, and graph approximation, and introduce a P-frame association method that determines the reference structure for all cameras so that the number of transmitted bits is minimized. Moreover, since real-world applications may require multiple transmission rounds to deliver the collected images to the data aggregator, we also describe how to apply the correlated data gathering scheme via overhearing source coding over more than one transmission round. To evaluate the proposed algorithms, we use 3D modeling software to generate quasi-realistic city views for all cameras and an H.264 multiview video encoding reference software to encode the collected images. Based on the evaluation of a semi-realistic multi-camera network, we compare the performance gain of the three proposed algorithms against a baseline approach and point out the trade-offs among them. 
That is, the graph approximation algorithm performs well when the network is highly correlated, whereas the simulated annealing algorithm may be the better choice when the network correlation level is low. We also show that our approaches achieve a 35% transmission reduction for highly correlated multi-camera networks, whereas only 15% is achieved when geometric correlation is used. These results motivate further investigation along this direction.
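To make the simulated-annealing variant concrete, here is a toy sketch of selecting k I-frame cameras. The cost model, cooling schedule, parameters, and names are my own assumptions; the thesis measures bits with a real multiview encoder rather than this correlation-scaled proxy.

```python
import math
import random

def total_bits(iframes, base, corr):
    """Proxy cost: I-frame cameras send full frames (base[c] bits); every
    other camera sends a residual scaled by (1 - best correlation with
    any selected I-frame camera)."""
    cost = 0.0
    for c in range(len(base)):
        if c in iframes:
            cost += base[c]
        else:
            cost += base[c] * (1.0 - max(corr[c][i] for i in iframes))
    return cost

def anneal_iframes(base, corr, k, iters=2000, t0=1.0, seed=0):
    """Pick k I-frame cameras (approximately) minimizing total_bits."""
    rng = random.Random(seed)
    n = len(base)
    cur = set(rng.sample(range(n), k))
    cur_cost = total_bits(cur, base, corr)
    best, best_cost = set(cur), cur_cost
    for it in range(iters):
        temp = t0 * (1.0 - it / iters) + 1e-9  # linear cooling schedule
        cand = set(cur)
        cand.remove(rng.choice(sorted(cand)))               # swap one selected
        cand.add(rng.choice(sorted(set(range(n)) - cand)))  # for one unselected
        cand_cost = total_bits(cand, base, corr)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_cost < cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / temp):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = set(cur), cur_cost
    return best, best_cost
```

On a small instance where one camera is strongly correlated with all others, the search settles on that camera as the I-frame source, which is the intuition behind exploiting overhearing.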
APA, Harvard, Vancouver, ISO, and other styles