To see the other types of publications on this topic, follow the link: Système de multi-Camera.

Dissertations on the topic "Système de multi-Camera"


Consult the top 46 dissertations for your research on the topic "Système de multi-Camera".

Next to every source in the list of references there is an "Add to bibliography" button. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract whenever these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Mennillo, Laurent. „Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC022/document.

Annotation:
This Ph.D. thesis, which has been carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. Digital video cameras, on the other hand, are much cheaper, which has led to their increased use in newer vehicles to provide driving assistance functions such as parking assistance or emergency braking. Furthermore, this relatively common implementation now makes it possible to consider their use to reconstruct the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and oldest category concerns stereo methods, which use several cameras with overlapping fields of view in order to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs with a short baseline, which allows for the dense matching of feature points to estimate disparity maps that are then used to compute the motions of the scene. The other category concerns monocular methods, which only use one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of other objects. These methods are more difficult in that they have to address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, but also the problem of the relative scale estimation of these objects before their aggregation within the static scene. The industrial motive for this work lies in the use of the multi-camera systems already present in actual vehicles, mostly composed of a front camera accompanied by several surround fisheye cameras in wide-baseline stereo, which has led to the development of a multibody reconstruction method dedicated to such heterogeneous systems. The proposed method is incremental and allows for the reconstruction of sparse mobile points as well as their trajectory using several geometric constraints. Finally, a quantitative and qualitative evaluation conducted on two separate datasets, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems, is provided.
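The motion-segmentation step mentioned above can be made concrete with a small sketch. The constraint below is a generic stand-in, not the exact set of constraints developed in the thesis: under a known ego-motion (R, t), a static point must satisfy the epipolar constraint, so a large residual flags a point that moves independently.

```python
import numpy as np

def skew(t):
    """3x3 cross-product matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def dynamic_mask(R, t, x1, x2, thresh=1e-3):
    """x1, x2: (N, 3) normalized homogeneous rays in views 1 and 2.
    A static point satisfies x2^T E x1 = 0 with E = [t]x R."""
    E = skew(t) @ R
    residual = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
    return residual > thresh  # True where a point moved independently

# Static points obey X2 = R @ X1 + t; one point gets an extra motion.
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])   # forward ego-motion (invented)
X1 = np.array([[0.5, 0.2, 4.0], [-1.0, 0.1, 6.0], [0.8, -0.3, 5.0]])
X2 = X1 @ R.T + t
X2[2] += np.array([0.4, 0.0, 0.0])            # third point moved sideways
x1 = X1 / X1[:, 2:]
x2 = X2 / X2[:, 2:]
print(dynamic_mask(R, t, x1, x2))             # [False False  True]
```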
2

Petit, Benjamin. „Téléprésence, immersion et interactions pour la reconstruction 3D temps-réel“. PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00584001.

Annotation:
Online immersive and collaborative 3D environments are rapidly emerging. They raise the issues of the sense of presence within virtual worlds, of immersion, and of interaction capabilities. Multi-camera 3D systems make it possible to extract geometric information (a 3D model) of the observed scene from photometric information. A textured digital model can then be computed in real time and used to embody the user's presence in the digital space. In this thesis we studied how to couple the capacity for presence provided by such a system with visual immersion and co-located interactions. This led to an application that combines a head-mounted display, an optical tracking system, and a multi-camera system, allowing the user to see his or her own 3D model correctly aligned with his or her body and mixed with the virtual objects. We also set up a telepresence experiment across three sites (Bordeaux, Grenoble, Orléans) that lets several users meet in 3D and collaborate remotely. The textured 3D model gives a very strong impression of the other person's presence and reinforces physical interactions through body language and facial expressions. Finally, we studied how to extract velocity information from the camera data: using optical flow and 2D/3D correspondences, we can estimate the dense motion of the 3D model. This data extends the interaction capabilities by enriching the 3D model.
3

Kim, Jae-Hak. „Camera Motion Estimation for Multi-Camera Systems“. The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Annotation:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed, modified SVD method. This is the first study finding linear relations for certain types of generalized cameras and performing experiments using our proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast searching method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP.

We tested our proposed methods by performing experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimation of the translation of omnidirectional cameras and in the estimation of the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using L∞ to estimate the relative motion of multi-camera systems could be achieved.
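The first contribution can be illustrated with a generic sketch (the thesis' constrained minimization is not reproduced here): with rotation known, each correspondence gives one equation that is linear in the translation, and the stacked system's null vector, obtained by SVD, is the translation direction.

```python
import numpy as np

def translation_from_known_rotation(R, x1, x2):
    """R: (3, 3) known rotation; x1, x2: (N, 3) normalized homogeneous rays.
    The epipolar constraint x2^T [t]x (R x1) = 0 is linear in t, so t is the
    null vector of the stacked system, recovered here with an SVD."""
    Rx1 = x1 @ R.T                      # first-view rays rotated into view 2
    A = np.cross(Rx1, x2)               # rows satisfy A @ t = 0
    t = np.linalg.svd(A)[2][-1]         # last right-singular vector
    return t / np.linalg.norm(t)

# Noiseless synthetic check: random rotation, known answer up to sign.
rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R = Q if np.linalg.det(Q) > 0 else -Q          # proper rotation
t_true = np.array([1.0, 0.2, -0.5])
t_true /= np.linalg.norm(t_true)
X1 = rng.uniform(1.0, 5.0, size=(20, 3))       # scene points, view-1 frame
X2 = X1 @ R.T + t_true                         # same points, view-2 frame
x1, x2 = X1 / X1[:, 2:], X2 / X2[:, 2:]
print(abs(translation_from_known_rotation(R, x1, x2) @ t_true))  # ~1.0
```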
4

Kim, Jae-Hak. „Camera motion estimation for multi-camera systems /“. View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.

5

Jiang, Xiaoyan [Author]. „Multi-Object Tracking-by-Detection Using Multi-Camera Systems / Xiaoyan Jiang“. München : Verlag Dr. Hut, 2016. http://d-nb.info/1084385325/34.

6

Krucki, Kevin C. „Person Re-identification in Multi-Camera Surveillance Systems“. University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448997579.

7

Hammarlund, Emil. „Target-less and targeted multi-camera color calibration“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.

Annotation:
Multiple camera arrays are beginning to see more widespread use in a variety of different applications, be it for research purposes or for enhancing the viewing experience in entertainment. However, when using multiple cameras the images produced are often not color consistent, due to a variety of reasons such as differences in lighting, chip-level differences, etc. To address this, there exists a multitude of different color calibration algorithms. This paper examines two different color calibration algorithms, one targeted and one target-less. Both methods were implemented in Python using the libraries OpenCV, Matplotlib, and NumPy. Once the algorithms had been implemented, they were evaluated based on two metrics: color range homogeneity and color accuracy to target values. The targeted color calibration algorithm was more effective at improving the color accuracy to ground truth than the target-less color calibration algorithm, but the target-less algorithm deteriorated the color range homogeneity less than the targeted color calibration algorithm. After both methods were tested, an improvement of the targeted color calibration algorithm was attempted. The resulting images were then evaluated based on the same two criteria as before; the modified version of the targeted color calibration algorithm performed better than the original targeted algorithm with respect to color range homogeneity while maintaining a similar level of performance with respect to color accuracy to ground truth. Furthermore, when the color range homogeneity of the modified targeted algorithm was compared with that of the target-less algorithm, the modified targeted algorithm performed similarly to the target-less algorithm. Based on these results, it was concluded that the targeted color calibration was superior to the target-less algorithm.
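As a rough illustration of the targeted approach described above (an assumed formulation, not the paper's code): the same chart is shot by every camera, and a per-camera 3x3 correction matrix is fitted by least squares from the measured chart patches to their reference values. The patch values here are synthetic placeholders.

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """measured, reference: (N, 3) RGB patch values in [0, 1].
    Returns the 3x3 M minimizing ||measured @ M - reference||."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def apply_color_matrix(image, M):
    """image: (H, W, 3) float RGB; applies the fitted correction."""
    return np.clip(image.reshape(-1, 3) @ M, 0.0, 1.0).reshape(image.shape)

# Synthetic example: a camera with a mild color cast over a 24-patch chart.
rng = np.random.default_rng(1)
reference = rng.uniform(0.05, 0.95, size=(24, 3))   # chart ground truth
cast = np.array([[1.08, 0.02, 0.00],
                 [0.01, 0.95, 0.02],
                 [0.00, 0.03, 1.05]])
measured = reference @ cast                          # what the camera reports
M = fit_color_matrix(measured, reference)
print(np.allclose(measured @ M, reference, atol=1e-6))  # True
```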
8

Åkesson, Ulrik. „Design of a multi-camera system for object identification, localisation, and visual servoing“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.

Annotation:
In this thesis, the development of a stereo camera system for an intelligent tool is presented. The task of the system is to identify and localise objects so that the tool can guide a robot. Different approaches to object detection have been implemented and evaluated, and the system's ability to localise objects has been tested. The results show that the system can achieve a localisation accuracy below 5 mm.
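To put that accuracy figure in context, here is a back-of-envelope stereo sketch with invented parameters (not the thesis' actual camera data): depth from disparity follows Z = fB/d, so a disparity error dd maps to a depth error of roughly Z^2 * dd / (fB).

```python
# All numbers below are invented for illustration.
f_px = 1400.0   # focal length in pixels (assumed)
B = 0.12        # stereo baseline in metres (assumed)
d = 84.0        # measured disparity in pixels

Z = f_px * B / d                    # depth: 2.0 m
dZ = Z ** 2 / (f_px * B) * 0.2      # ~4.8 mm error for a 0.2 px disparity error
print(Z, dZ)
```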
9

Turesson, Eric. „Multi-camera Computer Vision for Object Tracking: A comparative study“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.

Annotation:
Background: Video surveillance is a growing area that can help deter crime, support investigations, or gather statistics. These are just some areas where video surveillance can aid society. However, introducing tracking, more specifically tracking between cameras in a network, could increase the efficiency of video surveillance. Automating this process could reduce the need for humans to monitor and review footage, since the system can track and inform the relevant people on its own. This has a wide array of uses, such as forensic investigation, crime alerting, or tracking down people who have disappeared. Objectives: First, we want to investigate the common setup of real-time multi-target multi-camera tracking (MTMCT) systems. Next, we want to investigate how the components in an MTMCT system affect each other and the complete system. Lastly, we want to see how image enhancement can affect the MTMCT. Methods: To achieve our objectives, we conducted a systematic literature review to gather information. Using this information, we implemented an MTMCT system in which we evaluated the components to see how they interact in the complete system. Lastly, we implemented two image enhancement techniques to see how they affect the MTMCT. Results: As we discovered, MTMCT is most often constructed using detection for discovering objects, tracking to follow the objects within a single camera, and a re-identification method to ensure that objects across cameras receive the same ID. The different components have a considerable effect on each other and can either degrade or improve one another; for example, the quality of the bounding boxes affects the data that re-identification can extract. We found that the image enhancement we used did not introduce any significant improvement. Conclusions: The most common structure for MTMCT is detection, tracking, and re-identification. From our findings, all the components affect each other, but re-identification is the one most affected by the other components and by image enhancement. The two tested image enhancement techniques could not introduce enough improvement, but other image enhancement techniques could be used to make the MTMCT perform better. The MTMCT system we constructed did not manage to reach real-time performance.
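The re-identification step named in the conclusions can be sketched as follows; this is a toy stand-in, not the study's implementation, with random vectors in place of a real re-ID network's embeddings:

```python
import numpy as np

def link_tracks(emb_a, emb_b, max_dist=0.4):
    """Greedy one-to-one cross-camera matching on cosine distance.
    emb_a, emb_b: (Na, D), (Nb, D) L2-normalized track embeddings."""
    dist = 1.0 - emb_a @ emb_b.T          # cosine distance matrix
    links = []
    while np.isfinite(dist).any() and dist.min() < max_dist:
        i, j = np.unravel_index(dist.argmin(), dist.shape)
        links.append((i, j))
        dist[i, :] = np.inf               # each track can be used once
        dist[:, j] = np.inf
    return links

rng = np.random.default_rng(3)
base = rng.normal(size=(3, 128))          # three identities
cam1 = base + rng.normal(0.0, 0.05, size=base.shape)             # camera 1
cam2 = base[[2, 0, 1]] + rng.normal(0.0, 0.05, size=base.shape)  # shuffled
cam1 /= np.linalg.norm(cam1, axis=1, keepdims=True)
cam2 /= np.linalg.norm(cam2, axis=1, keepdims=True)
print(link_tracks(cam1, cam2))   # pairs (0, 1), (1, 2), (2, 0) in some order
```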
10

Nadella, Suman. „Multi camera stereo and tracking patient motion for SPECT scanning systems“. Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.

Annotation:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Feature matching in multiple cameras; Multi camera stereo computation; Patient motion tracking; SPECT imaging. Includes bibliographical references (p. 84-88).
11

Knorr, Moritz [Author]. „Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr“. Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

12

Sankaranarayanan, Aswin C. „Robust and efficient inference of scene and object motion in multi-camera systems“. College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9855.

Annotation:
Thesis (Ph. D.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
13

Bachnak, Rafic A. „Development of a stereo-based multi-camera system for 3-D vision“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.

14

Knorr, Moritz [Author], and C. [Academic supervisor] Stiller. „Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr ; Betreuer: C. Stiller“. Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856798/34.

15

Esquivel, Sandro [Author]. „Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel“. Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.

16

Lamprecht, Bernhard. „A testbed for vision based advanced driver assistance systems with special emphasis on multi-camera calibration and depth perception /“. Aachen : Shaker, 2008. http://d-nb.info/990314847/04.

17

Michieletto, Giulia. „Multi-Agent Systems in Smart Environments - from sensor networks to aerial platform formations“. Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427273.

Annotation:
In the last twenty years, the advancements in pervasive computing and ambient intelligence have led to a fast development of smart environments, where various cyber-physical systems are required to interact for the purpose of improving human life. The effectiveness of a smart environment thus rests upon the cooperation of multiple entities under the constraints of real-time high-level performance. In this perspective, the role of multi-agent systems is evident, due to the capability of these architectures, involving large sets of interactive devices, to solve complex tasks by exploiting local computation and communication. Although all multi-agent systems are designed for scalability, robustness and autonomy, these networked architectures can be distinguished according to the characteristics of their composing elements. In this thesis, three kinds of multi-agent systems are taken into account, and for each of them innovative distributed solutions are proposed to solve typical issues related to smart environments. Wireless Sensor Networks - The first part of the thesis is focused on the development of effective clustering strategies for wireless sensor networks deployed in industrial environments. Accounting for both data clustering and network decomposition, a centralized and a distributed algorithm are proposed for grouping nodes into local non-overlapping clusters in order to enhance the network's self-organization capabilities. Multi-Camera Systems - The second part of the thesis deals with the surveillance task for networks of interoperating smart visual sensors. First, the attitude estimation step is handled, namely the determination of the orientation of each device in the group with respect to a global inertial frame. Afterwards, the perimeter patrolling problem is addressed, in which the border of a certain area must be repeatedly monitored by a set of planar cameras. Both issues are recast in the distributed optimization framework and solved through the iterative minimization of a suitable cost function. Aerial Platform Formations - The third part of the thesis is devoted to autonomous aerial platforms. Focusing on a single vehicle, two desirable properties are investigated, namely the possibility to independently control the position and the attitude, and the robustness to the loss of a motor. Two non-linear controllers are then designed to maintain a platform in static hovering, keeping a constant reference position and a constant attitude. Finally, the interest moves to swarms of aerial platforms, aiming at both stabilizing a given formation and steering it along pre-defined directions. For this purpose, the bearing rigidity theory is studied for frameworks embedded in the three-dimensional Special Euclidean space. The thesis thus evolves from fixed to fully actuated multi-agent systems, accounting for smart-environment applications dealing with an increasing number of DoFs.
18

Perrot, Clément. „Imagerie directe de systèmes planétaires avec SPHERE et prédiction des performances de MICADO sur l’E-ELT“. Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC212/document.

Annotation:
This thesis is set in the context of the study of the formation and evolution of planetary systems using high-contrast imaging, also known as direct imaging in contrast to so-called "indirect" detection methods. The work I present in this manuscript is divided into two distinct parts. The first part concerns the observational component of my thesis, using the SPHERE instrument installed at the Very Large Telescope. This work was done as part of the consortium of the same name. The purpose of the SPHERE instrument is to detect and characterize young and massive exoplanets, but also circumstellar disks ranging from very young protoplanetary disks to older debris disks. In this manuscript, I present my contribution to the SHINE program, a large survey with an integration time of 200 nights' worth of observation, the goal of which is the detection of new exoplanets and the spectral and orbital characterization of some previously known companions. I also present the two studies of circumstellar disks that I carried out, around the stars HD 141569 and HIP 86598. The first study led to the discovery of concentric rings at a few tens of AU from the star, along with an unusual flux asymmetry in the disk. The second study concerns the discovery of a debris disk that also shows an unusual flux asymmetry. The second part concerns the instrumental component of my thesis work, done within the MICADO consortium, in charge of the design of the camera of the same name, which will be one of the first-light instruments of the European Extremely Large Telescope (ELT). In this manuscript, I present the study in which I define the design of some components of the coronagraphic mode of MICADO while taking into account the constraints of the instrument, which is not dedicated to high-contrast imaging, unlike SPHERE.
19

Lamprecht, Bernhard [Author]. „A Testbed for Vision-based Advanced Driver Assistance Systems with Special Emphasis on Multi-Camera Calibration and Depth Perception / Bernhard Lamprecht“. Aachen : Shaker, 2008. http://d-nb.info/1161303995/34.

20

Esparza, García José Domingo [Verfasser], und Bernd [Akademischer Betreuer] Jähne. „3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems / José Domingo Esparza García ; Betreuer: Bernd Jähne“. Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180501810/34.

21

Šolony, Marek. „Lokalizace objektů v prostoru“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236626.

Annotation:
Virtual reality systems are nowadays a common part of many research institutes due to their low cost and effective visualization of data. They mostly allow visualization and exploration of virtual worlds, but many lack user interaction. In this paper we suggest a multi-camera optical system that allows effective user interaction, thereby increasing the immersion of the virtual system. This paper describes the calibration process of multiple cameras using point correspondences.
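A classic way to relate a camera pair from point correspondences alone, and a plausible building block for such a system (the thesis' exact pipeline is not reproduced here), is the eight-point algorithm for the fundamental matrix. The sketch below skips the usual coordinate normalization for brevity and checks itself on synthetic noiseless data:

```python
import numpy as np

def eight_point(p1, p2):
    """p1, p2: (N, 2) matched pixel coordinates, N >= 8. Returns a rank-2
    fundamental matrix F with x2_h^T F x1_h ~ 0 for each correspondence."""
    h1 = np.hstack([p1, np.ones((len(p1), 1))])
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    A = np.einsum('ij,ik->ijk', h2, h1).reshape(len(p1), 9)  # one row per match
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # least-squares null vector
    U, s, Vt = np.linalg.svd(F)                 # enforce the rank-2 constraint
    return U @ np.diag([s[0], s[1], 0.0]) @ Vt

# Self-check on synthetic noiseless projections of random 3D points.
rng = np.random.default_rng(4)
X1 = rng.uniform(-1, 1, size=(12, 3)) + np.array([0, 0, 5.0])
c, s_ = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1.0]])   # small yaw (invented)
t = np.array([0.3, 0.0, 0.1])
X2 = X1 @ R.T + t
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
p1 = X1 @ K.T; p1 = p1[:, :2] / p1[:, 2:]
p2 = X2 @ K.T; p2 = p2[:, :2] / p2[:, 2:]
F = eight_point(p1, p2)
h1 = np.hstack([p1, np.ones((12, 1))]); h2 = np.hstack([p2, np.ones((12, 1))])
print(np.max(np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))))  # small (~0)
```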
22

Macknojia, Rizwan. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Annotation:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one dedicated to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system provides a comprehensive calibration of the reference Kinect with the robot, which can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
23

Auvinet, Edouard. „Analyse d’information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l’analyse de la marche“. Thesis, Rennes 2, 2012. http://hdl.handle.net/1866/9770.

Annotation:
This thesis is concerned with defining new clinical investigation methods to assess the impact of ageing on motor function. In particular, it focuses on two main possible disturbances that come with ageing: falls and gait impairment. These two motor disturbances remain poorly understood, and their clinical analysis raises real scientific and technological challenges. In this thesis, we propose novel measurement methods usable in everyday life or in the clinic, with a minimum of technical constraints. In the first part, we address the problem of fall detection at home, which has been widely discussed in previous years. In particular, we propose an approach that exploits the subject's volume, reconstructed from multiple calibrated cameras. Such methods are generally very sensitive to the occlusions that inevitably occur in the home, and we therefore propose an original approach that is much more robust to these occlusions. Its efficiency and real-time operation have been validated on more than two dozen videos of falls and lures, with results approaching 100% sensitivity and specificity when using four or more cameras. In the second part, we go a little further in the exploitation of the reconstructed volumes of a person during a particular motor task, treadmill walking, in a clinical diagnostic setting. In this part, we analyze more specifically the quality of gait. For this we develop the concept of using depth cameras for the quantification of the spatial and temporal asymmetry of lower limb movement during walking. After detecting each step in time, this method compares the surfaces of each leg with the corresponding symmetric leg in the opposite step. The validation performed on a cohort of 20 subjects shows the viability of the approach.
Completed under joint supervision (cotutelle) with the M2S laboratory of Rennes 2.
24

Mhiri, Rawia. „Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones“. Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0014/document.

Annotation:
Driver assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is motion and structure estimation (Structure from Motion), which supports several tasks, including the detection of obstacles and road markings, localisation, and mapping. To estimate their movements, such systems use relatively expensive sensors. In order to market such systems on a large scale, it is necessary to develop applications with low-cost devices. In this context, vision systems are a good alternative. A new method based on 2D/2D approaches from an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at the absolute scale, focusing on estimating the scale factors. The proposed method, called the Triangle Method, is based on the use of three images forming a triangle: two images from the same camera and one image from a neighboring camera. The algorithm makes three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated. The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and presents satisfactory results for curved trajectories. To refine the initial estimation, some errors due to the inaccuracies of the scale estimation are reduced by an optimization method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The presented approach is validated on sequences of real road scenes and evaluated with respect to ground truth obtained through a differential GPS. Finally, another fundamental application in the fields of driver assistance and automated driving is road and obstacle detection. A method is presented for an asynchronous system based on sparse disparity maps.
25

Castanheiro, Letícia Ferrari. „Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /“. Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.

Annotation:
Advisor: Antonio Maria Garcia Tommaselli
Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can produce a lightweight, small and low-cost omnidirectional system (360° FOV), e.g. Ricoh Theta S and GoPro Fusion. However, only a few techniques are presented in the literature to calibrate a dual-fisheye system. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications with this type of system are presented. The calibrating bundle adjustment was performed in CMC (calibration of multiple cameras) software, using Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. In order to evaluate the improvement gained by using points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations that are only in the hemispherical field, and (2) points in the whole image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models can provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which the both Ricoh Theta sensors were considered i... (Complete abstract: click electronic access below)
Master's
26

Howard, Shaun Michael. „Deep Learning for Sensor Fusion“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.

27

Vestin, Albin, und Gustav Strandberg. „Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms“. Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Annotation:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
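The causal-versus-non-causal comparison at the heart of the thesis can be sketched on a toy 1D constant-velocity target: a Kalman filter runs forward over the recorded measurements, and a Rauch-Tung-Striebel smoother then runs backward over the stored filter states. All matrices and noise levels below are illustrative assumptions:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.25]])                  # measurement noise (assumed)

def kalman_filter(zs, x, P):
    """Forward causal pass; also stores predictions for the smoother."""
    xs, Ps, xps, Pps = [], [], [], []
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                 # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)  # gain
        x = xp + K @ (z - H @ xp)                       # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return xs, Ps, xps, Pps

def rts_smoother(xs, Ps, xps, Pps):
    """Backward non-causal pass over the recorded filter states."""
    xss, Pss = [xs[-1]], [Ps[-1]]
    for k in range(len(xs) - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xss.insert(0, xs[k] + C @ (xss[0] - xps[k + 1]))
        Pss.insert(0, Ps[k] + C @ (Pss[0] - Pps[k + 1]) @ C.T)
    return xss, Pss

rng = np.random.default_rng(2)
truth = np.array([[dt * k, 1.0] for k in range(100)])   # steady 1 m/s target
zs = truth[:, :1] + rng.normal(0.0, 0.5, size=(100, 1))
xs, Ps, xps, Pps = kalman_filter(zs, np.zeros(2), np.eye(2))
xss, _ = rts_smoother(xs, Ps, xps, Pps)
mse = lambda est: np.mean([(x[0] - t[0]) ** 2 for x, t in zip(est, truth)])
print(mse(xs), mse(xss))   # the smoothed error is typically the lower one
```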
28

Bélanger, Lucie. „Calibration de systèmes de caméras et projecteurs dans des applications de création multimédia“. Thesis, 2009. http://hdl.handle.net/1866/3864.

Annotation:
This thesis focuses on computer vision applications for technological art projects. Camera and projector calibration is discussed in the context of tracking applications and 3D reconstruction in visual arts and performance art. The thesis is based on two collaborations with Québécois artists Daniel Danis and Nicolas Reeves. Projective geometry and classical camera calibration techniques, such as planar calibration and calibration from epipolar geometry, are detailed to introduce the techniques implemented in both artistic projects. The project realized in collaboration with Nicolas Reeves consists of calibrating a pan-tilt camera-projector system in order to adapt videos to be projected in real time on mobile cubic screens. To fulfil the project, we used classical camera calibration techniques combined with our proposed camera pose calibration technique for pan-tilt systems. This technique uses elliptic planes, generated by the observation of a point in the scene while the camera is panning, to compute the camera pose in relation to the rotation centre of the pan-tilt system. The project developed in collaboration with Daniel Danis is based on multi-camera calibration. For this studio theatre project, we developed a multi-camera calibration algorithm to be used with a wiimote network. The technique, based on epipolar geometry, allows 3D reconstruction of a trajectory in a large environment at a low cost. The results obtained from the camera calibration techniques implemented are presented alongside their application in real public performance contexts.
29

Kim, Jae-Hak. „Camera Motion Estimation for Multi-Camera Systems“. PhD thesis, 2008. http://hdl.handle.net/1885/49364.

Annotation:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. ...
APA, Harvard, Vancouver, ISO, and other citation styles
30

Chen, Guan-Ting, and 陳冠廷. „Bandwidth Expansion in Camera Communication Systems with Multi-camera Receiver“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/73241495685978357225.

The full text of the source
Annotation:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 103 (2014–15)
This thesis proposes a system that improves the throughput of a camera-based visible light communication (VLC) system by using two or more Complementary Metal-Oxide-Semiconductor (CMOS) rolling-shutter cameras instead of one. VLC is a new data transmission technology which transmits digital information over an optical signal by controlling an LED's blinking frequency. In a single-camera system, the highest usable frequency of the transmitted signal is limited by the Nyquist rate, determined by the read-out duration of the rolling-shutter mechanism. In this work, we lift this limitation by using two or more cameras, enabling the use of signal frequencies higher than the Nyquist rate. This allows us to use a larger number of frequencies, i.e., a higher modulation order, and improves the system throughput. Potentially, the developed technique can be used in advanced driver assistance systems (ADAS), indoor positioning, and augmented reality systems using VLC. In our proposed system, we use rolling-shutter cameras with different image sensors as the receiver. Because the cameras have different capture rates, the highest frequency each camera can determine also differs. When the blinking frequency exceeds a camera's Nyquist frequency, it is misjudged (aliased) as a lower frequency by that camera. In this thesis, we propose a scheme to recover the correct high frequency from the different aliased low-frequency values reported by each camera. To evaluate the feasibility of this scheme, we use software-defined radio (SDR) to implement the transmitter and off-the-shelf experimental cameras as the receiver. We believe this technology can be generalized and used for a wide range of applications in the future.
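The core idea lends itself to a toy illustration: each camera folds a blink frequency above its Nyquist limit to a different low frequency, so the true frequency is the candidate consistent with every camera's observation. This is an assumed reconstruction of the principle, not the thesis' algorithm; the rates, search step, and tolerance below are made up.

```python
import numpy as np

def alias(f, fs):
    """Apparent frequency when a tone at f is sampled at rate fs (folding)."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

def recover_frequency(obs1, fs1, obs2, fs2, f_max, tol=1.0):
    """All candidate true frequencies up to f_max matching both cameras."""
    return [f for f in np.arange(0.0, f_max, 0.5)
            if abs(alias(f, fs1) - obs1) < tol
            and abs(alias(f, fs2) - obs2) < tol]

# Example: a 4.2 kHz blink seen by rolling shutters sampling rows at
# effective rates of 3.0 kHz and 2.2 kHz.
fs1, fs2, f_true = 3000.0, 2200.0, 4200.0
print(recover_frequency(alias(f_true, fs1), fs1,
                        alias(f_true, fs2), fs2, f_max=6000.0))
# 4200.0 appears among a short list of candidates; a third camera or a
# coarser frequency grid at the transmitter would disambiguate further.
```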
APA, Harvard, Vancouver, ISO, and other citation styles
31

Chen, Chung Hao. „Automated Surveillance Systems with Multi-Camera and Robotic Platforms“. 2009. http://trace.tennessee.edu/utk_graddiss/20.

The full text of the source
Annotation:
This dissertation addresses automated surveillance systems, focusing on four topics: (1) spatial mappings between omnidirectional and PTZ cameras, and between pairs of PTZ cameras; (2) a target-hopping application for dual-camera systems; (3) camera handoff and placement; and (4) a mobile tracking platform. These four topics represent the four contributions of this dissertation. Dual-camera systems have been widely used in surveillance because of the ability to exploit the wide field of view (FOV) of the omnidirectional camera and the wide zoom range of the PTZ camera. Most existing algorithms require a priori knowledge of the projection models of omnidirectional and PTZ cameras to solve the spatial mapping between any two cameras. The proposed methods not only improve the mapping accuracy by reducing the dependence on knowledge of the projection model but also improve flexibility in adjusting to varying system configurations. The omnidirectional camera is capable of multi-object tracking, while the PTZ camera can track only one individual target at a time to maintain the required resolution. It therefore becomes necessary for the PTZ camera to distribute its observation time among multiple objects and visit them in sequence. In comparison with the sequential visiting and nearest-neighbor methods, the proposed adaptive algorithm requires less computational and visiting time. Tracking with multiple cameras is mainly a consistent-labeling, or camera handoff, problem. An automatic calibration procedure combined with the Wilcoxon signed-rank test is proposed to solve the consistent-labeling problem. Meanwhile, we introduce an additional constraint to search for the cameras' optimal overlapping fields of view (FOVs) and a resource management approach to improve camera handoff performance. Experiments show that our proposed camera handoff and placement methods outperform existing approaches. However, in the majority of surveillance systems the cameras are stationary, and such systems often require the desired object to stay within the surveillance range of the system. The robotic platform we propose therefore uses a visual camera to sense the movement of the desired object and a range sensor to help the robot detect, and then avoid, obstacles in real time while continuing to track and follow the desired object. Experiments show that this robotic and intelligent system can fulfill the requirements of tracking an object and avoiding obstacles simultaneously when the object moves at a speed of 4 km/h.
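To make the first topic concrete, here is a toy version of an omnidirectional-to-PTZ spatial mapping. It assumes an equiangular fisheye projection purely for illustration; the dissertation's point is precisely to reduce dependence on such a priori projection models.

```python
import numpy as np

def fisheye_pixel_to_pan_tilt(u, v, cx, cy, pixels_per_degree):
    """Map a pixel in an equiangular fisheye image to pan/tilt angles (deg)."""
    dx, dy = u - cx, v - cy
    pan = np.degrees(np.arctan2(dy, dx))         # azimuth around image centre
    tilt = np.hypot(dx, dy) / pixels_per_degree  # radius maps to elevation
    return pan, tilt
```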
APA, Harvard, Vancouver, ISO, and other citation styles
32

Che-Yung Hung and 洪哲詠. „Camera-assisted Calibration Techniques for Merging Multi-projector Systems“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/97628708470738795905.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
33

Chen, Wei-Jen, and 陳威任. „Video Recording Scheduling Algorithms for Real Time Multi-Camera Surveillance Systems“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27932006256583392594.

The full text of the source
Annotation:
Master's thesis
Fu Jen Catholic University
Master's Program, Department of Electrical Engineering
Academic year 100 (2011–12)
In a multi-channel video surveillance system, due to limits on storage capacity and processing speed, only a subset of frames from the multiple channels can be recorded. Since each channel may have a different recording frame rate, it is required that the recorded frames from the same channel have equal temporal distance between any two consecutive frames. We define two cost functions to evaluate scheduling quality. The first is to minimize the summation of the distance jitter of all channels while meeting the real-time requirement. The second is to minimize the same cost while the distance jitter of any channel must not exceed a given bound. These two problems are formulated as zero-one integer linear programming problems. For resource-constrained embedded systems, we propose several scheduling algorithms to obtain solutions efficiently. The proposed algorithms were implemented in the C language, and experimental results compare the scheduling quality of the different algorithms.
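The thesis formulates the problem exactly as zero-one integer linear programming; the sketch below is only an illustrative greedy heuristic showing the jitter cost being evaluated, with a capacity of one recorded frame per time slot assumed.

```python
def schedule(horizon, frames_per_channel):
    """Assign each requested frame to a free slot near its ideal time.
    Assumes sum(frames_per_channel) <= horizon (one frame per slot)."""
    requests = []
    for ch, n in enumerate(frames_per_channel):
        period = horizon / n
        requests += [(k * period, ch) for k in range(n)]
    requests.sort()
    free, slots = set(range(horizon)), {}
    for ideal, ch in requests:
        slot = min(free, key=lambda s: abs(s - ideal))  # nearest free slot
        free.remove(slot)
        slots.setdefault(ch, []).append(slot)
    return slots

def total_jitter(slots, horizon, frames_per_channel):
    """Sum over channels of |actual gap - ideal gap| between frames."""
    cost = 0.0
    for ch, times in slots.items():
        period = horizon / frames_per_channel[ch]
        times = sorted(times)
        cost += sum(abs((b - a) - period) for a, b in zip(times, times[1:]))
    return cost

s = schedule(horizon=30, frames_per_channel=[3, 5, 10])
print(total_jitter(s, 30, [3, 5, 10]))
```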
APA, Harvard, Vancouver, ISO, and other citation styles
34

Schacter, David. „Multi-Camera Active-vision System Reconfiguration for Deformable Object Motion Capture“. Thesis, 2014. http://hdl.handle.net/1807/44060.

The full text of the source
Annotation:
To improve the accuracy in capturing the motion of deformable objects, a reconfigurable multi-camera active-vision system which can dynamically reposition its cameras online is proposed, and a design for such a system, along with a methodology to select the near-optimal positions and orientations of the set of cameras, is presented. The active-vision system accounts for the deformation of the object-of-interest by tracking triangulated vertices in order to predict the shape of the object at subsequent demand instants. It then selects a system configuration that minimizes the expected error in the recovered position of each of these vertices. Extensive simulations and experiments have verified that using the proposed reconfigurable system to both translate and rotate cameras to near-optimal poses is tangibly superior to using cameras which are either static, or can only rotate, in minimizing the error in recovered vertex positions.
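One hedged way to picture the pose-selection step: wide triangulation angles reduce the error in recovered vertex positions, so a crude scoring of candidate camera pairs can prefer viewing rays near 90 degrees. This is a toy criterion under assumed inputs, not the thesis' expected-error objective.

```python
import numpy as np
from itertools import combinations

def pair_score(cam_a, cam_b, vertices):
    """Negative mean deviation of triangulation angles from 90 degrees."""
    devs = []
    for v in vertices:
        ra = (v - cam_a) / np.linalg.norm(v - cam_a)
        rb = (v - cam_b) / np.linalg.norm(v - cam_b)
        ang = np.arccos(np.clip(ra @ rb, -1.0, 1.0))
        devs.append(abs(ang - np.pi / 2))  # right angles triangulate best
    return -np.mean(devs)

def best_pair(candidate_positions, predicted_vertices):
    """Choose the two candidate camera positions with the best score."""
    return max(combinations(candidate_positions, 2),
               key=lambda p: pair_score(p[0], p[1], predicted_vertices))
```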
APA, Harvard, Vancouver, ISO, and other citation styles
35

Ren, You-Lin, and 任宥霖. „The Integration of Coordinate Systems from Multi-View Camera Groups for Shape-From-Silhouette Technique“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/r55jgt.

The full text of the source
Annotation:
Master's thesis
National Central University
Department of Mechanical Engineering
Academic year 106 (2017–18)
This study develops a process for integrating the coordinate systems of multi-view camera groups for the shape-from-silhouette (SFS) technique. Popular 3D modeling techniques based on the SFS method usually obtain the geometry and color information of an object by means of a rotary table. However, the rotary table rotates about only one axis, which limits the available shooting angles, especially for the top/bottom views. In the SFS method, this limitation leads to artifacts in the generated 3D model at the top/bottom. If the object is tipped over, repositioned on the rotary table, and re-imaged, the missing top/bottom information of the 3D model can be recovered. In order to integrate the silhouette data taken from different views into a single coordinate system, this study develops an alignment-by-image-matching (AIM) algorithm to establish the spatial distribution of all camera positions. In this algorithm, the silhouette data obtained in the tipped positions are set as targets. The 3D model is transformed into a predicted position to simulate one of the tipped positions, and its shape is projected onto the imaging plane of the camera to obtain the predicted silhouette data as a subject. This subject silhouette is then compared with the corresponding target. The AIM algorithm minimizes the difference between these two data sets and calculates the corresponding translation and rotation that the subject needs in 3D space. When the sum of differences over all tipped positions is minimal, all camera positions (in auxiliary views) can be integrated into the coordinate system of the primary view. A complete 3D model can then be rebuilt by the SFS method from the silhouette data of all views. Finally, this study demonstrates three examples rebuilt by the developed integration process to verify the proposed approach.
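A simplified 2D sketch of the alignment-by-image-matching idea follows: find the rigid transform of the predicted silhouette that minimizes its pixel-wise mismatch with the target silhouette. The single-view setup, binary-mask inputs, and optimizer choice are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def silhouette_mismatch(params, subject, target):
    """params = (dx, dy, angle_deg); count of disagreeing silhouette pixels."""
    dx, dy, ang = params
    moved = rotate(subject.astype(float), ang, reshape=False, order=1)
    moved = shift(moved, (dy, dx), order=1) > 0.5
    return np.logical_xor(moved, target).sum()

def align(subject, target):
    # Nelder-Mead, since the pixel-count objective is non-smooth.
    res = minimize(silhouette_mismatch, x0=np.zeros(3),
                   args=(subject, target), method="Nelder-Mead")
    return res.x  # (dx, dy, angle) minimizing the silhouette difference
```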
APA, Harvard, Vancouver, ISO, and other citation styles
36

Lu, Ming-Kun, and 呂鳴崑. „Multi-Camera Vision-based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Human Computer Interactive Systems“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vgss3p.

The full text of the source
Annotation:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (2011–12)
Nowadays, multi-touch technology has become a popular topic. Multi-touch has been implemented in several ways, including resistive and capacitive sensing. However, because of their limitations, these implementations cannot support large screens. This thesis therefore proposes and implements multi-camera vision-based finger detection, tracking, and event identification techniques for multi-touch sensing. The proposed system detects multiple fingers pressing on an acrylic board by capturing the infrared light through four infrared cameras. The captured infrared points, which correspond to the touched points of the fingers, can serve as an input device and provide a convenient human-computer interface. The proposed system realizes multi-touch sensing with computer vision technology; compared with conventional touch technology, multi-touch technology allows users to input complex commands. The proposed multi-touch point detection algorithm identifies the touched points using bright-object segmentation techniques. The extracted bright objects are then tracked, and the trajectories of the objects are recorded. Furthermore, the system analyzes these trajectories and identifies the corresponding events pre-defined in the system. For applications, this thesis aims to provide a simple, easy-to-operate human-computer interface: users can input commands by touching and moving their fingers. The proposed system is implemented with a table-sized screen, which supports multi-user interaction.
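A minimal sketch of the bright-object segmentation and tracking steps, assuming OpenCV, a grayscale infrared frame, and illustrative thresholds; the thesis' event identification on top of the trajectories is not shown.

```python
import cv2
import numpy as np

def detect_touch_points(ir_frame, thresh=200, min_area=20):
    """Centroids of bright blobs (candidate fingertip touches)."""
    _, binary = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:  # also guarantees m00 > 0
            m = cv2.moments(c)
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

def update_tracks(tracks, points, max_dist=30.0):
    """Greedy nearest-neighbour association of detections to trajectories."""
    for p in points:
        if tracks:
            tid = min(tracks, key=lambda t: np.hypot(tracks[t][-1][0] - p[0],
                                                     tracks[t][-1][1] - p[1]))
            last = tracks[tid][-1]
            if np.hypot(last[0] - p[0], last[1] - p[1]) < max_dist:
                tracks[tid].append(p)
                continue
        tracks[len(tracks)] = [p]  # start a new trajectory
    return tracks
```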
APA, Harvard, Vancouver, ISO, and other citation styles
37

Betrabet, Siddhant S. „Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3d Lidar and Multi-Camera Setup“. Thesis, 2020. http://hdl.handle.net/1805/24776.

The full text of the source
Annotation:
Indiana University-Purdue University Indianapolis (IUPUI)
Analyzing the behaviors of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately in order to make an accurate and clear map of the trajectories of objects relative to various coordinate frame(s) of interest in the map. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects. These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and Inertial Navigation System (INS) rigs. One relatively common approach to DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU); this allows for redundancies that maintain object classification and tracking through sensor fusion in cases where sensor-specific traditional algorithms prove ineffectual because one sensor falls short due to its limitations. The use of the IMU and sensor fusion methods largely eliminates the need for an expensive INS rig. Fusion of these sensors allows for more effective tracking, exploiting the maximum potential of each sensor while allowing for methods that increase perceptual accuracy. The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world. Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data in order to understand the behaviors of the e-scooters themselves. In this thesis, we explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras, and an IMU on an e-scooter, as well as an offline method for processing the data to aid scenario reconstruction.
APA, Harvard, Vancouver, ISO, and other citation styles
38

(9708467), Siddhant Srinath Betrabet. „Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D LIDAR and Multi-Camera Setup“. Thesis, 2021.

Find the full text of the source
Annotation:

Analyzing the behaviors of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately in order to make an accurate and clear map of the trajectories of objects relative to various coordinate frame(s) of interest in the map. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects.

These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and Inertial Navigation System (INS) rigs. One relatively common approach to DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU); this allows for redundancies that maintain object classification and tracking through sensor fusion in cases where sensor-specific traditional algorithms prove ineffectual because one sensor falls short due to its limitations. The use of the IMU and sensor fusion methods largely eliminates the need for an expensive INS rig. Fusion of these sensors allows for more effective tracking, exploiting the maximum potential of each sensor while allowing for methods that increase perceptual accuracy.

The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world. Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data in order to understand the behaviors of the e-scooters themselves. In this thesis, we explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras, and an IMU on an e-scooter, as well as an offline method for processing the data to aid scenario reconstruction.


APA, Harvard, Vancouver, ISO, and other citation styles
39

Parnian, Neda. „Integration of Local Positioning System & Strapdown Inertial Navigation System for Hand-Held Tool Tracking“. Thesis, 2008. http://hdl.handle.net/10012/4043.

The full text of the source
Annotation:
This research concerns the development of a smart sensory system for tracking a hand-held moving device to millimeter accuracy, for slow or nearly static applications over extended periods of time. Since different operators in different applications may use the system, the proposed design should provide the accurate position, orientation, and velocity of the object without relying on knowledge of its operation and environment, based purely on the motion that the object experiences. This thesis proposes the integration of a low-cost Local Positioning System (LPS) and a low-cost Strapdown Inertial Navigation System (SDINS), in association with a modified EKF, to determine the 3D position and 3D orientation of a hand-held tool within the required accuracy. A hybrid LPS/SDINS combines and complements the best features of two different navigation systems, providing a unique solution to track and localize a moving object more precisely. SDINS provides continuous estimates of all components of a motion, but it loses accuracy over time because of inertial sensor drift and inherent noise. LPS has the advantage that it can obtain absolute position and velocity independent of operation time; however, it is not highly robust, is computationally expensive, and exhibits a low measurement rate. This research consists of three major parts: developing a multi-camera vision system as a reliable and cost-effective LPS, developing a SDINS for a hand-held tool, and developing a Kalman filter for sensor fusion. Developing the multi-camera vision system includes mounting the cameras around the workspace, calibrating the cameras, capturing images, applying image processing algorithms and feature extraction to every frame from each camera, and estimating 3D position from 2D images. In this research, a specific configuration for setting up the multi-camera vision system is proposed to reduce the loss of line of sight as much as possible. The number of cameras, the position of the cameras with respect to each other, and the position and orientation of the cameras with respect to the centre of the world coordinate system are the crucial characteristics of this configuration. The proposed multi-camera vision system is implemented with four CCD cameras fixed in the navigation frame, with their lenses placed on a semicircle. All cameras are connected to a PC through a frame grabber that includes four parallel video channels and is able to capture images from the four cameras simultaneously. As a result of this arrangement, a wide circular field of view is obtained with less loss of line of sight. However, calibration is more difficult than for a monocular or stereo vision system: it includes precise camera modeling, single-camera calibration for each camera, stereo calibration for each pair of neighboring cameras, defining a unique world coordinate system, and finding the transformation from each camera frame to the world coordinate system. Aside from the calibration procedure, digital image processing must be applied to the images captured by all four cameras in order to localize the tool tip; in this research this includes image enhancement, edge detection, boundary detection, and morphological operations.
After detecting the tool tip in each image captured by each camera, a triangulation procedure and an optimization algorithm are applied to find its 3D position with respect to the known navigation frame. In the SDINS, inertial sensors are mounted rigidly and directly on the body of the tracked object, and the inertial measurements are transformed computationally into the known navigation frame. Usually three gyros and three accelerometers, or a three-axis gyro and a three-axis accelerometer, are used to implement a SDINS; the inertial sensors are typically integrated in an inertial measurement unit (IMU). IMUs commonly suffer from bias drift, scale-factor error owing to non-linearity and temperature changes, and misalignment as a result of minor manufacturing defects. Since all these errors lead to SDINS drift in position and orientation, a precise calibration procedure is required to compensate for them. The precision of the SDINS depends not only on the accuracy of the calibration parameters but also on the common motion-dependent errors, i.e., those caused by vibration, coning motion, sculling, and rotational motion. Since inertial sensors provide the full range of heading changes, turn rates, and applied forces that the object experiences along its movement, accurate 3D kinematics equations are developed to compensate for the common motion-dependent errors. Therefore, obtaining complete knowledge of the motion and orientation of the tool tip involves significant computational complexity and challenges relating to the resolution of specific forces, attitude computation, gravity compensation, and corrections for common motion-dependent errors. The Kalman filter is a powerful method for improving the output estimation and reducing the effect of sensor drift. In this research, a modified EKF is proposed to reduce the position estimation error: the proposed multi-camera vision system data, in cooperation with the modified EKF, assist the SDINS in dealing with the drift problem. This configuration guarantees real-time position and orientation tracking of the instrument. As a result of the proposed Kalman filter, the effect of the gravitational force in the state-space model is removed and the error resulting from an inaccurate gravitational force is eliminated; in addition, the resulting position is smooth and ripple-free. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. If the sampling rate of the vision system decreases from 20 fps to 5 fps, the errors are still acceptable for many applications.
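A drastically simplified, one-dimensional sketch of the fusion idea: the inertial side integrates acceleration and drifts, while sparse vision position fixes correct the drift through a Kalman update. The thesis' modified EKF additionally handles orientation, gravity compensation, and motion-dependent errors, all omitted here; the noise values are placeholders.

```python
import numpy as np

dt = 0.01                               # 100 Hz inertial propagation
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])         # acceleration input model
H = np.array([[1.0, 0.0]])              # vision measures position only
Q = 1e-4 * np.eye(2)                    # process noise (accel noise/drift)
R = np.array([[1e-2]])                  # vision measurement noise

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```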
APA, Harvard, Vancouver, ISO, and other citation styles
40

„The Effects of a Multi-View Camera System on Spatial Cognition, Cognitive Workload and Performance in a Minimally Invasive Surgery Task“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53914.

The full text of the source
Annotation:
Minimally invasive surgery is a surgical technique known for its reduced patient recovery time. It is performed using long-reach tools and an endoscopic camera to operate on the body through small incisions made near the point of operation, while viewing the live camera feed on a nearby display screen. Multiple camera views are used in various industries, such as surveillance and professional gaming, to give users a spatial-awareness advantage regarding what is happening in the 3D space presented to them on 2D displays. The concept has not yet effectively broken into the medical industry. This thesis tests a multi-view camera system in which three cameras are inserted into a laparoscopic surgical training box along with two surgical instruments, to determine the system's impact on spatial cognition, perceived cognitive workload, and the overall time needed to complete the task, compared to one camera viewing the traditional setup. The task is a non-medical one, one of five typically used to train surgeons' motor skills when they are first learning minimally invasive surgical procedures. The task is a peg transfer and was conducted by 30 people randomly assigned to one of two conditions: one display or three displays. The results indicated that with three displays the overall time initially needed to complete the task was longer, the task was perceived to be completed more easily and with less strain, and participants had a slightly higher performance rate.
Dissertation/Thesis
Master's Thesis, Human Systems Engineering, 2019
APA, Harvard, Vancouver, ISO, and other citation styles
41

(8781872), Yaan Zhang. „Improvement of Structured Light Systems Using Computer Vision Techniques“. Thesis, 2020.

Find the full text of the source
Annotation:

In this thesis work, we propose computer vision techniques for 3D reconstruction and object height measurement using a single camera and multiple laser emitters whose projections intersect on the image plane. Time-division and color-division methods are first investigated for our structured light system. Although the color-division method offers better accuracy for object height measurement, it requires laser emitters equipped with different colored lights; furthermore, it is sensitive to light exposure in the measurement environment. Next, a new multi-level random sample consensus (MLRANSAC) algorithm is developed. The proposed MLRANSAC method not only offers high accuracy for object height measurement but also eliminates the requirement for laser emitters of different colors. Our experimental results have validated the effectiveness of the MLRANSAC algorithm.
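For background, a generic single-level RANSAC line fit is sketched below, of the kind MLRANSAC builds on; the multi-level structure of the proposed algorithm is not reproduced here. The points would be thresholded laser-stripe pixels.

```python
import numpy as np

def ransac_line(points, iters=500, inlier_tol=1.5, seed=0):
    """Robustly fit a 2D line to an Nx2 array; returns the inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        n = np.array([-d[1], d[0]])       # line normal
        if np.linalg.norm(n) == 0:
            continue                      # degenerate sample
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p1) @ n)  # point-to-line distances
        inliers = dist < inlier_tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```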

APA, Harvard, Vancouver, ISO, and other citation styles
42

Mauricio, Emanuel Adelino Ferreira. „Localização de capsula endoscópica utilizando informação visual“. Master's thesis, 2018. http://hdl.handle.net/10316/86666.

The full text of the source
Annotation:
Integrated Master's dissertation in Electrical and Computer Engineering, presented to the Faculty of Sciences and Technology
The information extracted from the images captured by an endoscopic capsule is only as useful as the ability to locate the portion of intestine being imaged. The objective of this dissertation is the relative pose estimation of a multi-camera endoscopic capsule with a 360-degree panoramic field of view, using the captured images and geometric methods. We used the CapsoCam SV2 from CapsoVision, which has four radially arranged cameras, as the object of study. The generalized camera model and the generalized epipolar constraint (GEC) were used to determine the relative motion of the capsule between frames. To compute a solution to the GEC we used the 17-point algorithm, Kneip's iterative solution, and the 17-point RANSAC algorithm from the OpenGV library. The integration of the successive relative poses and the 3D reconstruction yield a 3D model of the gastrointestinal tract and the path taken by the device; in this way we can determine where each photograph was taken, adding an extra dimension to the medical exam. A substantial part of this work was dedicated to the development of two different simulators of a multi-camera system analogous to the CapsoCam SV2. The first simulator performs a simple projection of 3D points, defined in a global coordinate system, onto the normalized image plane of each camera; the simulated image thus consists of 2D points whose correspondence between frames is known a priori, so the relative pose estimation algorithms can be applied directly without working with RGB images. The second simulator generates RGB images using MATLAB's rendering capabilities, bringing the simulated data closer to the real data obtained with the capsule. The configuration of the visual system can easily be changed in both simulators: cameras can be added or removed, and the intrinsic and extrinsic parameters of each camera can be modified. A detailed analysis of the images captured by the CapsoCam SV2 was also within the scope of this dissertation, covering feature density, feature matching quality, image redundancy, and correspondence continuity. Several laboratory tests were undertaken to calibrate each individual camera by photographing checkerboard patterns of known dimensions. The intrinsic parameters of each camera were estimated; however, the estimated relative poses exhibited very large reprojection errors, which prevented the objective from being met. Nevertheless, the image analysis and the simulators are a substantial contribution towards estimating the pose with acceptable accuracy in the near future.
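For reference, the generalized epipolar constraint that the 17-point and Kneip solvers operate on can be written per correspondence of Plücker rays (in Pless' formulation). The sketch below only evaluates the residual; building the rays from the capsule's four calibrated cameras is assumed done elsewhere.

```python
import numpy as np

def plucker_ray(cam_center, direction):
    d = direction / np.linalg.norm(direction)
    return d, np.cross(cam_center, d)            # (direction, moment)

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def gec_residual(ray1, ray2, R, t):
    """Residual of q2.T [t]x R q1 + q2.T R m1 + m2.T R q1 = 0."""
    q1, m1 = ray1
    q2, m2 = ray2
    return q2 @ skew(t) @ R @ q1 + q2 @ R @ m1 + m2 @ R @ q1
```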
APA, Harvard, Vancouver, ISO, and other citation styles
43

(6843914), Radhika Ravi. „Interactive Environment For The Calibration And Visualization Of Multi-sensor Mobile Mapping Systems“. Thesis, 2019.

Find the full text of the source
Annotation:
LiDAR units onboard airborne and terrestrial platforms have been established as a proven technology for the acquisition of dense point clouds for a wide range of applications, such as digital building model generation, transportation corridor monitoring, precision agriculture, and infrastructure monitoring. Furthermore, integrating such systems with one or more cameras would allow forward and backward projection between imagery and LiDAR data, thus facilitating several high-level data processing activities such as reliable feature extraction and colorization of point clouds. However, the attainment of the full 3D point positioning potential of such systems is contingent on an accurate calibration of the mobile mapping unit as a whole.
This research aims at proposing a calibration procedure for terrestrial multi-unit LiDAR systems to directly estimate the mounting parameters relating several spinning multi-beam laser scanners to the onboard GNSS/INS unit in order to derive point clouds with high positional accuracy. To ensure the accuracy of the estimated mounting parameters, an optimal configuration of target primitives and drive-runs is determined by analyzing the potential impact of bias in mounting parameters of a LiDAR unit on the resultant point cloud for different orientations of target primitives and different drive-run scenarios. This impact is also verified experimentally by simulating a bias in each mounting parameter separately. Next, the optimal configuration is used within an experimental setup to evaluate the performance of the proposed calibration procedure. Then, this proposed multi-unit LiDAR system calibration strategy is extended for multi-LiDAR multi-camera systems in order to allow a simultaneous estimation of the mounting parameters relating the different laser scanners as well as cameras to the onboard GNSS/INS unit. Such a calibration improves the registration accuracy of point clouds derived from LiDAR data and imagery, along with their accuracy with respect to the ground truth. Finally, in order to qualitatively evaluate the calibration results for a generic mobile mapping system and allow the visualization of point clouds, imagery data, and their registration quality, an interface denoted as Image-LiDAR Interactive Visualization Environment (I-LIVE) is developed. Apart from its visualization functions (such as 3D point cloud manipulation and image display/navigation), I-LIVE mainly serves as a tool for the quality control of GNSS/INS-derived trajectory and LiDAR-camera system calibration.
The proposed multi-sensor system calibration procedures are experimentally evaluated by calibrating several mobile mapping platforms with varying numbers of LiDAR units and cameras. In all cases, the system calibration is seen to attain accuracies better than those expected from the specifications of the involved hardware components, i.e., the LiDAR units, cameras, and GNSS/INS units.
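The mounting parameters being estimated enter through the standard georeferencing equation; the sketch below shows that equation only, with all rotations as 3x3 matrices and the GNSS/INS pose assumed interpolated to each point's timestamp.

```python
import numpy as np

def georeference(p_scanner, R_mount, t_mount, R_ins, t_ins):
    """p_map = R_ins (R_mount p_scanner + t_mount) + t_ins."""
    p_body = R_mount @ p_scanner + t_mount  # scanner frame -> IMU body frame
    return R_ins @ p_body + t_ins           # body frame -> mapping frame
```

A bias in (R_mount, t_mount) displaces every derived point, which is why comparing overlapping target primitives across drive-runs, as described above, exposes and resolves such biases.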
APA, Harvard, Vancouver, ISO, and other citation styles
44

Rizwan, Macknojia. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thèse, 2013. http://hdl.handle.net/10393/23976.

The full text of the source
Annotation:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect depth sensor's capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data, and also analyzes the effects of the color and reflectance characteristics of an object. It examines two versions of the Kinect sensor: one designed to operate with the Xbox 360 video game console, and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of depth measurement accuracy permitted by the Kinect technology. The method calibrates all Kinect units with respect to a reference Kinect. The internal calibration of each sensor between its color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, allowing seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines a comprehensive calibration of the reference Kinect with the robot, which can then interact under visual guidance with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
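For the registration step of expressing each Kinect in the reference Kinect's frame, a minimal rigid-alignment (Kabsch) sketch from matched 3D points is shown below; the thesis' method additionally exploits the sensor's own rapid 3D measurements, which this sketch does not capture.

```python
import numpy as np

def kabsch(src, dst):
    """R, t minimizing ||R @ src_i + t - dst_i|| over matched Nx3 points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no mirror
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c
```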
APA, Harvard, Vancouver, ISO, and other citation styles
45

Pal, Madhumita. „Accurate and Efficient Algorithms for Star Sensor Based Micro-Satellite Attitude and Attitude Rate Estimation“. Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3428.

The full text of the source
Annotation:
This dissertation addresses novel techniques for determining gyroless micro-satellite attitude and attitude rate. The main objective of this thesis is to explore the possibility of using a commercially available, low-cost, micro-light star sensor as a stand-alone sensor for micro-satellite attitude as well as attitude rate determination. The objective is achieved by developing accurate and computationally efficient algorithms for the realization of onboard operation of a low-fidelity star sensor. All the algorithms developed here are tested with the measurement noise presented in the catalog of the sensor array STAR-1000. A novel, accurate second-order sliding mode observer (SOSMO) is designed for discrete-time uncertain linear multi-output systems. Our design procedure is effective for both matched and unmatched bounded uncertainties and/or disturbances, whose bound is assumed to be unknown. This problem is addressed using the second-order multiple sliding modes approach. The second-order sliding manifold and the corresponding sliding condition for discrete-time systems are defined along the lines of their continuous counterparts. Our design is not restricted to a particular class of uncertain (matched) discrete-time systems; moreover, it can handle multiple outputs, unlike single-output designs. The observer design is achieved by driving the state observation error and its first-order finite difference to the vicinity of the equilibrium point (0,0) in a finite number of steps and maintaining them in that neighborhood thereafter. The estimation synthesis is based on the Quasi Sliding Mode (QSM) design. Designing a sliding mode observer for a linear system subjected to unknown inputs requires an observer matching condition, which ensures that the state estimation error is asymptotically stable and independent of the unknown input during the sliding motion. In the absence of a matching condition, asymptotic stability of the reduced-order error dynamics on the sliding surface is not guaranteed; however, unknown bounded inputs still guarantee a bounded state estimation error. The QSM design guarantees an ultimate error bound by incorporating a Boundary Layer (BL) in its design procedure. The observer achieves one order of magnitude improvement in estimation accuracy over the conventional sliding mode observer (SMO) design for an unknown input. The observer estimation errors, satisfying the given stability conditions, converge to an ultimate finite bound (within the specified BL) of O(T²), where T is the sampling period. A relation between the sliding mode gain and the boundary layer is established for the existence of second-order discrete sliding motion. The robustness of the proposed observer with respect to measurement noise is also analyzed. The design algorithm is simple to apply and is implemented for two examples with different classes of disturbances (matched and unmatched) to show the effectiveness of the design. Simulation results show the robustness of the SOSMO with respect to measurement noise. The second-order sliding mode observer gain can be calculated off-line, and the same gain works for a large band of disturbances as long as the disturbance acting on the continuous-time system is bounded and smooth. The SOSMO is simpler to implement on board than other traditional nonlinear filters such as the Pseudo-Linear Kalman Filter (PLKF) and the Extended Kalman Filter (EKF).
Moreover, the SMO possesses an automatic adaptation property, like an optimal state estimator (such as the Kalman filter), with respect to the intensity of the measurement noise: it rejects noisy measurements automatically in response to increased noise intensity. The dynamic performance of the observer on the sliding surface can be altered, and no knowledge of the noise statistics is required. It is shown that the SOSMO performs more accurately than the PLKF in micro-satellite angular rate estimation, since the PLKF is not an optimal filter. A new method for estimating satellite angular rates through a derivative approach is proposed. The method is based on the optic flow of star image patterns formed on a star sensor: the satellite angular rates are derived directly from the 2D coordinates of the star images. Our algorithm is computationally efficient and requires less memory than existing vector derivative approaches, and no star identification is needed. The angular rates are computed by a least-squares solution based on the measurement equation obtained from the optic flow of star images. These estimates are then fed into the discrete-time second-order sliding mode observer (SOSMO). The performance of the angular rate estimation by the SOSMO is compared with the discrete-time first-order SMO and the PLKF; the SOSMO gives the best estimates of the three schemes for micro-satellite angular rates in all three axes. The improvement in accuracy is one order of magnitude (around 1.7984 × 10⁻⁵ rad/s, 8.9987 × 10⁻⁶ rad/s, and 1.4222 × 10⁻⁵ rad/s in the three body axes, respectively) in terms of the standard deviation of the steady-state estimation error. A new method and algorithm are presented to determine the star camera parameters along with the satellite attitude with high precision, even if these parameters change during long on-orbit operation. Star camera parameters and attitude need to be determined independently of each other, as both can change. An efficient, closed-form solution method is developed to estimate the star camera parameters (focal length, principal point offset), lens distortions (radial distortion), and attitude. The method is a two-step procedure: in the first step, all parameters except lens distortion are estimated using a distortion-free camera model; in the second step, the lens distortion coefficient is estimated by linear least squares (LS), with the camera parameters derived in the first step used in a camera model that incorporates distortion. However, this method requires identification of the observed stars against the catalogue stars, and on-orbit star identification is difficult because it relies on camera calibration parameters that can change in orbit from their ground-calibrated values (detector and optical element alignment change in orbit due to solar pressure or sudden temperature changes). This difficulty is overcome by employing a camera self-calibration technique that requires only four observed stars in three consecutive image frames. The star camera parameters, along with the lens (radial and decentering) distortion coefficients, are determined by this camera self-calibration technique. Finally, a Kalman filter is used to refine the estimates obtained from the LS-based method to improve the level of accuracy.
We consider the true values of the camera parameters to be (u0, v0) = (512.75, 511.25) pixels and f = 50.5 mm, while the ground-calibrated values are (u0, v0) = (512, 512) pixels and f = 50 mm; the worst-case radial distortion coefficient affecting the star camera lens is taken as k1 = 5 × 10⁻³. Our proposed method of attitude determination achieves accuracies of around 6.2288 × 10⁻⁵ rad, 3.3712 × 10⁻⁵ rad, and 5.8205 × 10⁻⁵ rad in the attitude angles φ, θ, and ψ. Attitude estimates from existing methods in the literature diverge from the true values because they use the ground-calibrated camera parameters instead of the true ones. To summarize, we developed a formal theory of the discrete-time second-order sliding mode observer for uncertain multi-output systems. Our methods achieve the desired accuracy in estimating satellite attitude and attitude rate using low-fidelity star sensor data, with low onboard processing requirements and memory allocation, and are thus suitable for micro-satellite applications. The objective of using a low-fidelity star sensor as a stand-alone sensor in micro-satellite applications is thereby achieved.
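The rate-from-optic-flow idea admits a compact linear form: a star's unit direction u in the sensor frame obeys du/dt = -w x u = [u]x w, so stacking the measured direction changes of two or more stars between frames yields the body rate w by least squares, with no star identification. The sketch assumes directions already derived from centroided star images; it is an illustration, not the dissertation's code.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def body_rate(dirs_prev, dirs_next, dt):
    """Least-squares angular rate from star unit directions in two frames."""
    A = np.vstack([skew(u) for u in dirs_prev])
    b = np.concatenate([(un - up) / dt
                        for up, un in zip(dirs_prev, dirs_next)])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```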
APA, Harvard, Vancouver, ISO, and other citation styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography