Dissertations on the topic "Système de multi-Camera"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 46 dissertations for your research on the topic "Système de multi-Camera".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Mennillo, Laurent. „Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC022/document.
This Ph.D. thesis, which has been carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. Digital video cameras, on the other hand, are much cheaper, which has led to their increased use in newer vehicles to provide driving-assistance functions such as parking assistance or emergency braking. Furthermore, this now-common deployment makes it possible to consider their use for reconstructing the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories. The first and oldest category comprises stereo methods, which use several cameras with overlapping fields of view to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs with a short baseline, which allows dense matching of feature points to estimate disparity maps that are then used to compute the motions of the scene. The other category comprises monocular methods, which use only one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of other objects.
These methods are more difficult in that they have to address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, as well as the estimation of the relative scale of these objects before their aggregation into the static scene. The industrial motivation for this work lies in using multi-camera systems already present in production vehicles to perform dynamic scene reconstruction. Such systems, mostly composed of a front camera accompanied by several surround-view fisheye cameras in wide-baseline stereo, led to the development of a multibody reconstruction method dedicated to heterogeneous systems of this kind. The proposed method is incremental and reconstructs sparse mobile points as well as their trajectories using several geometric constraints. Finally, a quantitative and qualitative evaluation conducted on two separate datasets is provided, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems.
Petit, Benjamin. „Téléprésence, immersion et interactions pour la reconstruction 3D temps-réel“. PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00584001.
Kim, Jae-Hak. „Camera Motion Estimation for Multi-Camera Systems“. The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.
Kim, Jae-Hak. „Camera motion estimation for multi-camera systems /“. View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.
Jiang, Xiaoyan [author]. „Multi-Object Tracking-by-Detection Using Multi-Camera Systems / Xiaoyan Jiang“. München : Verlag Dr. Hut, 2016. http://d-nb.info/1084385325/34.
Krucki, Kevin C. „Person Re-identification in Multi-Camera Surveillance Systems“. University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448997579.
Hammarlund, Emil. „Target-less and targeted multi-camera color calibration“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.
Åkesson, Ulrik. „Design of a multi-camera system for object identification, localisation, and visual servoing“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.
Turesson, Eric. „Multi-camera Computer Vision for Object Tracking: A comparative study“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.
Nadella, Suman. „Multi camera stereo and tracking patient motion for SPECT scanning systems“. Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.
Der volle Inhalt der QuelleKeywords: Feature matching in multiple cameras; Multi camera stereo computation; Patient Motion Tracking; SPECT Imaging Includes bibliographical references. (p.84-88)
Knorr, Moritz [author]. „Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr“. Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.
Sankaranarayanan, Aswin C. „Robust and efficient inference of scene and object motion in multi-camera systems“. College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9855.
Der volle Inhalt der QuelleThesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Bachnak, Rafic A. „Development of a stereo-based multi-camera system for 3-D vision“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.
Knorr, Moritz [author], and C. [academic supervisor] Stiller. „Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr ; Betreuer: C. Stiller“. Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856798/34.
Esquivel, Sandro [author]. „Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel“. Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.
Lamprecht, Bernhard. „A testbed for vision based advanced driver assistance systems with special emphasis on multi-camera calibration and depth perception /“. Aachen : Shaker, 2008. http://d-nb.info/990314847/04.
Michieletto, Giulia. „Multi-Agent Systems in Smart Environments - from sensor networks to aerial platform formations“. Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427273.
Over the last two decades, advances in pervasive computing and ambient intelligence have led to the rapid development of smart environments, in which multiple cyber-physical systems are called upon to interact in order to improve human life. The effectiveness of a smart environment therefore rests on the collaboration of several entities required to deliver high-level performance in real time. From this perspective, the role of multi-agent systems is evident, thanks to the ability of these architectures, which involve groups of devices capable of interacting with one another, to solve complex tasks by exploiting local computation and communication. Although all multi-agent systems are characterized by scalability, robustness, and autonomy, these architectures can be distinguished according to the properties of their constituent elements. This thesis considers three types of multi-agent systems and, for each of them, proposes novel distributed solutions to problems typical of smart environments. Wireless Sensor Networks - The first part of the thesis focuses on the development of effective clustering strategies for wireless sensor networks deployed in industrial settings. Taking into account both the acquired data and the network topology, two algorithms (one centralized and one distributed) are proposed to group the nodes into non-overlapping local clusters in order to improve the self-organization capabilities of the system. Multi-Camera Systems - The second part of the thesis addresses the video-surveillance problem in the context of smart visual sensor networks. First, attitude estimation is considered, which involves recovering the orientation of each agent in the system with respect to a global inertial frame.
Then the perimeter patrolling problem is addressed, whereby the boundaries of a given area must be repeatedly monitored by a set of cameras. Both problems are treated within the framework of distributed optimization and solved through the iterative minimization of a suitable cost function. Formations of Aerial Platforms - The third part of the thesis is devoted to autonomous aerial platforms. Focusing on the single vehicle, two properties are evaluated, namely the ability to control position and attitude independently and robustness to the loss of a motor. Two nonlinear controllers are then described, which aim to keep a given platform in static hover at a fixed position with constant orientation. Finally, attention turns to flocks of aerial platforms, studying both the stabilization of a given formation and the control of its motion along predefined directions. To this end, bearing rigidity theory is studied for systems evolving in the three-dimensional Special Euclidean space. The thesis thus progresses from the study of fixed to fully actuated multi-agent systems used in smart-environment applications in which the number of degrees of freedom to be managed increases.
Perrot, Clément. „Imagerie directe de systèmes planétaires avec SPHERE et prédiction des performances de MICADO sur l’E-ELT“. Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC212/document.
This thesis is set in the context of the study of the formation and evolution of planetary systems using high-contrast imaging, also known as direct imaging in contrast to so-called "indirect" detection methods. The work I present in this manuscript is divided into two distinct parts. The first part concerns the observational component of my thesis, using the SPHERE instrument installed at the Very Large Telescope. This work was done as part of the consortium of the same name. The purpose of the SPHERE instrument is to detect and characterize young and massive exoplanets, but also circumstellar disks ranging from very young protoplanetary disks to older debris disks. In this manuscript, I present my contribution to the SHINE program, a large survey with an integration time of 200 nights of observation, the goal of which is the detection of new exoplanets and the spectral and orbital characterization of some previously known companions. I also present the two studies of circumstellar disks that I carried out, around the stars HD 141569 and HIP 86598. The first study allowed the discovery of concentric rings at about ten AU from the star, along with an unusual flux asymmetry in the disk. The second study concerns the discovery of a debris disk that also shows an unusual flux asymmetry. The second part concerns the instrumental component of my thesis, carried out within the MICADO consortium, in charge of the design of the camera of the same name, which will be one of the first-light instruments of the European Extremely Large Telescope (ELT). In this manuscript, I present the study in which I define the design of some components of the coronagraphic mode of MICADO while taking into account the constraints of the instrument, which is not dedicated to high-contrast imaging, unlike SPHERE.
Lamprecht, Bernhard [author]. „A Testbed for Vision-based Advanced Driver Assistance Systems with Special Emphasis on Multi-Camera Calibration and Depth Perception / Bernhard Lamprecht“. Aachen : Shaker, 2008. http://d-nb.info/1161303995/34.
Esparza García, José Domingo [author], and Bernd [academic supervisor] Jähne. „3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems / José Domingo Esparza García ; Betreuer: Bernd Jähne“. Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180501810/34.
Šolony, Marek. „Lokalizace objektů v prostoru“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236626.
Macknojia, Rizwan. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.
Auvinet, Edouard. „Analyse d'information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l'analyse de la marche“. Thesis, Rennes 2, 2012. http://hdl.handle.net/1866/9770.
This thesis is concerned with defining new clinical investigation methods to assess the impact of ageing on motricity. In particular, it focuses on two main motor disturbances that can appear with ageing: falls and gait impairment. These two disturbances remain poorly understood, and their clinical analysis presents real scientific and technological challenges. In this thesis, we propose novel measuring methods usable in everyday life or in the gait clinic, with a minimum of technical constraints. In the first part, we address the problem of fall detection at home, which has been widely discussed in recent years. In particular, we propose an approach that exploits the subject's volume, reconstructed from multiple calibrated cameras. Such methods are generally very sensitive to the occlusions that inevitably occur in the home, and we therefore propose an original approach that is much more robust to them. Its efficiency and real-time operation have been validated on more than two dozen videos of falls and decoys, with results approaching 100% sensitivity and specificity when at least four cameras are used. In the second part, we go a little further in exploiting the reconstructed volumes of a person performing a particular motor task, treadmill walking, in a clinical diagnostic setting, analysing more specifically the quality of gait. For this we develop the concept of using a depth camera to quantify the spatial and temporal asymmetry of lower-limb movement during walking. After detecting each step in time, the method compares the surface of each leg with that of the corresponding opposite leg in the opposite step. A validation performed on a cohort of 20 subjects showed the viability of the approach.
Completed under joint supervision (cotutelle) with the M2S laboratory of Rennes 2.
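The step-wise left/right surface comparison described in the abstract can be condensed into a simple asymmetry index. The sketch below is an illustrative reconstruction, not the thesis implementation; the function name, the scalar "surface" inputs, and the exact normalization are all assumptions.

```python
# Hypothetical asymmetry index in the spirit of the method above: for each
# matched pair of steps, compare a scalar "surface" measurement of one leg
# with that of the opposite leg, normalized by their sum.

def asymmetry_index(left_surfaces, right_surfaces):
    """Mean normalized left/right surface difference over matched steps.

    0.0 means perfectly symmetric gait; values grow with asymmetry.
    """
    assert len(left_surfaces) == len(right_surfaces)
    ratios = [
        abs(l - r) / (l + r)          # normalized difference per step pair
        for l, r in zip(left_surfaces, right_surfaces)
        if (l + r) > 0
    ]
    return sum(ratios) / len(ratios)

# A perfectly symmetric walker: identical surfaces on both sides.
print(asymmetry_index([10.0, 10.0], [10.0, 10.0]))  # 0.0
```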
Mhiri, Rawia. „Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones“. Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0014/document.
Driver-assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is motion and structure estimation (Structure from Motion), which supports several tasks, including the detection of obstacles and road markings, localisation, and mapping. To estimate their movements, such systems use relatively expensive sensors. In order to market such systems on a large scale, it is necessary to develop applications with low-cost devices, and in this context vision systems are a good alternative. A new method based on 2D/2D approaches for an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at absolute scale, focusing on estimating the scale factors. The proposed method, called the Triangle Method, is based on the use of three images forming a triangle: two images from the same camera and one image from a neighbouring camera. The algorithm relies on three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated. The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and presents satisfactory results for curved trajectories. To refine the initial estimation, errors due to inaccuracies in the scale estimation are reduced by an optimization method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The presented approach is validated on sequences of real road scenes and evaluated against ground truth obtained with a differential GPS.
Finally, another fundamental application in the fields of driver assistance and automated driving is road and obstacle detection. A method is presented for an asynchronous system based on sparse disparity maps.
Castanheiro, Letícia Ferrari. „Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /“. Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.
Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can produce a lightweight, small, and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques for calibrating a dual-fisheye system are presented in the literature. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications of this type of system are presented. The calibration bundle adjustment was performed in the CMC (calibration of multiple cameras) software using Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. In order to evaluate the improvement gained by using points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations that lie only in the hemispherical field, and (2) points in the entire image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic, and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered i... (Complete abstract: click electronic access below)
Master's
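The four projection models named in the abstract are standard fisheye mappings from the incidence angle theta (measured from the optical axis) to a radial image distance r. A minimal sketch of these mappings, with the Conrady-Brown distortion terms omitted and the focal length f = 1 chosen only for illustration:

```python
import math

# Standard fisheye projection models: each maps the incidence angle theta
# (radians, from the optical axis) to a radial image distance r for focal
# length f. Distortion terms (Conrady-Brown) are omitted in this sketch.

def r_equidistant(f, theta):
    return f * theta

def r_equisolid(f, theta):
    return 2.0 * f * math.sin(theta / 2.0)

def r_stereographic(f, theta):
    return 2.0 * f * math.tan(theta / 2.0)

def r_orthogonal(f, theta):
    return f * math.sin(theta)

# At theta = 95 deg (the edge of a 190-degree hyper-hemispherical lens)
# the orthogonal model has already folded back (r decreases past 90 deg),
# which is one reason it is unsuitable for such lenses.
theta = math.radians(95.0)
for model in (r_equidistant, r_equisolid, r_stereographic, r_orthogonal):
    print(model.__name__, round(model(1.0, theta), 4))
```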
Howard, Shaun Michael. „Deep Learning for Sensor Fusion“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.
Vestin, Albin, and Gustav Strandberg. „Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms“. Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.
Der volle Inhalt der QuelleBélanger, Lucie. „Calibration de systèmes de caméras et projecteurs dans des applications de création multimédia“. Thèse, 2009. http://hdl.handle.net/1866/3864.
This thesis focuses on computer vision applications for technological art projects. Camera and projector calibration is discussed in the context of tracking applications and 3D reconstruction in the visual and performance arts. The thesis is based on two collaborations with Québécois artists Daniel Danis and Nicolas Reeves. Projective geometry and classical camera-calibration techniques, such as planar calibration and calibration from epipolar geometry, are detailed to introduce the techniques implemented in both artistic projects. The project realized in collaboration with Nicolas Reeves consists of calibrating a pan-tilt camera-projector system in order to adapt videos to be projected in real time onto mobile cubic screens. To fulfil this project, we used classical camera-calibration techniques combined with our proposed camera-pose calibration technique for pan-tilt systems. This technique uses elliptic planes, generated by the observation of a point in the scene while the camera is panning, to compute the camera pose relative to the rotation centre of the pan-tilt system. The project developed in collaboration with Daniel Danis is based on multi-camera calibration. For this studio-theatre project, we developed a multi-camera calibration algorithm to be used with a Wiimote network. The technique, based on epipolar geometry, allows 3D reconstruction of a trajectory in a large environment at low cost. The results obtained with the implemented calibration techniques are presented alongside their application in real public performance contexts.
Kim, Jae-Hak. „Camera Motion Estimation for Multi-Camera Systems“. PhD thesis, 2008. http://hdl.handle.net/1885/49364.
Chen, Guan-Ting, and 陳冠廷. „Bandwidth Expansion in Camera Communication Systems with Multi-camera Receiver“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/73241495685978357225.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 103 (2014-2015)
This thesis proposes a system that improves the throughput of a camera-based visible light communication (VLC) system by using two or more Complementary Metal-Oxide-Semiconductor (CMOS) rolling-shutter cameras instead of one. VLC is a data-transmission technology that uses an optical signal to carry digital information by controlling an LED's blinking frequency. In a single-camera system, the highest usable frequency of the transmitted signal is limited by the Nyquist rate, determined by the read-out duration of the rolling-shutter mechanism. In this work, we lift this limitation by using two or more cameras, enabling the use of signal frequencies higher than the Nyquist rate. This allows us to use a larger number of frequencies, i.e., a higher modulation order, and improves the system throughput. Potentially, the developed technique can be used in advanced driver-assistance systems (ADAS), indoor positioning, and augmented-reality systems using VLC. In our proposed system, we use rolling-shutter cameras with different image sensors as the receiver. Because the cameras have different capture rates, the highest frequency each camera can determine is also different. When the blinking frequency exceeds a camera's Nyquist frequency, that camera misjudges it as a lower (aliased) frequency. In this thesis, we propose a scheme to recover the correct high frequency from the different misjudged low-frequency values reported by each camera. To evaluate the feasibility of this scheme, we use a software-defined radio (SDR) to implement the transmitter and off-the-shelf experimental cameras as the receiver. We believe this technology can be generalized and used for a wide range of applications in the future.
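The recovery idea described above can be illustrated with the classic aliasing relation: a sampler at rate fs reports min(f mod fs, fs - f mod fs) for a tone at f, and two different sample rates disambiguate a candidate set of modulation frequencies. This sketch is not the thesis implementation; all rates and the candidate alphabet are made up.

```python
# Illustrative sketch of resolving a transmitter frequency above the
# cameras' Nyquist limits from the aliased frequencies that two cameras
# with different (hypothetical) sampling rates would each report.

def aliased(f, fs):
    """Frequency that a sampler at rate fs reports for a real tone f."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

def resolve(candidates, readings):
    """Pick the candidate frequency consistent with every camera's
    (sample_rate, observed_alias) reading."""
    matches = [
        f for f in candidates
        if all(abs(aliased(f, fs) - obs) < 1e-6 for fs, obs in readings)
    ]
    assert len(matches) == 1, "candidate set must disambiguate the tone"
    return matches[0]

# Two cameras with different row rates observe the same blinking LED.
fs1, fs2 = 10_000.0, 12_000.0
true_f = 17_000.0                       # above both Nyquist limits
readings = [(fs1, aliased(true_f, fs1)), (fs2, aliased(true_f, fs2))]
print(resolve([15_000.0, 16_000.0, 17_000.0, 18_000.0], readings))  # 17000.0
```

With a known modulation alphabet, each camera's aliased reading prunes the candidates, and the intersection identifies the transmitted frequency.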
Chen, Chung Hao. „Automated Surveillance Systems with Multi-Camera and Robotic Platforms“. 2009. http://trace.tennessee.edu/utk_graddiss/20.
Hung, Che-Yung, and 洪哲詠. „Camera-assisted Calibration Techniques for Merging Multi-projector Systems“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/97628708470738795905.
Der volle Inhalt der QuelleChen, Wei-Jen, und 陳威任. „Video Recording Scheduling Algorithms for Real Time Multi-Camera Surveillance Systems“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27932006256583392594.
Fu Jen Catholic University
Master's Program, Department of Electrical Engineering
Academic year 100 (2011-2012)
In a multi-channel video surveillance system, because of limits on storage capacity and processing speed, only a subset of the frames from the multiple channels can be recorded. Since each channel may have a different recording frame rate, the recorded frames of a given channel should have equal temporal distance between any two consecutive frames. We define two cost functions to evaluate scheduling quality. The first is to minimize the total distance jitter over all channels while meeting the real-time requirement. The second is to minimize the same cost subject to the constraint that the distance jitter of any channel does not exceed a given bound. Both problems are formulated as zero-one integer linear programming problems. For resource-constrained embedded systems, we propose several scheduling algorithms that obtain solutions efficiently. The proposed algorithms were implemented in C. Experimental results compare the scheduling quality of the different algorithms.
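The first cost function described above can be sketched as follows: for each channel, measure how far the gaps between consecutive recorded frames deviate from that channel's ideal (equal) spacing, then sum over channels. Names and the exact cost form are illustrative assumptions, not the thesis formulation.

```python
# Hedged sketch of a distance-jitter cost for a multi-channel recording
# schedule: per channel, sum the deviations of actual inter-frame gaps
# from the channel's ideal gap, then total over all channels.

def channel_jitter(slots, ideal_gap):
    """Sum over consecutive recorded frames of |actual gap - ideal gap|."""
    return sum(
        abs((b - a) - ideal_gap) for a, b in zip(slots, slots[1:])
    )

def total_jitter(schedule):
    """schedule: list of (recorded_slots, ideal_gap) pairs, one per channel."""
    return sum(channel_jitter(slots, gap) for slots, gap in schedule)

# Channel A wants a frame every 2 slots and got exactly that; channel B
# wants one every 3 slots but one frame slipped by a slot.
perfect = ([0, 2, 4, 6], 2)
slipped = ([0, 3, 7, 9], 3)   # gaps 3, 4, 2 -> jitter 0 + 1 + 1 = 2
print(total_jitter([perfect, slipped]))  # 2
```

A scheduler would then search over zero-one frame-selection variables to minimize this total, which is where the integer linear programming formulation comes in.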
Schacter, David. „Multi-Camera Active-vision System Reconfiguration for Deformable Object Motion Capture“. Thesis, 2014. http://hdl.handle.net/1807/44060.
Ren, You-Lin, and 任宥霖. „The Integration of Coordinate Systems from Multi-View Camera Groups for Shape-From-Silhouette Technique“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/r55jgt.
National Central University
Department of Mechanical Engineering
Academic year 106 (2017-2018)
This study develops a process for integrating the coordinate systems of multi-view camera groups for the shape-from-silhouette (SFS) technique. Popular 3D-modeling pipelines based on the SFS method usually obtain the geometry and color information of an object with a rotary table. However, a rotary table rotates about only one axis, which restricts the shooting angles, especially for the top and bottom views. In the SFS method, this limitation produces artifacts at the top and bottom of the generated 3D model. If the object is tipped over, repositioned on the rotary table, and the images retaken, the missing top/bottom information of the 3D model can be replenished. In order to integrate all the silhouette data taken from different views into a single coordinate system, this study develops an alignment-by-image-matching (AIM) algorithm to establish the spatial distribution of all camera positions. In this algorithm, the silhouette data obtained in the tipped positions are set as targets. The 3D model is transformed into a predicted position that simulates one of the tipped positions, and its shape is projected onto the imaging plane of the camera to obtain a predicted silhouette, which serves as the subject. This subject silhouette is then compared with the corresponding target. The AIM algorithm minimizes the difference between the two and calculates the translation and rotation needed to adjust the subject in 3D space. When the sum of the differences over all tipped positions is minimal, all camera positions (in the auxiliary views) can be integrated into the coordinate system of the primary view, and a complete 3D model can be rebuilt by the SFS method from the silhouette data of all views. Finally, this study demonstrates three examples rebuilt with the developed integration process to verify the proposed approach.
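The comparison step at the heart of the alignment idea above can be sketched in miniature: score how well a predicted silhouette matches a target silhouette, then keep the candidate pose whose projection scores best. Representing silhouettes as sets of occupied pixels and using their symmetric difference as the score are simplifying assumptions for illustration only.

```python
# Minimal sketch of silhouette matching: the difference measure counts
# pixels covered by exactly one of the two silhouettes, and the search
# keeps the candidate pose whose projected silhouette differs least.

def silhouette_difference(predicted, target):
    """Number of pixels covered by exactly one of the two silhouettes."""
    return len(predicted ^ target)

def best_pose(candidate_projections, target):
    """candidate_projections: {pose_label: silhouette_set}; returns the
    pose whose projected silhouette differs least from the target."""
    return min(
        candidate_projections,
        key=lambda pose: silhouette_difference(candidate_projections[pose], target),
    )

target = {(0, 0), (0, 1), (1, 0), (1, 1)}
candidates = {
    "pose_a": {(0, 0), (0, 1), (1, 0), (1, 1)},      # exact match
    "pose_b": {(0, 0), (0, 1), (2, 0), (2, 1)},      # shifted by one row
}
print(best_pose(candidates, target))  # pose_a
```

A full implementation would generate each candidate silhouette by projecting the 3D model under a trial translation and rotation, and iterate the pose update until the summed difference over all tipped positions stops decreasing.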
Lu, Ming-Kun, and 呂鳴崑. „Multi-Camera Vision-based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Human Computer Interactive Systems“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vgss3p.
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (2011-2012)
Nowadays, multi-touch technology has become a popular topic. Multi-touch has been implemented in several ways, including resistive and capacitive sensing; because of their limitations, however, these implementations cannot support large screens. This thesis therefore proposes and implements multi-camera vision-based finger detection, tracking, and event identification techniques for multi-touch sensing. The proposed system detects fingers pressing on an acrylic board by capturing the infrared light with four infrared cameras. The captured infrared points, which correspond to the touched points of the fingers, can serve as input devices and provide a convenient human-computer interface. Compared with conventional touch technology, multi-touch technology allows users to input complex commands. The proposed multi-touch point detection algorithm identifies the touched points using bright-object segmentation techniques. The extracted bright objects are then tracked, and their trajectories are recorded. Furthermore, the system analyzes these trajectories and identifies the corresponding events pre-defined in the system. For applications, this thesis aims to provide a simple, easy-to-operate human-computer interface in which users input commands by touching and moving their fingers. The proposed system is implemented with a table-sized screen that supports multi-user interaction.
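The bright-object segmentation idea described above can be sketched as thresholding followed by connected-component grouping, with each blob's centroid serving as a candidate touch point. This is an illustrative reconstruction, not the thesis code; the threshold value and the toy image are made up.

```python
# Illustrative bright-object segmentation: threshold an infrared frame,
# flood-fill 4-connected bright pixels into blobs, and return each blob's
# centroid as a candidate touch point.

def touch_points(image, threshold):
    """image: 2D list of grayscale values. Returns one (row, col)
    centroid per 4-connected bright blob, in scan order."""
    rows, cols = len(image), len(image[0])
    seen, centroids = set(), []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:                      # flood fill one blob
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                centroids.append((cy, cx))
    return centroids

frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 8],
    [0, 0, 0, 0, 0],
]
print(touch_points(frame, threshold=8))  # [(1.5, 1.5), (2.0, 4.0)]
```

In the full system, the centroids from each frame would feed the tracker, whose recorded trajectories are then matched against the pre-defined events.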
Betrabet, Siddhant S. „Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3d Lidar and Multi-Camera Setup“. Thesis, 2020. http://hdl.handle.net/1805/24776.
Der volle Inhalt der Quelle
Analyzing the behavior of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track objects accurately and produce a clear map of their trajectories relative to the coordinate frame(s) of interest. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects. These computational problems are commonly solved to aid scenario reconstruction for the objects of interest. Objects can be tracked in various ways, using sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and Inertial Navigation System (INS) rigs. One relatively common approach to DATMO and SLAM combines a 3D LIDAR and multiple monocular cameras with an inertial measurement unit (IMU); the redundancy helps maintain object classification and tracking through sensor fusion when sensor-specific algorithms prove ineffectual because one sensor falls short due to its limitations. The use of an IMU together with sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors enables more effective tracking, exploiting the potential of each sensor while increasing perceptual accuracy. The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world.
Since cars are far more commonly observed on the road than e-scooters, we propose a data collection system that can be mounted on an e-scooter, together with an offline processing pipeline, to collect data for understanding the behavior of e-scooters themselves. In this thesis, we explore a data collection system comprising a 3D LIDAR sensor, multiple monocular cameras, and an IMU on an e-scooter, as well as an offline method for processing the collected data to aid scenario reconstruction.
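The geometric core of fusing a 3D LIDAR with monocular cameras is projecting LIDAR points into each camera image so detections can be associated across sensors. A minimal pinhole-model sketch; the intrinsics, identity extrinsics, and absence of lens distortion are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length and principal point are
# illustrative values, not calibration results from the thesis).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project_lidar_to_image(points_lidar, R, t):
    """Transform LIDAR-frame points into the camera frame and project them
    with the pinhole model (no lens distortion). Points behind the camera
    are discarded."""
    pts_cam = points_lidar @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    pix = pts_cam @ K.T
    return pix[:, :2] / pix[:, 2:3]

# Identity extrinsics for the toy example: LIDAR and camera frames coincide.
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
pix = project_lidar_to_image(pts, np.eye(3), np.zeros(3))
# pix -> [[640., 360.], [800., 360.]]
```

In a real pipeline the extrinsics (R, t) per camera come from calibration, and the projected points are matched against image detections for classification and tracking.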
(9708467), Siddhant Srinath Betrabet. „Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D LIDAR and Multi-Camera Setup“. Thesis, 2021.
Den vollen Inhalt der Quelle finden
Parnian, Neda. „Integration of Local Positioning System & Strapdown Inertial Navigation System for Hand-Held Tool Tracking“. Thesis, 2008. http://hdl.handle.net/10012/4043.
Der volle Inhalt der Quelle
„The Effects of a Multi-View Camera System on Spatial Cognition, Cognitive Workload and Performance in a Minimally Invasive Surgery Task“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53914.
Der volle Inhalt der Quelle
Dissertation/Thesis
Master's Thesis, Human Systems Engineering, 2019
(8781872), Yaan Zhang. „Improvement of Structured Light Systems Using Computer Vision Techniques“. Thesis, 2020.
Den vollen Inhalt der Quelle finden
In this thesis, we propose computer vision techniques for 3D reconstruction and object height measurement using a single camera and multiple laser emitters whose projections intersect on the image plane. Time-division and color-division methods are first investigated for our structured light system. Although the color-division method offers better accuracy for object height measurement, it requires laser emitters equipped with lights of different colors; furthermore, it is sensitive to light exposure in the measurement environment. Next, a new multi-level random sample consensus (MLRANSAC) algorithm is developed. The proposed MLRANSAC method not only offers high accuracy for object height measurement but also eliminates the need for laser emitters of different colors. Our experimental results have validated the effectiveness of the MLRANSAC algorithm.
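A single consensus level of the kind a multi-level RANSAC builds on can be illustrated with a basic RANSAC line fit, as one might use to isolate a projected laser line from outlier detections. This is illustrative only; the MLRANSAC algorithm of the thesis is more involved, and the data here are toy values:

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Fit a line ax + by + c = 0 (unit normal) by random sampling and
    return the largest consensus set of inliers found."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # normal of the line through the pair
        norm = (a * a + b * b) ** 0.5
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Toy data: ten points on the laser line y = x, plus two gross outliers.
pts = [(float(i), float(i)) for i in range(10)] + [(0.0, 9.0), (9.0, 0.0)]
inliers = ransac_line(pts, seed=1)  # the ten collinear points survive
```

Sampling a minimal set, scoring consensus, and keeping the best hypothesis is the pattern each level of a multi-level scheme repeats on progressively constrained data.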
Mauricio, Emanuel Adelino Ferreira. „Localização de capsula endoscópica utilizando informação visual“. Master's thesis, 2018. http://hdl.handle.net/10316/86666.
Der volle Inhalt der Quelle
The usefulness of the images captured in a capsule endoscopy exam depends not only on the information contained in each image but also on the ability to locate that image within the digestive system. The objective of this dissertation is to estimate the relative pose of a panoramic multi-camera endoscopic capsule from the images using geometric methods. The capsule used was the CapsoCam SV2 from CapsoVision, which has four radially arranged cameras. The generalized camera model and the generalized epipolar constraint (GEC) were used to estimate the relative pose between frames. The 17-point algorithm, Kneip's iterative solution, and the 17-point RANSAC algorithm were used to solve the GEC. Integrating the relative poses makes it possible to create a 3D model of the digestive system in which each photograph can be located, adding an extra dimension to the medical exam. A substantial part of the work was dedicated to developing two distinct simulators of a multi-camera system analogous to the one used by the capsule. The first simulator simulates the projection of 3D points, defined in a global coordinate system, onto the normalized image plane; the image thus consists of 2D points whose correspondences are known a priori, so there is no need to work with RGB images. The second simulator generates RGB images using MATLAB's capabilities in order to bring the generated data closer to the real data obtained with the capsule. The configuration of the simulators' visual system can be changed easily: cameras can be added or removed, and the intrinsic and extrinsic parameters of each camera can be modified. A detailed analysis of the images captured by the CapsoCam SV2 was carried out with respect to feature density, feature matching quality, image redundancy, and continuity of the number of features.
Several laboratory tests were carried out, photographing checkerboard patterns of known dimensions in order to calibrate each camera separately. The intrinsic parameters of each camera were estimated with rather poor results. The estimated relative poses had very large reprojection errors, which prevented the objective from being met. Nevertheless, the image analysis and the simulators developed contribute greatly toward estimating the pose with acceptable accuracy in the near future.
The information extracted from the images captured by an endoscopic capsule is only as useful as the ability to locate the portion of intestine being imaged. The objective of this dissertation is to estimate the relative pose of a multi-camera endoscopic capsule with a 360-degree panoramic field of view from the captured images. We used the CapsoCam SV2 from CapsoVision as the object of study, applying the generalized camera model and the generalized epipolar constraint (GEC) to determine the relative motion of the capsule between frames. To solve the GEC we used the 17-point algorithm, Kneip's iterative solution, and the 17-point RANSAC algorithm from the OpenGV library. Integrating the successive relative motions and the 3D reconstruction yields a 3D model of the gastrointestinal tract and the path taken by the device. In this way we can determine where each photo was taken, adding an extra dimension to the medical exam. A substantial part of this work was dedicated to the development of two different simulators of a multi-camera system analogous to the CapsoCam SV2. The first simulator performs a simple projection of 3D points, defined in a global coordinate system, onto the normalized image plane of each camera; the simulated image thus consists of 2D points whose correspondences between frames are known, so the relative pose estimation algorithms can be applied directly. The second simulator generates RGB images using MATLAB's capabilities, bringing the simulated data closer to the real data. The configuration of the visual system can easily be changed in both simulators: the intrinsic and extrinsic parameters of each camera, and the number of cameras, can be modified. A detailed analysis of the images taken by the CapsoCam SV2 was also in the scope of this dissertation, covering feature density, feature matching quality, image redundancy, and correspondence continuity.
Several tests were undertaken to calibrate each individual camera by photographing checkerboard patterns of known dimensions. Each camera's intrinsic parameters were calibrated successfully. Still, the estimated relative poses resulted in huge reprojection errors, which means the objective was not met. The image analysis and the simulators nevertheless provide a great contribution toward the objective.
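The generalized epipolar constraint solved here by the 17-point and Kneip methods extends the central-camera epipolar relation x2ᵀEx1 = 0, with E = [t]×R, to rays from cameras that do not share a projection center. The central building block can be checked numerically with a toy motion and point (illustrative values, not data from the dissertation):

```python
import numpy as np

def essential_from_rt(R, t):
    """E = [t]_x R for a relative motion (R, t) between two central views."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

# Toy relative motion: pure translation along x.
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
X = np.array([0.5, 0.2, 4.0])            # a 3D point seen by both views
x1 = X / X[2]                            # normalized image point in view 1
X2 = R @ X + t
x2 = X2 / X2[2]                          # normalized image point in view 2
E = essential_from_rt(R, t)
residual = float(x2 @ E @ x1)            # ~0 for a correct correspondence
```

In the generalized setting each observation is a Plücker line anchored at its own camera center, which is what raises the minimal linear solver from 8 to 17 correspondences.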
(6843914), Radhika Ravi. „Interactive Environment For The Calibration And Visualization Of Multi-sensor Mobile Mapping Systems“. Thesis, 2019.
Den vollen Inhalt der Quelle finden
Rizwan, Macknojia. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thèse, 2013. http://hdl.handle.net/10393/23976.
Der volle Inhalt der QuellePal, Madhumita. „Accurate and Efficient Algorithms for Star Sensor Based Micro-Satellite Attitude and Attitude Rate Estimation“. Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3428.
Der volle Inhalt der QuellePal, Madhumita. „Accurate and Efficient Algorithms for Star Sensor Based Micro-Satellite Attitude and Attitude Rate Estimation“. Thesis, 2013. http://etd.iisc.ernet.in/2005/3428.
Der volle Inhalt der Quelle