Dissertations / Theses on the topic 'Multi-Camera System'

Consult the top 50 dissertations / theses for your research on the topic 'Multi-Camera System.'

1

Vibeck, Alexander. "Synchronization of a Multi Camera System." Thesis, Linköpings universitet, Datorseende, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119408.

Full text
Abstract:
In a synchronized multi-camera system it is imperative that the synchronization error between the different cameras is as close to zero as possible and that the jitter of the presumed frame rate is as small as possible. This is even more important when such systems are used in an autonomous vehicle trying to sense its surroundings: we would never hand over control to an autonomous vehicle if we could not trust the data it uses to move around. The purpose of this thesis was to build a synchronization setup for a multi-camera system using state-of-the-art RayTrix digital cameras that will be used in the iQMatic project involving autonomous heavy-duty vehicles. The iQMatic project is a collaboration between several Swedish industrial partners and universities. The thesis also involved software development for the multi-camera system. Two synchronization techniques were implemented and then analysed against the system requirements: a hardware trigger, i.e. an external trigger using a microcontroller, and a software trigger using the API of the digital cameras. Experiments were conducted by testing the different trigger modes with the developed multi-camera software. The conclusions show that the hardware trigger is preferable in this particular system, showing more stability and better statistics against the system requirements than the software trigger. The thesis also shows that additional experiments are needed for a more accurate analysis.
iQMatic
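The trigger-timing criteria discussed in the abstract (synchronization error between cameras and frame-rate jitter) can be quantified from per-camera frame timestamps. A minimal sketch, assuming timestamp arrays are available from each camera's API; function and variable names are illustrative:

```python
import numpy as np

def sync_stats(ts_a, ts_b, target_period):
    """Quantify synchronization between two cameras from frame timestamps.

    ts_a, ts_b: arrays of capture timestamps (seconds), one per frame.
    target_period: nominal frame period (e.g. 1/30 s).
    Returns (mean synchronization error, worst frame-interval jitter).
    """
    ts_a, ts_b = np.asarray(ts_a), np.asarray(ts_b)
    n = min(len(ts_a), len(ts_b))
    # Synchronization error: offset between matching frames of the two cameras.
    sync_err = np.abs(ts_a[:n] - ts_b[:n]).mean()
    # Jitter: deviation of the actual frame interval from the nominal period.
    jitter_a = np.abs(np.diff(ts_a) - target_period).max()
    jitter_b = np.abs(np.diff(ts_b) - target_period).max()
    return sync_err, max(jitter_a, jitter_b)
```

A hardware trigger would be expected to drive both numbers toward zero, which is the kind of comparison the thesis makes between the two trigger modes.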
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research had been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras that can be solved using our proposed, modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using the proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP.

We tested the proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution under L∞ for the relative motion of multi-camera systems can be achieved.
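The linear solution step described above, stacking bilinear/trilinear constraints into a system of linear equations and solving it with SVD, boils down to finding the null vector of the stacked matrix. A generic sketch of that numerical core (not the thesis's exact equations):

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A x = 0 subject to ||x|| = 1, via SVD.

    The minimizer is the right singular vector associated with the
    smallest singular value of A. In motion estimation, each constraint
    (e.g. a bilinear relation with known rotation) contributes one row
    of A, and x holds the unknowns such as the translation direction.
    """
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```

The solution is recovered only up to sign and scale, which is consistent with the abstract's remark that the scale of translation cannot be determined in certain configurations.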
3

Mortensen, Daniel T. "Foreground Removal in a Multi-Camera System." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7669.

Full text
Abstract:
Traditionally, whiteboards have been used to brainstorm, teach, and convey ideas with others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed which can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Related research has previously combined multiple images, identified and removed unrelated objects (also referred to as foreground) in a single image, and corrected for warping differences between camera frames. However, this is the first attempt to solve this problem using a multi-camera system. The problem can be subdivided into two main components: fusing multiple images into one cohesive frame, and detecting and removing foreground objects, replacing the foreground information with the most recent background (desired) information. For the first component, homographic transformations are used to create a mathematical mapping from the input image to the desired reference frame. Blending techniques are then applied to remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling in conjunction with additional classification algorithms are used.
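The homographic mapping used in the first component can be illustrated with a small sketch: applying a 3x3 homography to image points, including the perspective divide. This is a generic illustration; the actual system would estimate H from point correspondences (e.g. with a robust fit) before warping:

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D points through a 3x3 homography H (projective transform).

    pts: (N, 2) array. Returns the transformed (N, 2) array after
    homogeneous normalization; this is how each camera's view is
    brought into a shared reference frame before blending.
    """
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T        # to homogeneous coordinates
    return homog[:, :2] / homog[:, 2:3]         # perspective divide
```

After all input images are warped into the reference frame, the remaining seams are what the blending stage removes.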
4

Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.

Full text
5

Åkesson, Ulrik. "Design of a multi-camera system for object identification, localisation, and visual servoing." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.

Full text
Abstract:
In this thesis, the development of a stereo camera system for an intelligent tool is presented. The task of the system is to identify and localise objects so that the tool can guide a robot. Different approaches to object detection have been implemented and evaluated, and the system's ability to localise objects has been tested. The results show that the system can achieve a localisation accuracy below 5 mm.
6

Turesson, Eric. "Multi-camera Computer Vision for Object Tracking: A comparative study." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.

Full text
Abstract:
Background: Video surveillance is a growing area that can help deter crime, support investigations, and gather statistics, among other ways it can aid society. Its efficiency could, however, be increased by introducing tracking, more specifically tracking between cameras in a network. Automating this process could reduce the need for humans to monitor and review footage, since the system can track and inform the relevant people on its own. This has a wide array of uses, such as forensic investigation, crime alerting, or tracking down people who have disappeared. Objectives: We want to investigate the common setup of real-time multi-target multi-camera tracking (MTMCT) systems, how the components in an MTMCT system affect each other and the complete system, and how image enhancement can affect the MTMCT. Methods: To achieve our objectives, we conducted a systematic literature review to gather information. Using this information, we implemented an MTMCT system in which we evaluated the components to see how they interact in the complete system. Lastly, we implemented two image enhancement techniques to see how they affect the MTMCT. Results: As we discovered, MTMCT is most often constructed using detection for discovering objects, tracking to follow the objects within a single camera, and a re-identification method to ensure that objects across cameras receive the same ID. The components have a considerable effect on each other and can degrade or improve one another; for example, the quality of the bounding boxes affects the data that re-identification can extract. We found that the image enhancement we used did not introduce any significant improvement. Conclusions: The most common structure for MTMCT is detection, tracking, and re-identification. From our findings, we can see that all the components affect each other, but re-identification is the one most affected by the other components and by image enhancement. The two tested image enhancement techniques did not introduce enough improvement, but other image enhancement methods could be used to make the MTMCT perform better. The MTMCT system we constructed did not reach real-time performance.
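The re-identification stage described above, giving the same object a consistent ID across cameras, is often done by comparing appearance embeddings. A toy sketch, assuming the embeddings come from some upstream feature extractor (all names are illustrative):

```python
import numpy as np

def assign_global_ids(gallery, queries, threshold=0.7):
    """Greedy cross-camera re-identification by cosine similarity.

    gallery: dict global_id -> embedding seen in earlier cameras.
    queries: list of embeddings detected by a new camera.
    Each query is matched to the most similar gallery identity above
    the threshold; unmatched queries are given fresh global ids.
    """
    ids, next_id = [], max(gallery, default=-1) + 1
    for q in queries:
        q = q / np.linalg.norm(q)
        best, best_sim = None, threshold
        for gid, emb in gallery.items():
            sim = float(q @ (emb / np.linalg.norm(emb)))
            if sim > best_sim:
                best, best_sim = gid, sim
        if best is None:
            best = next_id
            next_id += 1
            gallery[best] = q
        ids.append(best)
    return ids
```

This also makes the abstract's point concrete: if bounding boxes are poor, the embeddings degrade, and the similarity comparison that re-identification relies on degrades with them.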
7

Bachnak, Rafic A. "Development of a stereo-based multi-camera system for 3-D vision." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.

Full text
8

Becklinger, Nicole Lynn. "Design and test of a multi-camera based orthorectified airborne imaging system." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/461.

Full text
Abstract:
Airborne imaging platforms have been applied to such diverse areas as surveillance, natural disaster monitoring, cartography, and environmental research. However, airborne imaging data can be expensive, out of date, or difficult to interpret. This work introduces an Orthorectified Airborne Imaging (OAI) system designed to provide near real-time images in Google Earth. The OAI system consists of a six-camera airborne image collection system and a ground-based image processing system. Images and position data are transmitted from the air to the ground station using a point-to-point (PTP) data link antenna connection. Upon reaching the ground station, image processing software combines the six individual images into a larger stitched image. Stitched images are processed to remove distortions and then rotated so that north points up (orthorectified). Because the OAI images are very large, they must be broken down into a series of progressively higher-resolution tiles called an image pyramid before being loaded into Google Earth. A KML programming technique called a super overlay is used to load the image pyramid into Google Earth. A program and graphical user interface written in C# create the KML super overlay files according to user specifications. Image resolution and the location of the area being imaged relative to the aircraft are functions of altitude and the position of the imaging cameras. Placement of OAI images in Google Earth lets the user take advantage of the place markers, street names, and navigation features native to the Google Earth environment.
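The image-pyramid step described above can be sketched as a simple calculation: each pyramid level halves the resolution until the coarsest level fits in a single tile. This assumes 256-pixel tiles, a common super-overlay choice but not necessarily the one used in this work:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of pyramid levels needed so the coarsest level fits one tile.

    A Google-Earth-style super overlay then references the tiles of
    every level in KML, loading finer tiles as the user zooms in.
    """
    levels = 1
    w, h = width, height
    while w > tile or h > tile:
        w, h = math.ceil(w / 2), math.ceil(h / 2)
        levels += 1
    return levels
```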
9

Beriault, Silvain. "Multi-camera system design, calibration and three-dimensional reconstruction for markerless motion capture." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27957.

Full text
Abstract:
Recently, significant advances have been made in many sub-areas regarding the problem of markerless human motion capture. However, markerless solutions still tend to introduce major simplifications, especially in early stages of the process, that temper the robustness and the generality of any subsequent modules and, consequently, of the whole application. This thesis concentrates on improving the aspects of multi-camera system design, multi-camera calibration and shape-from-silhouette volumetric reconstruction. In Chapter 3, a thoughtful system analysis is first proposed with the objective of achieving an optimal synchronized multi-camera system. Chapter 4 proposes an easy-to-use multi-camera calibration technique to estimate the relative positioning and orientation of every camera with sub-pixel accuracy. In Chapter 5 a robust shape-from-silhouette algorithm, with precise voxel coloring, is developed. Overall, the proposed framework is successful to reconstruct various 3D human postures and, in particular, complex and self-occlusive pianist postures in real-world (minimally constrained) scenes.
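The shape-from-silhouette idea of Chapter 5 can be illustrated in a toy form: a voxel of the reconstruction volume survives only if every camera's silhouette contains its projection. This simplified sketch assumes orthographic projection functions supplied by the caller, whereas a real system would use the calibrated camera projections:

```python
import numpy as np

def carve(silhouettes, grid_size=16):
    """Toy shape-from-silhouette over a unit cube of voxels.

    silhouettes: list of (project_fn, mask) pairs, where project_fn maps
    voxel centers (x, y, z) in [0, 1)^3 to normalized (row, col) image
    coordinates of the boolean silhouette mask.
    A voxel is kept only if it falls inside every silhouette.
    """
    occ = np.ones((grid_size,) * 3, dtype=bool)
    # Voxel centers, flattened to an (N, 3) array of coordinates.
    coords = (np.indices(occ.shape).reshape(3, -1).T + 0.5) / grid_size
    for project, mask in silhouettes:
        rows, cols = project(coords[:, 0], coords[:, 1], coords[:, 2])
        r = np.clip((rows * mask.shape[0]).astype(int), 0, mask.shape[0] - 1)
        c = np.clip((cols * mask.shape[1]).astype(int), 0, mask.shape[1] - 1)
        occ &= mask[r, c].reshape(occ.shape)
    return occ
```

The surviving voxels form the visual hull; the thesis's voxel-coloring step would then assign each surface voxel a color from the cameras that see it.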
10

Santos de Freitas, Rafael Luiz. "Multi-Camera Surveillance System for Time and Motion Studies of Timber Harvesting Operations." UKnowledge, 2019. https://uknowledge.uky.edu/forestry_etds/48.

Full text
Abstract:
Timber harvesting is an important activity in the state of Kentucky; however, there is still a lack of information about the procedures used by local loggers. The stump-to-landing transport of logs with skidders is often the most expensive and time-consuming task in timber harvesting operations. This thesis evaluated the feasibility of using a multi-camera system for time and motion studies of timber harvesting operations. The system was installed on five skidders at three different harvesting sites in Kentucky. The time-stamped video provided accurate time consumption data for each work phase of the skidders, which was used to fit linear regressions and find the influence of skidding distance, skid-trail gradient, and load size on skidding time. The multi-camera systems were found to be a reliable tool for time and motion studies at timber harvesting sites. Six different time equations and two speed equations were fitted for skidding cycles and sections of skid-trails, for both loaded and unloaded skidders. Skid-trail gradient and load size did not have an influence on skidding time. There is a need for future studies of other variables that could affect skidding time and, consequently, cost.
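The regression step described above can be sketched as an ordinary least-squares fit of cycle time against skidding distance; variable names and units are illustrative, not taken from the thesis:

```python
import numpy as np

def fit_skidding_time(distance_m, time_min):
    """Fit time = a + b * distance by ordinary least squares.

    Returns (intercept, slope): the slope is minutes of cycle time
    added per metre of skidding distance, the kind of coefficient the
    study tests for significance along with gradient and load size.
    """
    b, a = np.polyfit(np.asarray(distance_m, dtype=float),
                      np.asarray(time_min, dtype=float), deg=1)
    return a, b
```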
11

HONDA, Toshio, Toshiaki FUJII, and Tadahiko HAMAGUCHI. "Real-Time View-Interpolation System for Super Multi-View 3D Display." Institute of Electronics, Information and Communication Engineers, 2003. http://hdl.handle.net/2237/14998.

Full text
12

Aykin, Murat Deniz. "Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609798/index.pdf.

Full text
Abstract:
Multi-camera measurement systems are widely used to extract information about the 3D configuration or "state" of one or more real-world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system, which are either static or slowly varying. For a single camera, this consists of the internal parameters of the camera optics and construction, while for a multiple-camera system it also includes the geometric positioning of the individual cameras, namely the "external" parameters. Calibration is a necessary step before any actual state measurements can be made with the system. In this thesis, such a multi-camera state measurement system, and in particular the problem of procedurally effective and high-performance calibration of such a system, is considered. The thesis presents a novel calibration algorithm that uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters, which are assumed to be static or slowly varying with respect to the state, resulting in a "super-state" vector. The EKF algorithm is used to recursively estimate this super-state, thereby yielding estimates of the static camera parameters. It is demonstrated by both simulation studies and actual experiments that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions in which the same calibration images are reused. The repeated EKF (R-EKF) provides convergence with a limited number of image frames when the initial target state is accurately provided, while its bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is that it provides a fast calibration procedure with no need for standard or custom-made calibration target plates covering the majority of the camera field of view. Human assistance is also minimized, since all frame data are processed automatically and assistance is limited to making the target throws. The speed of convergence and accuracy of the results promise a field-applicable calibration procedure.
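The super-state idea, augmenting the state with static parameters and letting the filter's covariance shrink as frames arrive, can be illustrated with the simplest possible case: a scalar constant observed directly. This is a linear stand-in for the EKF, not the thesis's actual filter:

```python
import numpy as np

def estimate_static_parameter(measurements, meas_var=1e-2):
    """Kalman-filter estimation of a constant parameter (toy super-state).

    The 'dynamics' of a static calibration parameter are the identity
    with zero process noise, so the filter reduces to recursive
    averaging: each measurement update shrinks the covariance P and
    refines the estimate x toward the true value.
    """
    x, P = 0.0, 1.0                 # initial guess and covariance
    for z in measurements:
        K = P / (P + meas_var)      # Kalman gain (observation matrix H = 1)
        x = x + K * (z - x)         # measurement update
        P = (1.0 - K) * P           # covariance update; no prediction noise
    return x, P
```

In the thesis the same mechanism runs over the nonlinear camera projection of the ballistic trajectory, and the repeated/bidirectional variants reuse the same frames to accelerate this convergence.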
13

Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1190818558/34.

Full text
14

Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider." Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1217404635/34.

Full text
15

Sabino, Danilo Damasceno. "Development of a 3D multi-camera measurement system based on image stitching techniques applied for dynamic measurements of large structures." Ilha Solteira, 2018. http://hdl.handle.net/11449/157103.

Full text
Abstract:
Advisor: João Antonio Pereira
Abstract: The specific objective of this research is to extend the capabilities of three-dimensional (3D) Point Tracking (PT) to identify the dynamic characteristics of large and complex structures, such as utility-scale wind turbine blades. A multi-camera system (composed of multiple independently calibrated stereovision systems) is developed to obtain high spatial resolution of discrete points from displacement measurement over very large areas. A proposal of stitching techniques is presented and employed to perform the alignment of two point clouds, obtained with 3DPT measurement, of a structure under dynamic excitation. The point cloud registration techniques are exploited as a technique for dynamic measuring (displacement) of large structures with high spatial resolution of the model. Three different image registration algorithms are proposed to perform the junction of the points clouds of each stereo system, Principal Component Analysis (PCA), Singular value Decomposition (SVD) and Iterative Closest Point (ICP). Furthermore, operational modal analysis in conjunction with the multi-camera measurement system and registration techniques are used to determine the feasibility of using optical measurements (e.g. three-dimensional point tracking (3DPT)) to estimate the modal parameters of a utility-scale wind turbine blade by comparing with traditional techniques.
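One of the three registration algorithms named in the abstract, the SVD solution for aligning two corresponded point clouds (the Kabsch method), can be sketched as follows. This is a generic reference implementation, assuming the stereo systems provide corresponding points:

```python
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R and translation t such that dst ≈ src @ R.T + t.

    Classic SVD (Kabsch) solution: center both clouds, take the SVD of
    the cross-covariance, and correct the sign so R is a proper rotation.
    Assumes src[i] corresponds to dst[i], as in two stereo systems
    observing shared targets on the structure.
    """
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

ICP iterates this step with re-estimated correspondences, and PCA-based alignment replaces the cross-covariance with each cloud's principal axes; both variants build on the same SVD machinery.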
Doctorate
16

Petit, Benjamin. "Téléprésence, immersion et interactions pour le reconstruction 3D temps-réel." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00584001.

Full text
Abstract:
Immersive and collaborative online 3D environments are rapidly emerging. They raise the issues of the sense of presence within virtual worlds, of immersion, and of interaction capabilities. Multi-camera 3D systems make it possible to extract geometric information (a 3D model) of the observed scene from photometric information. A textured digital model can then be computed in real time and used to ensure the user's presence in the digital space. In this thesis we studied how to couple the presence capability provided by such a system with visual immersion and co-located interactions. This led to an application that couples a head-mounted display, an optical tracking system, and a multi-camera system, so that the user can see their own 3D model correctly aligned with their body and mixed with the virtual objects. We also set up a telepresence experiment across three sites (Bordeaux, Grenoble, Orléans) that allows several users to meet in 3D and collaborate remotely. The textured 3D model gives a very strong impression of the other person's presence and reinforces physical interactions through body language and facial expressions. Finally, we studied how to extract velocity information from the camera data: using optical flow and 2D and 3D correspondences, we can estimate the dense displacement of the 3D model. This data extends the interaction capabilities by enriching the 3D model.
17

Kim, Jae-Hak. "Camera motion estimation for multi-camera systems /." View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.

Full text
18

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Full text
Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect depth sensor's capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one designed to operate with the Xbox 360 video game console, and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, by linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of each sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot.
The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
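The chained calibration described above (each Kinect calibrated with respect to a reference Kinect, and the reference with respect to the robot base) amounts to composing homogeneous transforms. A minimal sketch with illustrative frame names:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_robot_base(p_kinect, T_base_ref, T_ref_kinect):
    """Map a point from one Kinect's frame into the robot base frame.

    Chains the Kinect-to-reference and reference-to-base calibrations,
    so every sensor in the network shares the robot's coordinate system.
    """
    p = np.append(np.asarray(p_kinect, dtype=float), 1.0)  # homogeneous point
    return (T_base_ref @ T_ref_kinect @ p)[:3]
```

This composition is what lets the robot act under visual guidance on a point measured by any sensor in the network.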
19

Šolony, Marek. "Lokalizace objektů v prostoru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236626.

Full text
Abstract:
Virtual reality systems are nowadays a common part of many research institutes due to their low cost and effective visualization of data. They mostly allow visualization and exploration of virtual worlds, but many lack user interaction. In this work we propose a multi-camera optical system that allows effective user interaction, thereby increasing the immersion of the virtual system. The thesis describes the calibration process of multiple cameras using point correspondences.
20

Castanheiro, Letícia Ferrari. "Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /." Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.

Full text
Abstract:
Advisor: Antonio Maria Garcia Tommaselli
Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can produce a lightweight, compact and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques are presented in the literature to calibrate a dual-fisheye system. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications with this type of system are presented. The calibration bundle adjustment was performed in the CMC (calibration of multiple cameras) software using Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. In order to evaluate the improvement from applying points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations that lie only in the hemispherical field, and (2) points in the whole image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered i... (Complete abstract: click electronic access below)
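The four projection models compared in the abstract are standard fisheye mappings relating the incidence angle θ of a ray to its radial distance r on the image; a small reference implementation (f is the focal length):

```python
import math

def fisheye_radius(theta, f, model):
    """Radial image distance r for incidence angle theta (radians).

    Standard fisheye projection models:
      equidistant:     r = f * theta
      equisolid-angle: r = 2 f sin(theta / 2)
      stereographic:   r = 2 f tan(theta / 2)
      orthogonal:      r = f * sin(theta)
    The orthogonal model cannot represent rays beyond 90 degrees,
    which matters for a 190-degree hyper-hemispherical lens.
    """
    if model == "equidistant":
        return f * theta
    if model == "equisolid":
        return 2.0 * f * math.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * math.tan(theta / 2.0)
    if model == "orthogonal":
        return f * math.sin(theta)
    raise ValueError(f"unknown model: {model}")
```

In a calibration such as the one described, these projections are combined with a Conrady-Brown distortion model and the parameters are refined in a bundle adjustment.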
Mestre
APA, Harvard, Vancouver, ISO, and other styles
21

Hammarlund, Emil. "Target-less and targeted multi-camera color calibration." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.

Full text
Abstract:
Multiple camera arrays are seeing more widespread use in a variety of applications, be it for research purposes or for enhancing the viewing experience in entertainment. However, when using multiple cameras, the images produced are often not color consistent, due to a variety of reasons such as differences in lighting, chip-level differences, etc. To address this, a multitude of color calibration algorithms exist. This paper examines two color calibration algorithms, one targeted and one target-less. Both methods were implemented in Python using the libraries OpenCV, Matplotlib, and NumPy. Once the algorithms had been implemented, they were evaluated on two metrics: color range homogeneity and color accuracy to target values. The targeted color calibration algorithm was more effective at improving color accuracy to ground truth than the target-less algorithm, but the target-less algorithm deteriorated the color range homogeneity less than the targeted algorithm. After both methods were tested, an improvement of the targeted color calibration algorithm was attempted. The resulting images were then evaluated on the same two criteria as before; the modified version of the targeted algorithm performed better than the original targeted algorithm with respect to color range homogeneity while maintaining a similar level of color accuracy to ground truth. Furthermore, when the color range homogeneity of the modified targeted algorithm was compared with that of the target-less algorithm, the two performed similarly. Based on these results, it was concluded that the targeted color calibration was superior to the target-less algorithm.
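Targeted color calibration of the kind evaluated here typically fits, per camera, a correction that maps measured chart colors to the known target values. A hedged sketch, assuming a simple affine (3×4) correction fitted by least squares; the function names are illustrative, not the paper's code:

```python
import numpy as np

def fit_color_matrix(measured, target):
    """Fit a 3x4 affine color-correction matrix by least squares.

    measured, target: (N, 3) RGB arrays, e.g. chart patches vs. reference values."""
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # append bias term
    M, *_ = np.linalg.lstsq(A, target, rcond=None)              # (4, 3) solution
    return M.T                                                  # (3, 4)

def apply_color_matrix(M, rgb):
    """Apply the fitted correction to an (N, 3) array of colors."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return A @ M.T
```

With a 24-patch chart this is an overdetermined solve; a target-less method would instead have to derive the mapping from overlapping image content.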
APA, Harvard, Vancouver, ISO, and other styles
22

Krucki, Kevin C. "Person Re-identification in Multi-Camera Surveillance Systems." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448997579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Jiang, Xiaoyan [Verfasser]. "Multi-Object Tracking-by-Detection Using Multi-Camera Systems / Xiaoyan Jiang." München : Verlag Dr. Hut, 2016. http://d-nb.info/1084385325/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Persson, Thom. "Building of a Stereo Camera System." Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3579.

Full text
Abstract:
This project consists of a prototype stereo camera rig on which two DSLR cameras can be mounted, and a multithreaded software application, written in C++, that can move the cameras, change camera settings and take pictures. The resulting 3D images can be viewed on a 2-view autostereoscopic display. Camera position is controlled by a stepper motor, which is driven by a PIC microcontroller. All communication between the PIC and the computer is made over USB. The camera shutters are synchronized, so it is possible to take pictures of moving objects at a distance of 2.5 m or more. The results show that there are several things to do before the prototype can be considered a product ready for the market, most of all the camera callback functionality.
APA, Harvard, Vancouver, ISO, and other styles
25

Nadella, Suman. "Multi camera stereo and tracking patient motion for SPECT scanning systems." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Feature matching in multiple cameras; Multi camera stereo computation; Patient motion tracking; SPECT imaging. Includes bibliographical references (p. 84-88).
APA, Harvard, Vancouver, ISO, and other styles
26

Knorr, Moritz [Verfasser]. "Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr." Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sankaranarayanan, Aswin C. "Robust and efficient inference of scene and object motion in multi-camera systems." College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9855.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
28

Mhiri, Rawia. "Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones." Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0014/document.

Full text
Abstract:
Driver assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is motion and structure estimation (Structure From Motion), which supports several tasks, including the detection of obstacles and road markings, localisation and mapping. To estimate their movements, such systems use relatively expensive sensors. In order to market such systems on a large scale, it is necessary to develop applications with low-cost devices; in this context, vision systems are a good alternative. A new method based on 2D/2D approaches from an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at the absolute scale, focusing on estimating the scale factors. The proposed method, called the Triangle Method, is based on the use of three images forming a triangle shape: two images from the same camera and one image from a neighboring camera. The algorithm has three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated. The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and presents satisfactory results for curved trajectories. To refine the initial estimate, errors due to inaccuracies in the scale estimation are reduced by an optimization method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The presented approach is validated on sequences of real road scenes and evaluated with respect to ground truth obtained from a differential GPS. Finally, another fundamental application in the fields of driver assistance and automated driving is road and obstacle detection. A method is presented for an asynchronous system based on sparse disparity maps.
APA, Harvard, Vancouver, ISO, and other styles
29

Knorr, Moritz [Verfasser], and C. [Akademischer Betreuer] Stiller. "Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr ; Betreuer: C. Stiller." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856798/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Esquivel, Sandro [Verfasser]. "Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel." Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lamprecht, Bernhard. "A testbed for vision based advanced driver assistance systems with special emphasis on multi-camera calibration and depth perception /." Aachen : Shaker, 2008. http://d-nb.info/990314847/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lamprecht, Bernhard [Verfasser]. "A Testbed for Vision-based Advanced Driver Assistance Systems with Special Emphasis on Multi-Camera Calibration and Depth Perception / Bernhard Lamprecht." Aachen : Shaker, 2008. http://d-nb.info/1161303995/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Esparza, García José Domingo [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems / José Domingo Esparza García ; Betreuer: Bernd Jähne." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180501810/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Mennillo, Laurent. "Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC022/document.

Full text
Abstract:
This Ph.D. thesis, which has been carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. On the other hand, digital video cameras are much cheaper, which has led to their increased use in newer vehicles to provide driving assistance functions, such as parking assistance or emergency braking. Furthermore, this relatively common implementation now makes it possible to consider their use to reconstruct the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and oldest category concerns stereo methods, which use several cameras with overlapping fields of view in order to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs with a short baseline, which allows for the dense matching of feature points to estimate disparity maps that are then used to compute the motions of the scene. The other category concerns monocular methods, which only use one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of other objects. These methods are more difficult in that they have to address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, but also the problem of the relative scale estimation of these objects before their aggregation within the static scene. The industrial motive for this work lies in the use of existing multi-camera systems already present in actual vehicles to perform dynamic scene reconstruction. These systems, being mostly composed of a front camera accompanied by several surround fisheye cameras in wide-baseline stereo, have led to the development of a multibody reconstruction method dedicated to such heterogeneous systems. The proposed method is incremental and allows for the reconstruction of sparse mobile points as well as their trajectories using several geometric constraints. Finally, a quantitative and qualitative evaluation is provided, conducted on two separate datasets, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Howard, Shaun Michael. "Deep Learning for Sensor Fusion." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities; the difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from the more certain states obtained when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
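The non-causal estimation described in this abstract is commonly realized as a forward Kalman filter followed by a backward Rauch-Tung-Striebel smoothing pass over the recorded sequence. A minimal 1-D constant-velocity sketch (parameters and noise model are illustrative; the thesis's sensor and motion models are more complex):

```python
import numpy as np

def kalman_rts(zs, dt=0.1, q=1.0, r=0.25):
    """Forward Kalman filter + backward RTS smoother, 1-D constant velocity.

    zs: position measurements. Returns (filtered, smoothed) state arrays."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                    # motion model
    H = np.array([[1.0, 0.0]])                               # measure position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10.0
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:                                             # causal forward pass
        xp, Pp = F @ x, F @ P @ F.T + Q
        xs_p.append(xp); Ps_p.append(Pp)
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs_f.append(x); Ps_f.append(P)
    xs_s = [None] * len(zs)                                  # non-causal backward pass
    xs_s[-1], x_n, P_n = xs_f[-1], xs_f[-1], Ps_f[-1]
    for k in range(len(zs) - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        x_n = xs_f[k] + C @ (x_n - xs_p[k + 1])
        P_n = Ps_f[k] + C @ (P_n - Ps_p[k + 1]) @ C.T
        xs_s[k] = x_n
    return np.array(xs_f), np.array(xs_s)
```

Because the backward pass reuses information from later, more certain states, the smoothed track is typically closer to ground truth than the causal filter output, which is exactly the property exploited for offline validation data.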
APA, Harvard, Vancouver, ISO, and other styles
37

Hsu, Ho-Jan, and 許賀然. "An Integrated Multi-Camera Surveillance System." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/10041404081015098461.

Full text
Abstract:
Master's thesis
Asia University
Department of Computer Science and Information Engineering (Master's Program)
Academic year 96 (2008)
In recent years, the growth of smart digital surveillance systems has boomed. Beyond traditional surveillance functions, a modern multi-camera surveillance system offers automatic detection, tracking of moving objects, identification of moving objects and behavior analysis. These new functions focus on anomaly detection and object identification, but they lack the relations between cameras and the capability of querying historical surveillance footage. Therefore, this thesis proposes an environmental surveillance system which relates the cameras to each other and integrates object tracking, object identification, and the recording and querying of object features. The proposed multi-camera surveillance system can perform real-time security protection and emergency management. The work is based on computer vision and uses several cameras to construct an environmental surveillance system. The cameras can be deployed in many kinds of places, for example for security management in areas with poor public order, in residences and in office buildings. Experiments have shown that, through the connections between cameras built by the system in a real scene, the system is able to record object features sufficiently, trace objects in real time, reduce the time needed to query historical records and improve the capability of emergency management.
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." Phd thesis, 2008. http://hdl.handle.net/1885/49364.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution. In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. ...
APA, Harvard, Vancouver, ISO, and other styles
39

Chou, Jay, and 周節. "Multi-view Face Detection for Multi-camera Surveillance System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/40357075267860486315.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 99 (2010)
In this paper, we propose a multi-view face detection system which is capable of detecting all targets' faces in the given images and of illustrating the bird's-eye-view direction of each face in 3-D space in a multi-camera surveillance system. Unlike existing approaches, the proposed system neither directly detects targets in the 2-D image domain nor projects 2-D detection results back into 3-D space for correspondence. Instead, our system searches for targets over small cubes in 3-D space. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. This approach helps us efficiently combine 2-D information from different camera views and suppress the ambiguity caused by 2-D detection errors.
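The core operation of searching 3-D cubes and checking them against every view needs only the 3×4 projection matrices of the calibrated cameras. A toy sketch of that projection-and-test step (hypothetical matrices and image sizes, not the authors' code):

```python
import numpy as np

def project_point(P, X):
    """Project a homogeneous 3-D point X (length 4) with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def cube_visible_in_views(Ps, center, image_size=(640, 480)):
    """Per-camera flags: does the cube center project inside each image?"""
    X = np.append(center, 1.0)
    flags = []
    for P in Ps:
        u, v = project_point(P, X)
        flags.append(0 <= u < image_size[0] and 0 <= v < image_size[1])
    return flags
```

A real system would project all eight cube corners and run a face classifier on each resulting 2-D region; the per-view results are then fused per cube rather than per image.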
APA, Harvard, Vancouver, ISO, and other styles
40

Lyu, Hua-Lun, and 呂華綸. "Video Composition System using Multi-Camera Configuration." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/64091373356586822748.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 94 (2006)
As digital videos become more and more popular, their applications in different fields have been spreading widely. Research has moved from simply capturing shots toward techniques such as abstraction and summarization to display the exciting performances in the clips, or even adding audio techniques to let users listen to music while watching the films. However, among these developments, filming still relies on a single video camera, which makes it impossible to capture the performance and a close-up of the performers at the same time. Using multiple video cameras can therefore meet the expectation of capturing both rich content and close-ups. This thesis takes multi-view video as the foundation for building an automatic video editing system. There are two important issues in video composition: video synchronization and video switching. Video synchronization matches the time of videos from different viewing directions to a global time axis. The system first uses abrupt shot detection to segment the captured videos, and then uses velocity-curve similarity to search for the synchronization point. The goal of video switching is to retrieve different contents of the videos to appeal to users and allow them to watch the attention-worthy shots. We designed three content-based shot types for the system, categorized by what users notice in the videos: camera motion shots, face shots and fragment shots. We calculate the importance of each shot to determine whether it should be selected into the composed film. The experiments use ball games as the filming content, covering different viewing angles, different content-based shot weightings, indoor and outdoor environments, and the filming of many people. We also analyze the synchronization of the film and the importance of the shots in different circumstances.
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Wei-Min, and 楊偉民. "People Tracking in a Multi-Camera System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/10535676919106613563.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering (Master's Program)
Academic year 98 (2010)
This thesis presents a system for tracking a target of interest across a multi-camera system. The analysis includes three parts. The first part is object segmentation using a Bayesian model. The second part is object tracking: using the object segmentation results and Mean-Shift, the target of interest is tracked in the current camera. The last part collaborates information from each camera to track the target across the multi-camera system. The developed system lets users define the multi-camera environment as they want, and a video browsing interface lets users choose the target of interest; finally, the displayed result helps them learn the target's trajectories quickly. The experiment uses three surveillance cameras in an outdoor environment, recorded for one hour. We also discuss the problems encountered in realizing our system and their solutions.
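The Mean-Shift step mentioned in this abstract iteratively shifts a search window toward the centroid of a target-probability map (e.g. a color-histogram back-projection). A self-contained numpy sketch of that iteration (an illustration of the technique, not the thesis implementation):

```python
import numpy as np

def mean_shift(prob, window, n_iter=10):
    """Shift an (x, y, w, h) window toward the centroid of a probability map."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:                      # no target mass under the window
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * roi).sum() / total))  # centroid inside the window
        cy = int(round((ys * roi).sum() / total))
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):              # converged
            break
        x, y = nx, ny
    return x, y, w, h
```

In a tracker, `prob` is recomputed per frame from the target's appearance model, and the converged window from the previous frame seeds the next.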
APA, Harvard, Vancouver, ISO, and other styles
42

Quan-Wei, Zheng, and 鄭權偉. "An Integrated Multi-Camera Vehicle Tracking System." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/85147292056173216049.

Full text
Abstract:
Master's thesis
Asia University
Department of Computer Science and Information Engineering (Master's Program)
Academic year 96 (2008)
With advances in technology, almost every household today owns more than one car, which contributes to traffic congestion. Consequently, when there is an accident, or when suspicious cars involved in crimes must be investigated, the task has become very difficult: the police often have to spend a great amount of manpower and resources to find the suspicious cars before obtaining clues to resolve a case. This work uses computer vision technology to develop an integrated multi-camera vehicle tracking system. Vehicle information detected by the system is transmitted to a network database; relevant personnel can then use a web browser to query the recorded results and quickly find the car they are after. The proposed system can follow cars of different sizes and colors. In addition, the system can lock onto the possible route of a car and provide tracking information to relevant personnel or authorities. Compared with other systems, the proposed design has the advantages of speed, simplicity, concurrent tracking of several cars, detection at several road junctions, detection of vehicle color, and significant efficiency in real-time detection. Tests on several road videos have shown favorable results, and it is believed the system can be most helpful in tracking down vehicles.
APA, Harvard, Vancouver, ISO, and other styles
43

HSIEH, YI-YUN, and 謝易耘. "Multi-camera fusion based Bead Wire Measurement System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/w6tks5.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate School of Engineering Science and Technology
Academic year 105 (2017)
Most bead wire inner circumferences are measured with contact measuring instruments based on PLCs. Because the outer layer of the bead wire is wrapped in rubber, the bead wire is often deformed by the ladder-plate distraction. In this paper, a multi-camera fusion based bead wire measurement system that measures in a non-contact manner is proposed. Because the size of the bead wire is about 400 mm × 400 mm, the accuracy required for the bead wire cannot be achieved with the resolution of a single camera. In this paper, the images of multiple cameras are stitched into a merged image through the camera projection geometry to obtain a high-resolution image with a large field of view. In this study, the size of the checkerboard is 370 mm × 370 mm. The average error and standard deviation of the checkerboard measurement are 0.035 mm and 0.047, respectively. The standard deviation for the 14-inch bead wire is 0.086, for the 15-inch bead wire 0.145, and for the 14-inch bead wire 0.247.
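Stitching several camera views into one merged image through the camera projection geometry amounts to mapping each image into a common reference frame via a homography. A small sketch of the geometric bookkeeping behind such a merge (assumed 3×3 homographies; not the system's code):

```python
import numpy as np

def warp_points(H, pts):
    """Map (N, 2) pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

def merged_canvas_size(H_list, sizes):
    """Bounding box of all camera images mapped into the reference frame.

    H_list: per-camera homographies to the reference frame.
    sizes: per-camera (width, height). Returns (min_xy, max_xy)."""
    corners = []
    for H, (w, h) in zip(H_list, sizes):
        c = np.array([[0, 0], [w, 0], [0, h], [w, h]], float)
        corners.append(warp_points(H, c))
    allc = np.vstack(corners)
    return allc.min(axis=0), allc.max(axis=0)
```

The resulting bounding box defines the high-resolution merged canvas; each source image is then warped into it and blended in the overlap regions.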
APA, Harvard, Vancouver, ISO, and other styles
44

Chu, Ming-Chu, and 朱明初. "Multi-Camera Vehicle Identification in Tunnel Surveillance System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/44075939369656614391.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
101
Surveillance cameras are widely deployed in tunnels to monitor traffic conditions and traffic safety. Automatically identifying the same vehicles across the multiple cameras in a tunnel is essential for analyzing traffic conditions along the road. This thesis proposes a multi-camera vehicle identification system for tunnel surveillance videos. In each camera, vehicles are detected with a Haar-like feature detector and their image features are extracted with the OpponentSIFT descriptor. The proposed Spatiotemporal Successive Dynamic Programming (S2DP) algorithm identifies vehicles across two cameras by exploiting the ordering constraint of the tunnel environment. Two further methods, a Real-Time (RT) algorithm and an Offline Refinement (OR) algorithm, address different requirements: RT identifies vehicles in real time by searching a limited range of candidates, while OR refines the identification result of S2DP. Comprehensive experiments on various datasets demonstrate the satisfactory performance of the proposed multi-camera vehicle identification methods, which outperform state-of-the-art algorithms.
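The ordering constraint the abstract mentions (in a tunnel, vehicles cannot overtake the camera order arbitrarily, so matches must be monotonic) turns the matching problem into a classic alignment-style dynamic program. A simplified sketch, not the thesis's actual S2DP formulation: every vehicle seen in camera A is matched to a distinct vehicle in camera B while preserving order, minimizing total feature dissimilarity:

```python
def match_with_ordering(costs):
    """Order-preserving assignment between two camera sequences.
    costs[i][j] = dissimilarity between vehicle i in camera A and vehicle j in
    camera B. If A_i is matched to B_j, then A_{i+1} may only match some B_{j'>j}.
    Returns the minimum total matching cost (assumes len(A) <= len(B))."""
    m, n = len(costs), len(costs[0])
    INF = float("inf")
    # dp[i][j]: min cost of matching the first i A-vehicles into the first j B-vehicles
    dp = [[INF] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):
        dp[0][j] = 0.0          # matching zero vehicles costs nothing
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i][j - 1],                      # skip B_j
                           dp[i - 1][j - 1] + costs[i - 1][j - 1])  # match A_i to B_j
    return dp[m][n]
```

With the crossing-match cost matrix `[[1, 9], [9, 1]]` the DP is forced into the in-order pairing (A1–B1, A2–B2) for a total of 2, whereas an unconstrained assignment could pick any pairing.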
APA, Harvard, Vancouver, ISO, and other styles
45

Peng, Yi-Hong, and 彭依弘. "The Design of Multi-Object Tracking System in a Multi-Camera Network." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/zpmg55.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electrical and Control Engineering
104
In public security surveillance, cameras are widely used to record incidents and criminal events. However, when supervisors browse the recorded video after an event, the large number of cameras costs a great deal of time and human resources. To address this problem, this thesis designs a system that tracks multiple objects across a multi-camera network: users select objects from video clips, and the system tracks them across different cameras. The thesis makes three contributions. First, it proposes a feature modulation mechanism that helps the system track different objects accurately. Second, it proposes a camera-switching mechanism: using the architecture of the multi-camera network, the system determines the next camera in which the objects will appear, improving tracking efficiency. Third, it completes a prototype of multi-object tracking in a multi-camera network, integrating object and camera information into the monitoring system and reducing the burden on supervisors investigating video after the fact.
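The camera-switching idea can be sketched as a graph search: instead of re-detecting the target in every camera, only the cameras adjacent to the current one in the network topology are searched, and the detection whose appearance feature best matches the target wins. Everything below (the topology, feature vectors, and squared-distance matching) is a hypothetical illustration, not the thesis's mechanism:

```python
# Hypothetical camera-network topology: which cameras a target leaving
# `current` can appear in next.
CAMERA_GRAPH = {"cam1": ["cam2", "cam3"], "cam2": ["cam4"],
                "cam3": ["cam4"], "cam4": []}

def handover(current, target_feature, detections, graph):
    """Search only the cameras adjacent to `current` and return the
    (distance, camera, feature) of the best-matching detection, or None."""
    best = None
    for cam in graph.get(current, []):
        for feat in detections.get(cam, []):
            # Squared Euclidean distance between appearance features.
            d = sum((a - b) ** 2 for a, b in zip(target_feature, feat))
            if best is None or d < best[0]:
                best = (d, cam, feat)
    return best
```

Restricting the search to the topological neighbours is what yields the efficiency gain the abstract claims: the cost per handover scales with a camera's degree, not with the size of the whole network.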
APA, Harvard, Vancouver, ISO, and other styles
46

Zheng, Sicong. "Pixel-level Image Fusion Algorithms for Multi-camera Imaging System." 2010. http://trace.tennessee.edu/utk_gradthes/848.

Full text
Abstract:
This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. Focusing on pixel-level image fusion, the stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. We propose image fusion algorithms with low computational cost based on spatial mixture analysis. Segment-weighted-average image fusion combines several low-spatial-resolution data sources from different sensors to create a large, high-resolution fused image; this research develops its segment-based step using a stepwise divide-and-combine process. In the second stage, linear interpolation optimization is used to sharpen the image resolution. The algorithms are implemented on the graphical user interface we developed; fusion of multiple sensor images is easily accommodated, and results are demonstrated at multiple scales. Using quantitative estimates such as mutual information, we obtain quantifiable experimental results, and we also use image morphing to generate fused image sequences that simulate fusion results. In deploying our pixel-level fusion approaches, we observed a challenge with popular image fusion methods: while their high computational cost and complex processing steps produce accurate fused results, they are hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational demand.
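Two of the building blocks named in the abstract are simple enough to sketch directly: a per-pixel weighted average of two registered images, and mutual information as a quantitative fusion-quality estimate. This is a generic illustration of those standard techniques, not the thesis's segment-based algorithm:

```python
import math
from collections import Counter

def weighted_average_fuse(img_a, img_b, w_a):
    """Pixel-level fusion: per-pixel weighted average of two registered images
    (given as equal-sized lists of rows), with weight w_a on the first image."""
    return [[w_a * a + (1.0 - w_a) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def mutual_information(img_a, img_b):
    """Mutual information (in bits) between two equal-sized greyscale images,
    estimated from their joint and marginal intensity histograms."""
    a = [p for row in img_a for p in row]
    b = [p for row in img_b for p in row]
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())
```

For identical images the mutual information equals the image's entropy, which is why MI between each source and the fused result is a common proxy for how much source information the fusion preserved.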
APA, Harvard, Vancouver, ISO, and other styles
47

Chu, Che Yu, and 褚哲宇. "Research on Calibration and Object Tracking of Multi-Camera System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/82863838785752771138.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Medical Mechatronics Engineering
97
With a traditional stereo-camera tracking system, the tracked object can easily be occluded or leave the cameras' field of view. The purpose of this research is to develop a multi-camera system for object tracking that can be applied to a surgical navigation system. A two-dimensional calibration method is used to calibrate the multi-camera system, aligning every camera coordinate system to a single world coordinate system. An LED light ball serves as the marker. The system uses four cameras to obtain different fields of view, providing more information than two cameras alone. A program then computes a weight for each camera and chooses the best one to continue tracking. The system improves computational efficiency and the robustness of object tracking.
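One way to realize the per-camera weighting the abstract describes is to score each camera by how well it currently sees the marker and switch to the highest-scoring one. The weight function below (distance of the marker from the nearest image border, zero if the marker is outside the view) is purely a hypothetical heuristic for illustration; the thesis does not specify its weighting formula:

```python
def camera_weight(marker_px, img_w, img_h):
    """Hypothetical visibility weight for one camera: 0.0 if the marker is
    outside the image, otherwise the pixel distance to the nearest border
    (markers near the edge are about to leave the view)."""
    x, y = marker_px
    if not (0 <= x < img_w and 0 <= y < img_h):
        return 0.0
    return float(min(x, img_w - 1 - x, y, img_h - 1 - y))

def best_camera(observations, img_w=640, img_h=480):
    """Choose the camera with the highest weight to continue tracking.
    `observations` maps camera name -> current marker pixel position."""
    return max(observations,
               key=lambda cam: camera_weight(observations[cam], img_w, img_h))
```

Because all cameras are calibrated to one world coordinate system, switching cameras this way does not interrupt the tracked trajectory.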
APA, Harvard, Vancouver, ISO, and other styles
48

Hsiao, Ching-Chun, and 蕭晴駿. "Model-Based Pose Estimation for Multi-Camera Motion Capture System." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/79385950596603901992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hwang, Chien-Yao, and 黃建堯. "Implementation of Multi-Camera Cooperative Handover in a Surveillance System." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/43922425175410100102.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
97
Surveillance systems are growing rapidly in the security industry. A security surveillance system is used to monitor abnormal events, but a surveillance environment usually requires multiple cameras, making it impractical to staff an effective watch: a person cannot focus on monitoring many screens at a time. We therefore design a multi-camera cooperative handover method that gives a surveillance system the capability of seamless monitoring. The objective of the research is to develop a multi-camera cooperative handover method with real-time tracking: a camera can track a moving target so as to keep it within view at all times, and the control station controls the cooperation between cameras, harmonizing them based on their spatial relationships. The research also proposes an effective deployment method for cameras. The proposed system has been tested in simulated situations, and experimental results demonstrate the effectiveness of target tracking in the proposed system.
APA, Harvard, Vancouver, ISO, and other styles
50

Hsieh, Chia-Chun, and 謝佳峻. "A Study on Ray-Space Interpolation for Multi-camera System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/87259885057267163905.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering and Institute of Electronics
102
Traditional 3D modeling requires a complicated workflow: finding matching points, projecting into 3D space, building a point cloud, matching the point cloud, meshing it, and mapping images onto the mesh. Every step is complicated. This thesis instead uses the Ray-Space model, which represents each light ray in the real world by its position and direction; every point in the real world corresponds to exactly one locus in Ray-Space. First, we find matching points between two cameras. Second, we use the matching points to obtain the Ray-Space parameters. Third, we use the epipolar plane image to complete the Ray-Space.
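In the standard ray-space parameterization (which the abstract appears to follow, though its exact notation is not given), a ray is described by where it crosses a reference plane, x, and its direction u = tan θ. A scene point (X, Z) then traces the straight line x = X − Z·u in (x, u) space, which is exactly the line seen in an epipolar plane image, and interpolation means evaluating that line at unseen directions. A 1D sketch under those assumptions:

```python
def epi_line_x(X, Z, u):
    """A scene point (X, Z) observed along direction u = tan(theta) crosses
    the reference plane z = 0 at x = X - Z*u: a straight EPI line in (x, u)."""
    return X - Z * u

def point_from_two_rays(u1, x1, u2, x2):
    """Two ray samples of the same scene point determine its (X, Z):
    the line x = X - Z*u is linear in u, so two samples suffice."""
    Z = (x1 - x2) / (u2 - u1)
    return x1 + Z * u1, Z

def interpolate_view(u1, x1, u2, x2, u_new):
    """Synthesize the ray for an unseen direction by evaluating the EPI line."""
    X, Z = point_from_two_rays(u1, x1, u2, x2)
    return epi_line_x(X, Z, u_new)
```

This is why the approach avoids the full mesh pipeline: new views come from interpolating lines in ray-space rather than from reconstructing and texturing explicit geometry.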
APA, Harvard, Vancouver, ISO, and other styles