Doctoral dissertations on the topic "Multi-Camera System"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations on the topic "Multi-Camera System".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the dissertation as a ".pdf" file and read its abstract online, whenever these are available in the record's metadata.
Browse doctoral dissertations from a wide variety of disciplines and compile your bibliography correctly.
Vibeck, Alexander. "Synchronization of a Multi Camera System". Thesis, Linköpings universitet, Datorseende, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119408.
Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems". The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.
Mortensen, Daniel T. "Foreground Removal in a Multi-Camera System". DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7669.
Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.
Åkesson, Ulrik. "Design of a multi-camera system for object identification, localisation, and visual servoing". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.
Turesson, Eric. "Multi-camera Computer Vision for Object Tracking: A comparative study". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.
Bachnak, Rafic A. "Development of a stereo-based multi-camera system for 3-D vision". Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.
Becklinger, Nicole Lynn. "Design and test of a multi-camera based orthorectified airborne imaging system". Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/461.
Beriault, Silvain. "Multi-camera system design, calibration and three-dimensional reconstruction for markerless motion capture". Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27957.
Santos de Freitas, Rafael Luiz. "MULTI-CAMERA SURVEILLANCE SYSTEM FOR TIME AND MOTION STUDIES OF TIMBER HARVESTING OPERATIONS". UKnowledge, 2019. https://uknowledge.uky.edu/forestry_etds/48.
HONDA, Toshio, Toshiaki FUJII, and Tadahiko HAMAGUCHI. "Real-Time View-Interpolation System for Super Multi-View 3D Display". Institute of Electronics, Information and Communication Engineers, 2003. http://hdl.handle.net/2237/14998.
Aykin, Murat Deniz. "Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609798/index.pdf.
Pełny tekst źródłastate&rdquo
of one or more real world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system which are either static or slowly varying. For a single camera, this consist of the internal parameters of the camera device optics and construction while for a multiple camera system, it also includes the geometric positioning of the individual cameras, namely &ldquo
external&rdquo
parameters. The calibration is a necessary step before any actual state measurements can be made from the system. In this thesis, such a multi-camera state measurement system and in particular the problem of procedurally effective and high performance calibration of such a system is considered. This thesis presents a novel calibration algorithm which uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters which are assumed to be static or slowly varying with respect to the state. This results in a &ldquo
super-state&rdquo
vector. The EKF algorithm is used to recursively estimate this super-state hence resulting in the estimates of the static camera parameters. It is demonstrated by both simulation studies as well as actual experiments that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions where the same calibration images are repeatedly used. Repeated EKF (R-EKF) provides convergence with a limited number of image frames when the initial target state is accurately provided while its bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is that it provides a fast calibration procedure where there is no need for any standard or custom made calibration target plates covering the majority of camera field-of-view. Also, human assistance is minimized since all frame data is processed automatically and assistance is limited to making the target throws. The speed of convergence and accuracy of the results promise a field-applicable calibration procedure.
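The augmented "super-state" idea in this abstract can be sketched compactly. The toy model below is a hypothetical 1-D analogue (the function name, tuning values, and the scalar camera model y = f·z are illustrative, not the thesis's implementation): the unknown static scale factor f is appended to the ballistic state [z, v], and a plain EKF estimates the resulting super-state. As the abstract notes, reliable convergence of f to its true value relies on the repeated/bidirectional refinements; this sketch only runs a single forward pass.

```python
def mat_mul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def ekf_scale_calibration(ys, dt, g=9.81):
    """Run an EKF on the augmented super-state [z, v, f]: ballistic height z,
    vertical velocity v, and a static camera scale factor f, with y = f * z."""
    # Initialise consistently with the first two measurements, guessing f = 1.
    x = [ys[0], (ys[1] - ys[0]) / dt, 1.0]
    P = [[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]]
    F = [[1.0, dt, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # f is static
    R = 1e-6  # near-noiseless image measurements
    for y in ys[1:]:
        # Predict with the known ballistic dynamics; gravity, known in absolute
        # units, is what makes the absolute scale observable at all.
        z, v, f = x
        x = [z + v * dt, v - g * dt, f]
        P = mat_mul(mat_mul(F, P), transpose(F))
        # Update with the scalar measurement y = f*z; Jacobian H = [f, 0, z].
        z, v, f = x
        H = [f, 0.0, z]
        PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
        S = sum(H[i] * PHt[i] for i in range(3)) + R
        K = [p / S for p in PHt]
        x = [x[i] + K[i] * (y - f * z) for i in range(3)]
        # Covariance update P = (I - K H) P.
        IKH = [[(1.0 if i == j else 0.0) - K[i] * H[j] for j in range(3)]
               for i in range(3)]
        P = mat_mul(IKH, P)
    return x  # x[2] is the scale-factor estimate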
Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider". Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1190818558/34.
Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider". Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1217404635/34.
Sabino, Danilo Damasceno. "Development of a 3D multi-camera measurement system based on image stitching techniques applied for dynamic measurements of large structures". Ilha Solteira, 2018. http://hdl.handle.net/11449/157103.
Abstract: The specific objective of this research is to extend the capabilities of three-dimensional (3D) Point Tracking (PT) to identify the dynamic characteristics of large and complex structures, such as utility-scale wind turbine blades. A multi-camera system (composed of multiple independently calibrated stereovision systems) is developed to obtain high spatial resolution of discrete points from displacement measurement over very large areas. A proposal of stitching techniques is presented and employed to perform the alignment of two point clouds, obtained with 3DPT measurement, of a structure under dynamic excitation. The point cloud registration techniques are exploited as a technique for dynamic measuring (displacement) of large structures with high spatial resolution of the model. Three different image registration algorithms are proposed to perform the junction of the points clouds of each stereo system, Principal Component Analysis (PCA), Singular value Decomposition (SVD) and Iterative Closest Point (ICP). Furthermore, operational modal analysis in conjunction with the multi-camera measurement system and registration techniques are used to determine the feasibility of using optical measurements (e.g. three-dimensional point tracking (3DPT)) to estimate the modal parameters of a utility-scale wind turbine blade by comparing with traditional techniques.
Doctorate
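The rigid-registration step described in the abstract above (PCA/SVD/ICP alignment of two point clouds) has, in two dimensions, a closed-form solution. The sketch below is an illustrative 2-D analogue of the SVD/Kabsch step, assuming exact point correspondences; the thesis itself works with 3-D clouds, where the rotation comes from a 3×3 SVD:

```python
import math

def register_rigid_2d(P, Q):
    """Closed-form 2-D rigid registration between corresponding point lists:
    find rotation theta and translation t minimising sum |R p + t - q|^2."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n
    cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n
    cqy = sum(q[1] for q in Q) / n
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy       # centred source point
        bx, by = qx - cqx, qy - cqy       # centred target point
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cqx - (c * cpx - s * cpy)        # t = centroid(Q) - R * centroid(P)
    ty = cqy - (s * cpx + c * cpy)
    return theta, (tx, ty)
```

ICP, the third algorithm named in the abstract, simply alternates a nearest-neighbour correspondence search with exactly this kind of closed-form alignment step.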
Petit, Benjamin. "Téléprésence, immersion et interactions pour le reconstruction 3D temps-réel". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00584001.
Kim, Jae-Hak. "Camera motion estimation for multi-camera systems /". View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.
Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.
Šolony, Marek. "Lokalizace objektů v prostoru". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236626.
Castanheiro, Letícia Ferrari. "Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /". Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.
Abstract: Arranging two hyper-hemispherical fisheye lenses in opposite positions can produce a lightweight, compact, and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques for calibrating a dual-fisheye system are presented in the literature. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications of this type of system are presented. The calibrating bundle adjustment was performed in the CMC (calibration of multiple cameras) software using Ricoh Theta video frames of a 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. To evaluate the improvement gained by using points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations only in the hemispherical field, and (2) points in the entire image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic, and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered… (Complete abstract: click electronic access below)
Master's
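The four projection models compared in the abstract above map the incidence angle θ of a ray to a radial image distance r in closed form. A small sketch of the classical textbook equations (f is the focal length; this is illustrative, not code from the thesis):

```python
import math

def fisheye_radius(theta, f, model):
    """Radial distance from the principal point for a ray at incidence angle
    theta (radians) under the four classical fisheye projection models."""
    if model == "equidistant":
        return f * theta
    if model == "equisolid":                  # equisolid-angle
        return 2.0 * f * math.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * math.tan(theta / 2.0)
    if model == "orthogonal":
        return f * math.sin(theta)
    raise ValueError("unknown model: " + model)
```

Note that the orthogonal model stops growing at θ = 90°, so it cannot represent rays beyond a hemisphere, which is one reason a 190° hyper-hemispherical lens favours models such as equisolid-angle or stereographic that remain monotone past 90°.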
Hammarlund, Emil. "Target-less and targeted multi-camera color calibration". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.
Krucki, Kevin C. "Person Re-identification in Multi-Camera Surveillance Systems". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448997579.
Jiang, Xiaoyan [Verfasser]. "Multi-Object Tracking-by-Detection Using Multi-Camera Systems / Xiaoyan Jiang". München : Verlag Dr. Hut, 2016. http://d-nb.info/1084385325/34.
Persson, Thom. "Building of a Stereo Camera System". Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3579.
This project consists of a stereo camera rig that can be fitted with two DSLR cameras, together with a multithreaded application, written in C++, that can move the cameras on the rig, change photographic settings, and take pictures. The result is 3D images that can be viewed on an autostereoscopic display. The camera positions are controlled by a stepper motor, which in turn is driven by a PIC microcontroller; communication between the PIC unit and the computer takes place over USB. The camera shutters are synchronized, so it is possible to photograph moving objects at a distance of 2.5 m or more. The results show that several issues must be addressed in the prototype before it can be considered ready for market, the most important being to obtain working callbacks from the cameras.
Nadella, Suman. "Multi camera stereo and tracking patient motion for SPECT scanning systems". Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.
Keywords: Feature matching in multiple cameras; Multi camera stereo computation; Patient Motion Tracking; SPECT Imaging. Includes bibliographical references (p. 84-88).
Knorr, Moritz [Verfasser]. "Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr". Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.
Sankaranarayanan, Aswin C. "Robust and efficient inference of scene and object motion in multi-camera systems". College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9855.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Mhiri, Rawia. "Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones". Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0014/document.
Driver assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is motion and structure estimation (Structure from Motion), which accomplishes several tasks, including the detection of obstacles and road markings, localisation, and mapping. To estimate their movements, such systems use relatively expensive sensors. In order to market such systems on a large scale, it is necessary to develop applications with low-cost devices, and in this context vision systems are a good alternative. A new method based on 2D/2D approaches over an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at absolute scale, focusing on estimating the scale factors. The proposed method, called the Triangle Method, is based on the use of three images forming a triangle: two images from the same camera and one image from a neighbouring camera. The algorithm rests on three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated. The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and presents satisfactory results for curved trajectories. To refine the initial estimation, errors due to inaccuracies of the scale estimation are reduced by an optimization method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The presented approach is validated on sequences of real road scenes and evaluated against ground truth obtained through a differential GPS.
Finally, another fundamental application in the fields of driver assistance and automated driving is road and obstacles detection. A method is presented for an asynchronous system based on sparse disparity maps
Knorr, Moritz [Verfasser], and C. [Akademischer Betreuer] Stiller. "Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing / Moritz Knorr ; Betreuer: C. Stiller". Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856798/34.
Esquivel, Sandro [Verfasser]. "Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel". Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.
Lamprecht, Bernhard. "A testbed for vision based advanced driver assistance systems with special emphasis on multi-camera calibration and depth perception /". Aachen : Shaker, 2008. http://d-nb.info/990314847/04.
Lamprecht, Bernhard [Verfasser]. "A Testbed for Vision-based Advanced Driver Assistance Systems with Special Emphasis on Multi-Camera Calibration and Depth Perception / Bernhard Lamprecht". Aachen : Shaker, 2008. http://d-nb.info/1161303995/34.
Esparza García, José Domingo [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems / José Domingo Esparza García ; Betreuer: Bernd Jähne". Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180501810/34.
Mennillo, Laurent. "Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC022/document.
This Ph.D. thesis, which has been carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. On the other hand, digital video cameras are much cheaper, which has led to their increased use in newer vehicles to provide driving assistance functions, such as parking assistance or emergency braking. Furthermore, this relatively common implementation now allows to consider their use in order to reconstruct the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and oldest category concerns stereo methods, which use several cameras with overlapping fields of view in order to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs in short baseline, which allows for the dense matching of feature points to estimate disparity maps that are then used to compute the motions of the scene. The other category concerns monocular methods, which only use one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of other objects.
These methods are more difficult in that they have to address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, but also the problem of the relative scale estimation of these objects before their aggregation within the static scene. The industrial motive for this work lies in the use of existing multi-camera systems already present in actual vehicles to perform dynamic scene reconstruction. These systems, being mostly composed of a front camera accompanied by several surround fisheye cameras in wide-baseline stereo, has led to the development of a multibody reconstruction method dedicated to such heterogeneous systems. The proposed method is incremental and allows for the reconstruction of sparse mobile points as well as their trajectory using several geometric constraints. Finally, a quantitative and qualitative evaluation conducted on two separate datasets, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems, is provided
Howard, Shaun Michael. "Deep Learning for Sensor Fusion". Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.
Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.
Hsu, Ho-Jan, and 許賀然. "An Integrated Multi-Camera Surveillance System". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/10041404081015098461.
Asia University
Master's program, Department of Computer Science and Information Engineering
Academic year 96 (2007-2008)
In recent years, smart digital surveillance systems have grown rapidly. Beyond traditional surveillance functions, a modern multi-camera surveillance system can automatically detect and trace moving objects, identify them, and analyze their behavior. Existing systems focus on anomaly detection and object identification, but they neither relate the cameras to one another nor support querying of historical surveillance footage. This thesis therefore proposes an environmental surveillance system that relates the cameras to each other and integrates object tracing, object identification, and the recording and querying of object features. The proposed multi-camera surveillance system supports real-time security protection and emergency management. The work uses computer vision to build an environmental surveillance system from several cameras, which can be deployed in any kind of place, for example for security management in areas with poor public order, in residences, or in office buildings. Experiments in a real scene show that, through the connections the system builds between cameras, it records object features adequately, traces objects in real time, reduces the time needed to query historical records, and improves emergency management.
Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems". Phd thesis, 2008. http://hdl.handle.net/1885/49364.
Chou, Jay, and 周節. "Multi-view Face Detection for Multi-camera Surveillance System". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/40357075267860486315.
National Chiao Tung University
Institute of Electronics
Academic year 99 (2010-2011)
In this paper, we propose a multi-view face detection system that detects all targets' faces in the given images and illustrates the bird's-eye-view direction of each face in 3-D space within a multi-camera surveillance system. Unlike existing approaches, the proposed system neither detects targets directly in the 2-D image domain nor projects 2-D detection results back into 3-D space for correspondence. Instead, it searches for targets over small cubes in 3-D space. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. This approach efficiently combines 2-D information from different camera views and suppresses the ambiguity caused by 2-D detection errors.
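The core geometric step of such a search, projecting each candidate 3-D cube into every calibrated camera view, reduces to the pinhole projection below. This is a minimal sketch with hypothetical intrinsics; the paper's actual calibration data and face classifier are not reproduced here:

```python
def project_point(K, R, t, X):
    """Pinhole projection of world point X into pixel coordinates:
    X_cam = R X + t, then u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    if Xc[2] <= 0:
        return None                       # behind the camera: cube not visible
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)
```

Projecting a candidate cube like this into each view means a 2-D face classifier only has to be evaluated at the projected locations, so no cross-view correspondence problem ever has to be solved in the image domain.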
Lyu, Hua-Lun, and 呂華綸. "Video Composition System using Multi-Camera Configuration". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/64091373356586822748.
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 94 (2005-2006)
As digital video has become increasingly popular, its applications have spread across many fields. Research has moved beyond simply capturing shots to techniques such as abstraction and summarization for displaying the highlights of a clip, and even to audio techniques that let users listen to music while watching the footage. Throughout these developments, however, filming has still relied on a single video camera, which makes it impossible to capture the overall performance and a close-up of the performers at the same time. Filming with multiple video cameras can therefore give users both rich content and close-ups. This thesis builds an automatic video editing system on top of multi-view video. Video composition raises two important issues: video synchronization and video switching. Video synchronization aligns the time of videos taken from different viewing directions to a global time axis: the system first applies abrupt shot detection to segment the captured videos, and then uses velocity-curve similarity to search for the synchronization point. The goal of video switching is to select the video content that will most appeal to users and show them the most noteworthy shots. We designed three content-based shot types, categorized by the parts of a video that users attend to: camera-motion shots, face shots, and fragment shots. The importance of each shot is computed to decide whether it should be selected into the composed film. The experiments use ball games as the filming content and cover different viewing angles, different content-based shot weightings, indoor and outdoor environments, and the filming of crowds.
Also, we analyze the synchronization of the film and the importance of the shots in different circumstances.
Yang, Wei-Min, and 楊偉民. "People Tracking in a Multi-Camera System". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/10535676919106613563.
Tamkang University
Master's program, Department of Computer Science and Information Engineering
Academic year 98 (2009-2010)
This thesis presents a system for tracking a target of interest across a multi-camera system. The analysis has three parts. The first is object segmentation using a Bayesian model. The second is object tracking: using the segmentation results, Mean-Shift tracks the target of interest in the current camera. The last part collaborates the information from each camera to track the target across the multi-camera system. The system lets users define the multi-camera environment as they wish, and a video browsing interface lets them choose the target of interest; the displayed result helps them grasp the target's trajectory quickly. The experiment uses one hour of video recorded by three surveillance cameras in an outdoor environment. We also discuss the problems encountered in realizing the system and their solutions.
Zheng, Quan-Wei, and 鄭權偉. "An Integrated Multi-Camera Vehicle Tracking System". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/85147292056173216049.
Asia University
Master's program, Department of Computer Science and Information Engineering
Academic year 96 (2007-2008)
With advances in technology, almost every household now owns more than one car, which contributes to traffic congestion, and investigating vehicles involved in accidents or crimes has become very difficult: the police must spend a great amount of manpower and resources to locate suspicious cars before obtaining clues to solve a case. This thesis uses computer vision technology to develop an integrated multi-camera vehicle tracking system. Vehicle information detected by the system is transmitted to a network database, and authorized personnel can query the recorded results through a web browser so that they can quickly find the car they are after. The proposed system can follow cars of different sizes and colors, lock onto a car's possible route, and provide the tracking information to the relevant personnel or authorities. Compared with other systems, its design offers speed, simplicity, concurrent tracking of several cars, detection at several road junctions, detection of vehicle color, and significant real-time detection efficiency. Tests on several road videos show favorable results, and the system should be very helpful for tracking down vehicles.
HSIEH, YI-YUN, and 謝易耘. "Multi-camera fusion based Bead Wire Measurement System". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/w6tks5.
National Yunlin University of Science and Technology
Graduate School of Engineering Science and Technology
Academic year 105 (2016-2017)
Most bead-wire inner circumferences are measured with a PLC-based contact measuring instrument. Because the outer layer of the bead wire is wrapped in rubber, the wire is often deformed by the ladder-plate distraction. This thesis proposes a multi-camera-fusion bead wire measurement system that measures in a non-contact manner. The bead wire is about 400 mm × 400 mm, and the required accuracy cannot be achieved at the resolution of a single camera, so the images from multiple cameras are stitched into one merged image through the camera projection geometry, yielding a high-resolution image with a large field of view. In this study, a 370 mm × 370 mm checkerboard was measured with an average error of 0.035 mm and a standard deviation of 0.047. The standard deviation for the 14-inch bead wire is 0.086, for the 15-inch bead wire 0.145, and for the 14-inch bead wire 0.247.
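Stitching the camera images into one merged measurement image rests on planar projective mapping: each pixel is transferred through a 3×3 homography. A minimal sketch (the matrices here are hypothetical; in such a system they would come from calibrating each camera against a common measurement plane):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) of one camera into the merged-mosaic frame via the
    3x3 homography H (homogeneous coordinates, then perspective division)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

With one such homography per camera into a shared plane, the pixels of all cameras land in a single coordinate frame, which is what lets the merged image exceed any single camera's resolution over the full field.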
Chu, Ming-Chu, i 朱明初. "Multi-Camera Vehicle Identification in Tunnel Surveillance System". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/44075939369656614391.
Pełny tekst źródła
National Chiao Tung University
Institute of Computer Science and Engineering
101
Surveillance cameras are widely installed in tunnels to monitor traffic conditions and safety. Automatically identifying vehicles across the multiple cameras within a tunnel is essential for analyzing traffic conditions along the road. This thesis proposes a multi-camera vehicle identification system for tunnel surveillance videos. Within a single camera, vehicles are detected with a Haar-like feature detector and their image features are extracted with the OpponentSIFT descriptor. The proposed Spatiotemporal Successive Dynamic Programming (S2DP) algorithm identifies vehicles across two cameras by exploiting the ordering constraint of the tunnel environment. Two further methods are proposed for different requirements: a Real-Time (RT) algorithm and an Offline Refinement (OR) algorithm. RT identifies vehicles quickly in real time by searching a limited range of candidates, while OR refines the identification result of S2DP. Comprehensive experiments on various datasets demonstrate the satisfactory performance of the proposed multi-camera vehicle identification methods, which outperform state-of-the-art algorithms.
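The ordering constraint S2DP exploits means vehicles cannot overtake arbitrarily in a tunnel, so matched pairs between the two cameras' vehicle sequences must not cross. A generic dynamic program over two ordered descriptor sequences captures that idea; the cost model below (Euclidean descriptor distance plus a flat skip penalty) is an illustrative stand-in, not the thesis's actual formulation.

```python
import numpy as np

def ordered_match(desc_a, desc_b, skip_cost=1.0):
    """Minimal-cost non-crossing matching of two ordered lists of
    descriptor vectors (the tunnel ordering constraint).  Returns the
    total cost; unmatched vehicles pay an illustrative skip penalty."""
    n, m = len(desc_a), len(desc_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * skip_cost
    D[:, 0] = np.arange(n + 1) * skip_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = np.linalg.norm(np.asarray(desc_a[i - 1]) - np.asarray(desc_b[j - 1]))
            D[i, j] = min(D[i - 1, j - 1] + pair,       # match i with j
                          D[i - 1, j] + skip_cost,      # vehicle i unmatched
                          D[i, j - 1] + skip_cost)      # vehicle j unmatched
    return float(D[n, m])
```

Backtracking through `D` would recover the actual identity assignments; the RT variant would additionally restrict `j` to a window around `i`.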
Peng, Yi-Hong, i 彭依弘. "The Design of Multi-Object Tracking System in a Multi-Camera Network". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/zpmg55.
Pełny tekst źródła
National Chiao Tung University
Institute of Electrical and Control Engineering
104
Nowadays, in public-security surveillance, cameras are routinely used to record societal-security and criminal events. However, with the growing number of surveillance cameras, browsing the video after an event consumes a great deal of time and human resources. To address this problem, this thesis designs a system that tracks multiple objects in a multi-camera network: users select objects from video clips, and the system tracks them across different cameras. The thesis makes three contributions. First, it proposes a feature modulation mechanism that helps the system track different objects accurately. Second, it proposes a camera-switching mechanism: using the architecture of the multi-camera network, the system determines the next camera in which the objects will appear, improving tracking efficiency. Third, it completes a prototype of multi-object tracking in a multi-camera network, integrating object and camera information into the monitoring system and reducing the burden on supervisors who investigate video after the fact.
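The camera-switching mechanism relies on knowing the network's topology: which camera an object should reappear in, given where it left the current view. A minimal sketch is a lookup from (camera, exit zone) to the successor camera; the camera ids and zone names below are made up for illustration, not taken from the thesis.

```python
from typing import Optional

# Hypothetical topology of a small camera network: the camera where an
# object reappears after leaving the current camera through a given zone.
TOPOLOGY = {
    ("cam1", "east"): "cam2",
    ("cam2", "east"): "cam3",
    ("cam2", "west"): "cam1",
}

def next_camera(current: str, exit_zone: str) -> Optional[str]:
    """Predict which camera to hand the tracker to, or None if the
    object left the covered area."""
    return TOPOLOGY.get((current, exit_zone))
```

Restricting the re-identification search to the predicted camera is what makes the switching mechanism improve tracking efficiency.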
Zheng, Sicong. "Pixel-level Image Fusion Algorithms for Multi-camera Imaging System". 2010. http://trace.tennessee.edu/utk_gradthes/848.
Pełny tekst źródłaChu, Che Yu, i 褚哲宇. "Research on Calibration and Object Tracking of Multi-Camera System". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/82863838785752771138.
Pełny tekst źródła
Chang Gung University
Graduate Institute of Medical Mechatronics Engineering
97
With a traditional stereo-camera tracking system, the tracked object is easily occluded or may leave the cameras' field of view. The purpose of this research is to develop a multi-camera system that can be applied to a surgical navigation system for object tracking. A two-dimensional calibration method is used to calibrate the multi-camera system, aligning every camera coordinate system with a single world coordinate system. An LED light ball serves as the marker. The multi-camera system uses four cameras with different fields of view, which provides more information than only two cameras. A program then computes a weight for each camera and chooses the best one to continue tracking. The system improves computational efficiency and the robustness of object tracking.
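The selection step the abstract describes, computing a weight per camera and choosing the best one, reduces to an argmax over per-camera scores. In this sketch the weight is an abstract confidence score keyed by camera id; the thesis derives it from marker visibility, which is only assumed here.

```python
from typing import Optional

def best_camera(scores: dict) -> Optional[str]:
    """Pick the camera with the highest tracking weight, or None when
    no camera currently sees the marker (all weights are zero)."""
    cam, weight = max(scores.items(), key=lambda kv: kv[1])
    return cam if weight > 0 else None
```

Re-evaluating the weights every frame lets tracking continue seamlessly when the marker is occluded in one view but visible in another.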
Hsiao, Ching-Chun, i 蕭晴駿. "Model-Based Pose Estimation for Multi-Camera Motion Capture System". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/79385950596603901992.
Pełny tekst źródłaHwang, Chien-Yao, i 黃建堯. "Implementation of Multi-Camera Cooperative Handover in a Surveillance System". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/43922425175410100102.
Pełny tekst źródła
National Chung Cheng University
Department of Electrical Engineering
97
Surveillance systems are growing rapidly in the security industry. A security surveillance system is used to monitor abnormal events, but a surveillance environment always needs multiple cameras, so it becomes impractical to obtain the human resources for effective monitoring: a person cannot focus on more than a limited number of screens at a time. We therefore design a multi-camera cooperative handover method for a surveillance system to achieve seamless monitoring. The objective of this research is to develop a multi-camera cooperative handover method with real-time tracking: each camera tracks a moving target so as to keep it within view at all times, while a control station coordinates the cooperation between cameras based on their spatial relationships. The research also proposes an effective camera deployment method. The proposed system has been tested in a simulated situation, and experimental results demonstrate the effectiveness of target tracking in the proposed system.
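A handover must be triggered before the target actually leaves a camera's view. One simple trigger, assumed here for illustration rather than taken from the thesis, flags the target once it enters an outer margin of the frame, giving the control station time to alert the neighbouring camera.

```python
def needs_handover(x: float, y: float, width: int, height: int,
                   margin: float = 0.1) -> bool:
    """Flag a handover when the tracked target's position (x, y) enters
    the outer `margin` fraction of the frame; the margin is illustrative."""
    mx, my = width * margin, height * margin
    return x < mx or y < my or x > width - mx or y > height - my
```

The spatial relationships between cameras then determine which neighbour the control station hands the target to.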
Hsieh, Chia-Chun, i 謝佳峻. "A Study on Ray-Space Interpolation for Multi-camera System". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/87259885057267163905.
Pełny tekst źródła
National Chiao Tung University
Department of Electronics Engineering & Institute of Electronics
102
Traditional 3D modelling requires a complicated flow: finding matching points, projecting into 3D space, building a point cloud, matching the point cloud, meshing it, and texturing the mesh with images; every step is complicated. What can be changed in this method? In this thesis we use a model called Ray-space, which represents each light ray in the real world by its position and direction, so that every point in the real world maps to exactly one point in Ray-space. First, we find matching points between two cameras. Second, we use the matching points to obtain the Ray-space parameters. Third, we use the epipolar-plane image to complete the Ray-space.
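The core of a ray-space representation is the parameterisation that assigns each ray a unique coordinate. A common simplified 2D version (restricted to the x-z plane, which is assumed here; the thesis's exact parameterisation is not given) indexes a ray by where it crosses a reference plane z = 0 and by its slope.

```python
def ray_space_point(origin, direction):
    """Map a 2D ray (in the x-z plane) to its (u, slope) ray-space
    coordinates: u is where the ray crosses the reference plane z = 0,
    and slope is dx/dz.  A toy parameterisation for illustration."""
    ox, oz = origin
    dx, dz = direction
    t = -oz / dz              # ray parameter where the ray hits z = 0
    u = ox + t * dx
    return u, dx / dz
```

Rays sampled by different cameras fall on lines in this space, which is why epipolar-plane-image interpolation can fill in the rays no camera observed.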