Dissertations on the topic "Real-time vision systems"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 dissertations for research on the topic "Real-time vision systems".
Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these are available in the record's metadata.
Browse dissertations across many disciplines and compile your bibliography correctly.
Benoit, Stephen M. "Monocular optical flow for real-time vision systems." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23862.
Tippetts, Beau J. "Real-Time Stereo Vision for Resource Limited Systems." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/2972.
Arshad, Norhashim Mohd. "Real-time data compression for machine vision measurement systems." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285284.
Pan, Wenbo. "Real-time human face tracking." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55535.pdf.
Nguyen, Dai-Duong. "A vision system based real-time SLAM applications." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS518/document.
SLAM (Simultaneous Localization And Mapping) plays an important role in applications such as autonomous robots, smart vehicles and unmanned aerial vehicles (UAVs). Real-time vision-based SLAM has become a subject of widespread research interest. One way to address the computational complexity of the image-processing algorithms dedicated to SLAM is to perform the high- and/or low-level processing on co-processors, building a System on Chip (SoC). Heterogeneous architectures have demonstrated their potential as SoC candidates in a hardware/software co-design approach. The aim of this thesis is to propose a vision system implementing a SLAM algorithm on a heterogeneous architecture (CPU-GPU or CPU-FPGA). The study verifies whether such heterogeneous architectures are advantageous, which elementary functions and/or operators should be added on chip, and how to integrate image processing and the SLAM kernel on a heterogeneous architecture (i.e., how to map visual SLAM onto a System on Chip). A visual SLAM system has two parts: the front-end (feature extraction, image processing) and the back-end (SLAM kernel). For the front-end, several feature detection and description algorithms were studied, and a new extractor denoted HOOFR (Hessian ORB Overlapped FREAK) was developed, offering a better compromise between precision and processing time than the state of the art. It is based on a modification of the ORB (Oriented FAST and Rotated BRIEF) detector and the bio-inspired FREAK (Fast Retina Keypoint) descriptor, and the improvements were validated on well-known real datasets. For the back-end, the HOOFR-SLAM Stereo algorithm is proposed, which performs simultaneous localization and mapping from images acquired by a stereo camera.
HOOFR SLAM's performance was evaluated on several datasets (KITTI, New College, Málaga, MRT, St Lucia, ...). Afterwards, to reach a real-time system, the algorithmic complexity of HOOFR SLAM and current hardware architectures dedicated to embedded systems were studied, using a methodology based on algorithm complexity and functional-block partitioning. The processing time of each block was analyzed under the constraints of the targeted architectures. HOOFR SLAM was implemented on a massively parallel CPU-GPU architecture, and its performance was evaluated both on a powerful workstation and on embedded architectures. A system-level architecture and a design methodology are proposed for integrating a visual SLAM algorithm on a SoC, highlighting the compromise between versatility, parallelism, processing speed and localization accuracy; a comparison with conventional systems evaluates the defined architecture. Finally, to reduce energy consumption, the front-end (image processing) was implemented on an FPGA-based SoC, with the SLAM kernel running on a CPU. A parallelized architecture was designed using high-level synthesis (HLS) and OpenCL programming, and validated on an Altera Arria 10 SoC. A comparison with state-of-the-art systems showed that the designed architecture offers better performance and a good compromise between power consumption and processing time.
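The Hessian-based detection step that HOOFR substitutes for ORB's corner test can be illustrated with a minimal determinant-of-Hessian response map. This is a simplified sketch under our own assumptions (single scale, finite-difference derivatives, function names ours), not code from the thesis:

```python
import numpy as np

def hessian_response(img):
    """Determinant-of-Hessian response: high at blob-like keypoints."""
    # Second derivatives via repeated central differences.
    Ixx = np.gradient(np.gradient(img, axis=1), axis=1)
    Iyy = np.gradient(np.gradient(img, axis=0), axis=0)
    Ixy = np.gradient(np.gradient(img, axis=1), axis=0)
    return Ixx * Iyy - Ixy ** 2  # det(H)

def detect_keypoints(img, thresh):
    """Return (row, col) positions whose response exceeds the threshold."""
    r = hessian_response(img)
    ys, xs = np.where(r > thresh)
    return list(zip(ys.tolist(), xs.tolist()))
```

A full extractor would add non-maximum suppression, multi-scale search and FREAK description on top of such a response.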
Garner, Harry Douglas Jr. "Development of a real-time vision based absolute orientation sensor." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/17022.
Guo, Guanghao. "Evaluation of FPGA Partial Reconfiguration : for real-time Vision applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279957.
The use of programmable logic resources in Field Programmable Gate Arrays (FPGAs) has grown considerably in recent years due to the complexity of modern algorithms, especially certain computer vision algorithms. Because of this, the hardware resources of the FPGA are sometimes insufficient. Partial reconfiguration gives us a way to solve this problem: it is a technique that reconfigures specific parts of the FPGA at runtime, reducing the demand for programmable logic resources. This master's project aims to design a software framework for partial reconfiguration that can load a set of processing components/algorithms (e.g. object detection, optical flow, Harris corner detection) into the FPGA region without affecting static real-time components such as camera capture, basic image filtering and color conversion, which run continuously. Partial reconfiguration was applied to two different video-processing pipelines, a direct-streaming and a frame-buffer-streaming architecture. The results show that the reconfiguration time is predictable and that partial reconfiguration can be used in real-time applications.
Hiromoto, Masayuki. "LSI design methodology for real-time computer vision on embedded systems." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126476.
Doctor of Informatics (degree No. 15012), Kyoto University, Graduate School of Informatics, Department of Communications and Computer Engineering. Chief examiner: Prof. Takashi Sato; committee: Prof. Hidetoshi Onodera, Prof. Takashi Matsuyama, Assoc. Prof. Hiroyuki Ochi.
Pereira, Pedro André Marques. "Measuring the strain of metallic surfaces in real time through vision systems." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/16447.
Vision systems have already proven to be a useful tool in various fields. The ease of their implementation, allied to their low cost, means that their growth potential is immense. This dissertation proposes an approach to measure strains in metallic surfaces using stereo vision, based on 3D DIC (digital image correlation). That method measures the strain of the surface by dividing it into small sections, called subsets, and iteratively finding the equation that describes each subset's shape variation through time. Calculating the transformation of a subset is, however, very time-consuming. The proposed approach optimizes this calculation by first determining the displacement field and then obtaining the strain field by differentiation. The dissertation also presents experimental data and practical considerations regarding the camera setup and image-equalization algorithms used to obtain better disparity maps. The results were verified experimentally and compared with the results obtained from other software packages.
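The "displacement field first, strain by differentiation" idea above can be sketched with the small-strain tensor computed by numerical differentiation. A minimal illustration, assuming a regular measurement grid (the function and grid layout are our assumptions, not the thesis code):

```python
import numpy as np

def strain_from_displacement(ux, uy, spacing=1.0):
    """Small-strain components from a 2D displacement field.

    ux, uy: displacement components sampled on a regular grid,
    spacing: grid step. Strain is obtained by differentiating the
    measured displacement field, as the dissertation proposes.
    """
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx                   # normal strain along x
    eyy = duy_dy                   # normal strain along y
    exy = 0.5 * (dux_dy + duy_dx)  # engineering shear / 2
    return exx, eyy, exy
```

For a uniform 1% stretch along x (ux = 0.01·x, uy = 0), the computed strain field is constant: exx = 0.01, eyy = exy = 0.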
Katramados, Ioannis. "Real-time object detection using monocular vision for low-cost automotive sensing systems." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/10386.
Björkman, Mårten. "Real-Time Motion and Stereo Cues for Active Visual Observers." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3382.
Nelson, Eric D. "Zoom techniques for achieving scale invariant object tracking in real-time active vision systems /." Online version of the thesis, 2006. https://ritdml.rit.edu/dspace/handle/1850/2620.
Watanabe, Yoko. "Stochastically optimized monocular vision-based navigation and guidance." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22545.
Committee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
Hellsten, Jonas. "Evaluation of tone mapping operators for use in real time environments." Thesis, Linköping University, Department of Science and Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9749.
As real-time visualizations become more realistic, it also becomes more important to simulate the perceptual effects of the human visual system, such as the response to varying illumination, glare, and the differences between photopic and scotopic vision. This thesis evaluates several tone mapping methods that allow a greater dynamic range to be used in real-time visualizations. The methods were implemented in the Avalanche Game Engine and evaluated with a small test group. To increase immersion, several filters aimed at simulating perceptual effects, primarily scotopic vision, were also implemented. The tests showed that two tone mapping methods were suitable for the environment used: the S-curve method gave the best results, while the Mean Value method gave good results and was the simplest and cheapest to implement. The test subjects agreed that the simulation of scotopic vision enhanced immersion. The primary difficulties in this work were the lack of dynamic range in the input images and the challenges of coding real-time graphics on a graphics processing unit.
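The two operator families the evaluation singles out can be sketched in a few lines. This is a generic illustration of a sigmoid S-curve and a mean-value-normalized operator (parameter values and function names are our assumptions, not taken from the thesis):

```python
import numpy as np

def s_curve_tonemap(luminance, sigma=0.5, n=1.0):
    """Sigmoid S-curve operator: L' = L^n / (L^n + sigma^n).

    Maps HDR luminance into [0, 1); sigma is the luminance mapped
    to 0.5, n controls the steepness of the curve.
    """
    Ln = np.power(luminance, n)
    return Ln / (Ln + sigma ** n)

def mean_value_tonemap(luminance):
    """Mean-value operator: normalize by the scene average, then
    compress with a simple rational curve. Cheap to compute, since
    only one global statistic is needed per frame."""
    key = luminance / luminance.mean()
    return key / (1.0 + key)
```

Both keep output strictly below 1, so arbitrarily bright HDR inputs still fit a display's range.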
Entschev, Peter Andreas. "Efficient construction of multi-scale image pyramids for real-time embedded robot vision." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/720.
Interest point detectors, or keypoint detectors, have been of great interest for embedded robot vision for a long time, especially those which provide robustness against geometrical variations, such as rotation, affine transformations and changes in scale. The detection of scale invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach that is present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with suitable properties for object recognition, but at the same time are computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method we present here aims at using computationally less expensive techniques and reusing already processed information in an efficient manner in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters instead of conventional Gaussian filters used in the original SIFT method to calculate multiple scales of an image. Binomial filters have the advantage of being able to be implemented by using fixed-point notation, which is a big advantage for many embedded systems that do not provide native floating-point support. We also reduce the amount of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it in an efficient manner in an SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM ARM Cortex-A8 processor.
SIMD platforms in general are very useful for multimedia applications, where normally it is necessary to perform the same operation over several elements, such as pixels in images, enabling multiple data to be processed with a single instruction of the processor. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide some comparison results regarding the method we propose here and the original SIFT approach, including performance regarding execution time and repeatability of detected keypoints. With a straightforward implementation (without the use of the SIMD platform), we show that our method takes approximately 1/4 of the time taken to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found with the original method. With a complete fixed-point approach (including vectorization within the SIMD platform) we show that repeatability reaches up to 92% of the original SIFT keypoints while reducing the processing time to less than 3%.
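The fixed-point binomial filtering described above can be sketched with the 5-tap kernel [1 4 6 4 1]/16, whose integer weights need no floating-point unit. A minimal numpy illustration of the idea (plain integer arithmetic stands in for the thesis' ARM Neon implementation; function names are ours):

```python
import numpy as np

def binomial_blur_fixed_point(img_u16):
    """One separable pass of the binomial kernel [1 4 6 4 1] / 16,
    computed entirely in integer arithmetic (shift instead of divide)."""
    k = np.array([1, 4, 6, 4, 1], dtype=np.uint32)

    def conv1d(a):
        pad = np.pad(a, ((0, 0), (2, 2)), mode="edge").astype(np.uint32)
        out = sum(k[i] * pad[:, i:i + a.shape[1]] for i in range(5))
        return (out + 8) >> 4  # divide by 16 with rounding

    blurred = conv1d(img_u16.astype(np.uint32))      # horizontal pass
    return conv1d(blurred.T).T.astype(np.uint16)     # vertical pass

def build_pyramid(img_u16, levels=3):
    """Blur, then drop every other pixel to form the next level --
    reusing the already-filtered scale, as the abstract describes."""
    pyr = [img_u16]
    for _ in range(levels - 1):
        pyr.append(binomial_blur_fixed_point(pyr[-1])[::2, ::2])
    return pyr
```

Repeated binomial passes converge to a Gaussian, which is why the kernel can replace SIFT's Gaussian filters at much lower cost on integer-only hardware.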
Cedernaes, Erasmus. "Runway detection in LWIR video : Real time image processing and presentation of sensor data." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300690.
Forsthoefel, Dana. "Leap segmentation in mobile image and video analysis." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50285.
Skoglund, Johan. "Robust Real-Time Estimation of Region Displacements in Video Sequences." Licentiate thesis, Linköping : Department of Electrical Engineering, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8006.
Mahammad, Sarfaraz Ahmad, and Vendrapu Sushma. "Raspberry Pi Based Vision System for Foreign Object Debris (FOD) Detection." Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20198.
Pettersson, Johan. "Real-time Object Recognition on a GPU." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10238.
Shape-Based matching (SBM) is a known method for 2D object recognition that is rather robust against illumination variations, noise, clutter and partial occlusion.
The objects to be recognized can be translated, rotated and scaled.
The translation of an object is determined by evaluating a similarity measure for all possible positions (similar to cross correlation).
The similarity measure is based on dot products between normalized gradient directions in edges.
Rotation and scale is determined by evaluating all possible combinations, spanning a huge search space.
A resolution pyramid is used to form a heuristic for the search that then gains real-time performance.
For SBM, a model consisting of normalized edge gradient directions is constructed for every possible combination of rotation and scale.
We have avoided this by using (bilinear) interpolation in the search gradient map, which greatly reduces the amount of storage required.
SBM is highly parallelizable by nature and with our suggested improvements it becomes much suited for running on a GPU.
This has been implemented and tested, and the results clearly outperform those of our reference CPU implementation (by factors in the hundreds).
It is also very scalable and will readily benefit from future devices.
An extensive evaluation material and tools for evaluating object recognition algorithms have been developed and the implementation is evaluated and compared to two commercial 2D object recognition solutions.
The results show that the method is very powerful when dealing with the distortions listed above and competes well with its opponents.
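The similarity measure described above (dot products between normalized gradient directions, evaluated at every candidate translation) can be sketched as follows. The data layout is our assumption: the image's unit gradient directions are stored per pixel, and the model is a list of edge points with their unit directions:

```python
import numpy as np

def sbm_similarity(model_dirs, image_dirs, model_pts, offset):
    """SBM score at one translation: the mean dot product between the
    model's unit edge gradients and the image's, at the model points
    shifted by `offset` (row, col)."""
    dy, dx = offset
    score = 0.0
    for (py, px), (my, mx) in zip(model_pts, model_dirs):
        iy, ix = image_dirs[py + dy, px + dx]
        score += my * iy + mx * ix  # dot product of unit gradients
    return score / len(model_pts)

def best_translation(model_dirs, image_dirs, model_pts, search):
    """Exhaustively evaluate all candidate offsets, cross-correlation
    style, and return the best-scoring one. A resolution pyramid, as
    in the thesis, would prune this search for real-time use."""
    return max(search, key=lambda o: sbm_similarity(
        model_dirs, image_dirs, model_pts, o))
```

A perfect match scores 1.0; because only gradient directions (not magnitudes) enter the sum, the score is insensitive to illumination changes.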
Launila, Andreas. "Real-Time Head Pose Estimation in Low-Resolution Football Footage." Thesis, KTH, Computer Vision and Active Perception, CVAP, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12061.
This report examines the problem of real-time head pose estimation in low-resolution football footage. A method is presented for inferring the head pose using a combination of footage and knowledge of the locations of the football and players. An ensemble of randomized ferns is compared with a support vector machine for processing the footage, while a support vector machine performs pattern recognition on the location data. Combining the two sources of information outperforms either in isolation. The location of the football turns out to be an important piece of information.
Capturing and Visualizing Large scale Human Action (ACTVIS)
Schennings, Jacob. "Deep Convolutional Neural Networks for Real-Time Single Frame Monocular Depth Estimation." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-336923.
Sällqvist, Jessica. "Real-time 3D Semantic Segmentation of Timber Loads with Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148862.
Mohan, Deepak. "Real-time detection of grip length deviation for fastening operations: a Mahalanobis-Taguchi system (MTS) based approach." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/DeepakMohanThesisFinal_09007dcc80410b1d.pdf.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed October 24, 2007). Includes bibliographical references.
Algers, Björn. "Stereo Camera Calibration Accuracy in Real-time Car Angles Estimation for Vision Driver Assistance and Autonomous Driving." Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149443.
The automotive safety company Veoneer develops advanced camera systems for driver assistance, but knowledge of the absolute accuracy of its dynamic calibration algorithms, which estimate the vehicle's orientation, is limited. In this thesis, a new measurement system is developed and tested for collecting reference data on a vehicle's orientation while it is in motion, specifically its pitch and roll angles. The focus has been on estimating how the uncertainty of the measurement system is affected by errors introduced during its construction, and on assessing its potential as a viable option for collecting reference data to evaluate the performance of the algorithms. The system consisted of three laser distance sensors mounted on the vehicle body. A series of measurement runs was performed with different disturbances, introduced by driving along a stretch of road in Linköping with weights loaded in the vehicle. The collected reference data were compared with data from the camera system; the bias of the computed angles was estimated, and the dynamic properties of the camera system's algorithms were evaluated. The results showed that the uncertainty of the measurement system exceeded 0.1 degrees for both the pitch and roll angles, so no conclusions about possible bias in the algorithms could be drawn, as systematic errors had arisen in the measurement results.
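The geometric core of such a rig, recovering pitch and roll from three downward-facing laser distance sensors, can be sketched by fitting the ground plane through the three measured points. The sensor layout and sign conventions below are our illustrative assumptions, not the actual rig from the thesis:

```python
import math

def pitch_roll_from_lasers(p1, p2, p3):
    """Pitch and roll (radians) of a chassis from three downward laser
    range sensors. Each argument is (x, y, d): the sensor's mounting
    position in the car frame (x forward, y left) and its measured
    distance to the road. The points (x, y, -d) define the ground
    plane; pitch and roll follow from its normal vector."""
    pts = [(x, y, -d) for x, y, d in (p1, p2, p3)]
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = pts
    # Plane normal = cross product of two in-plane vectors.
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    if nz < 0:  # keep the normal pointing up
        nx, ny, nz = -nx, -ny, -nz
    pitch = math.atan2(nx, nz)  # tilt about the lateral (y) axis
    roll = math.atan2(ny, nz)   # tilt about the forward (x) axis
    return pitch, roll
```

With equal distances the vehicle is level (both angles zero); a shorter reading at the front sensor tilts the recovered plane and yields a nonzero pitch.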
Nilsson, Linus. "Quality and real-time performance assessment of color-correction methods : A comparison between histogram-based prefiltering and global color transfer." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33877.
Ding, Yuhua. "An integrated approach to real-time multisensory inspection with an application to food processing." Diss., Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-11242003-180728/unrestricted/dingyuhu200312.pdf.
Vachtsevanos, George J., Committee Chair; Dorrity, J. Lewis, Committee Member; Egerstedt, Magnus, Committee Member; Heck-Ferri, Bonnie S., Committee Co-Chair; Williams, Douglas B., Committee Member; Yezzi, Anthony J., Committee Member. Includes bibliography.
Alberts, Stefan Francois. "Real-time Software Hand Pose Recognition using Single View Depth Images." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86442.
ENGLISH ABSTRACT: The fairly recent introduction of low-cost depth sensors such as Microsoft's Xbox Kinect has encouraged a large amount of research on the use of depth sensors for many common Computer Vision problems. Depth images are advantageous over normal colour images because of how easily objects in a scene can be segregated in real-time. Microsoft used the depth images from the Kinect to successfully separate multiple users and track various larger body joints, but has difficulty tracking smaller joints such as those of the fingers. This is a result of the low resolution and noisy nature of the depth images produced by the Kinect. The objective of this project is to use the depth images produced by the Kinect to remotely track the user's hands and to recognise the static hand poses in real-time. Such a system would make it possible to control an electronic device from a distance without the use of a remote control. It can be used to control computer systems during computer aided presentations, translate sign language and to provide more hygienic control devices in clean rooms such as operating theatres and electronic laboratories. The proposed system uses the open-source OpenNI framework to retrieve the depth images from the Kinect and to track the user's hands. Random Decision Forests are trained using computer generated depth images of various hand poses and used to classify the hand regions from a depth image. The region images are processed using a Mean-Shift based joint estimator to find the 3D joint coordinates. These coordinates are finally used to classify the static hand pose using a Support Vector Machine trained using the libSVM library. The system achieves a final accuracy of 95.61% when tested against synthetic data and 81.35% when tested against real world data.
Modi, Kalpesh Prakash. "Vision application of human robot interaction : development of a ping pong playing robotic arm /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/943.
Ärleryd, Sebastian. "Realtime Virtual 3D Image of Kidney Using Pre-Operative CT Image for Geometry and Realtime US-Image for Tracking." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234991.
Somani, Nikhil. "Constraint-based Approaches for Robotic Systems: from Computer Vision to Real-Time Robot Control." Doctoral thesis (supervisor: Alois C. Knoll; reviewers: Torsten Kröger, Alois C. Knoll). München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1172414947/34.
Julin, Fredrik. "Vision based facial emotion detection using deep convolutional neural networks." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42622.
Mercado-Ravell, Diego Alberto. "Autonomous navigation and teleoperation of unmanned aerial vehicles using monocular vision." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2239/document.
The present document addresses, theoretically and experimentally, the most relevant topics for Unmanned Aerial Vehicles (UAVs) in autonomous and semi-autonomous navigation. In keeping with the multidisciplinary nature of the studied problems, a wide range of techniques and theories are covered from the fields of robotics, automatic control, computer science, computer vision and embedded systems, among others. As part of this thesis, two experimental platforms were developed to explore and evaluate theories and techniques of interest for autonomous navigation. The first prototype is a quadrotor specially designed for outdoor applications, fully developed in our lab. The second testbed is composed of an inexpensive commercial AR.Drone quadrotor, wirelessly connected to a ground station equipped with the Robot Operating System (ROS), and is intended to test computer vision algorithms and automatic control strategies in an easy, fast and safe way. In addition, this work studies data fusion techniques to enhance the UAV pose estimation provided by commonly used sensors. Two estimators are evaluated in particular, an Extended Kalman Filter (EKF) and a Particle Filter (PF); both are adapted to the system under consideration, taking into account noisy measurements of the UAV position, velocity and orientation. Simulations show the performance of the developed algorithms with noise added from real GPS (Global Positioning System) measurements. Safe and accurate navigation for both autonomous trajectory tracking and haptic teleoperation of quadrotors is presented as well. A second-order Sliding Mode (2-SM) control algorithm is used to track trajectories while avoiding frontal collisions in autonomous flight.
The time-scale separation of the translational and rotational dynamics allows us to design position controllers by giving desired references in the roll and pitch angles, which is suitable for quadrotors equipped with an internal attitude controller. The 2-SM control adds robustness to the closed-loop system, and a Lyapunov-based analysis proves its stability. Vision algorithms estimate the pose of the vehicle using only monocular SLAM (Simultaneous Localization and Mapping) fused with inertial measurements. Distances to potential obstacles are detected and computed from the sparse depth map produced by the vision algorithm. For teleoperation tests, a haptic device feeds back information to the pilot about possible collisions by exerting opposing forces. The proposed strategies were successfully tested in real-time experiments using a low-cost commercial quadrotor. This work also presents the conception and development of a Micro Aerial Vehicle (MAV) able to safely interact with human users by following them autonomously. Once a face is detected by a Haar cascade classifier, it is tracked with a Kalman Filter (KF), and an estimate of the relative position with respect to the face is obtained at a high rate. A linear Proportional Derivative (PD) controller regulates the UAV's position to keep a constant distance to the face, also employing the extra information available from the UAV's embedded sensors. Several experiments were carried out under different conditions, showing good performance even in disadvantageous scenarios such as outdoor flight, and robustness against illumination changes, wind perturbations, image noise and the presence of several faces in the same image. Finally, this thesis deals with the problem of implementing a safe and fast transportation system using a quadrotor UAV with a cable-suspended load.
The objective consists in transporting the load from one place to another, quickly and with minimum swing in the cable.
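The distance-regulation loop of the face-following MAV described above is a plain linear PD law. A minimal sketch with illustrative gains (the gains, time step and signal names are our assumptions, not values from the thesis):

```python
def pd_distance_controller(target, measured, prev_error, dt, kp=0.8, kd=0.2):
    """One step of a PD law regulating the UAV's distance to the face.

    Returns the commanded forward velocity (positive = move toward
    the face when too far away) and the current error, to be carried
    into the next step for the derivative term.
    """
    error = measured - target
    d_error = (error - prev_error) / dt
    command = kp * error + kd * d_error
    return command, error
```

Closing the loop on a simple kinematic model (distance shrinks in proportion to the commanded velocity) drives the UAV from 3 m to the 1.5 m setpoint without sustained oscillation.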
Lo, Haw-Jing. "Real-time stereoscopic vision system." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/14911.
Chambers, Simon Paul. "TIPS : a transputer based real-time vision system." Thesis, University of Liverpool, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333629.
Clare, Anthony Joseph. "Real-time modelling and sensor fusion for a synthetic vision system." Thesis, University of Sheffield, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434515.
Chen, Sicheng. "A single-chip real-Time range finder." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/553.
Lu, Qiang. "A Real-Time System for Color Sorting Edge-Glued Panel Parts." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/35881.
Master of Science
Gornall, Matthew James. "A Real-time Computer Vision System for tracking the face and hands /." Leeds : University of Leeds, School of Computer Studies, 2008. http://www.comp.leeds.ac.uk/fyproj/reports/0708/Gornall.pdf.
Rao, Niankun. "A novel high-speed stereo-vision system for real-time position sensing." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/39637.
Alhamwi, Ali. "Co-design hardware/software of real time vision system on FPGA for obstacle detection." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30342/document.
Obstacle detection, localization and occupancy-map reconstruction are essential abilities for a mobile robot to navigate in an environment. Solutions based on passive monocular vision, such as simultaneous localization and mapping (SLAM) or optical flow (OF), require intensive computation, and systems based on these methods often rely on over-sized computing resources to meet real-time constraints. Inverse perspective mapping allows obstacles to be detected at a low computational cost under the hypothesis of a flat ground observed during motion. It is thus possible to build an occupancy grid map by integrating obstacle detections along the sensor's trajectory. In this work we propose a hardware/software system for obstacle detection, localization and 2D occupancy-map reconstruction in real time. The proposed system uses an FPGA-based design for vision and proprioceptive sensors for localization; fusing this information allows the construction of a simple model of the sensor's surroundings. The resulting architecture is a low-cost, low-latency, high-throughput and low-power system.
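As a rough illustration of the approach summarised in this abstract (not the thesis implementation): under the flat-ground hypothesis, a homography maps image pixels onto the ground plane (inverse perspective mapping), and obstacle detections can then be accumulated into a 2D occupancy grid with a log-odds update. The homography, grid size, resolution and update weights below are all assumed values.

```python
import numpy as np

def ipm_to_ground(H, u, v):
    """Project image pixel (u, v) onto the ground plane via homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # metric (x, y) on the ground

class OccupancyGrid:
    def __init__(self, size=50, resolution=0.1):
        self.logodds = np.zeros((size, size))  # 0 log-odds = unknown (p = 0.5)
        self.res = resolution                  # metres per cell

    def update(self, x, y, occupied, l_occ=0.85, l_free=-0.4):
        # Round to the nearest cell index and accumulate evidence.
        i, j = int(round(x / self.res)), int(round(y / self.res))
        if 0 <= i < self.logodds.shape[0] and 0 <= j < self.logodds.shape[1]:
            self.logodds[i, j] += l_occ if occupied else l_free

    def probability(self, i, j):
        # Convert accumulated log-odds back to an occupancy probability.
        return 1.0 / (1.0 + np.exp(-self.logodds[i, j]))
```

The cell probability saturates toward 1 as consistent detections accumulate over the trajectory, which is what makes the map robust to single-frame detection errors.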
Lu, Siliang. "Dynamic HVAC Operations Based on Occupancy Patterns With Real-Time Vision-Based System." Research Showcase @ CMU, 2017. http://repository.cmu.edu/theses/132.
McCarthy, Cheryl. "Automatic non-destructive dimensional measurement of cotton plants in real-time by machine vision." University of Southern Queensland, Faculty of Engineering and Surveying, 2009. http://eprints.usq.edu.au/archive/00006228/.
Schofield, Nicholas Roger. "A low cost, real time robot vision system with a cluster-based learning capacity." Thesis, Durham University, 1988. http://etheses.dur.ac.uk/947/.
Shen, Anqi. "A real time 3D surface measurement system using projected line patterns." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5399. SHRIS (Shanghai Ro-Intelligent System Co., Ltd.).
Walker, Ryan Christopher Gareth. "Poi Poi Revolution: A real-time feedback training system for objectmanipulation." Thesis, University of Canterbury. Human Interface Technology Laboratory, 2013. http://hdl.handle.net/10092/8039.
Naoulou, Abdelelah. "Architectures pour la stéréovision passive dense temps réel : application à la stéréo-endoscopie." PhD thesis, Université Paul Sabatier - Toulouse III, 2006. http://tel.archives-ouvertes.fr/tel-00110093.
Xing, Xiaoliang. "Etude et realisation d'un systeme temps reel de vision par ordinateur." Université Louis Pasteur (Strasbourg) (1971-2008), 1987. http://www.theses.fr/1987STR13078.
Campbell, Jacob. "Characteristics of a real-time digital terrain database Integrity Monitor for a Synthetic Vision System." Ohio University / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1177441410.
Straumann, Hugo M. "The development of a software package for low cost machine vision system for real time applications." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183378665.