Doctoral dissertations on the topic "Visual Odometry"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations for your research on the topic "Visual Odometry".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a ".pdf" and read its abstract online, whenever such details are provided in the metadata.
Browse doctoral dissertations on a wide variety of disciplines and organize your bibliography correctly.
Pereira, Fabio Irigon. "High precision monocular visual odometry". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision with many applications: robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent tasks: estimating the camera position and orientation at the moment each image was taken, and estimating the 3D structure of the scene. This work focuses on computer vision techniques used to estimate the trajectory of a camera mounted on a vehicle, a problem known as visual odometry. To provide an objective measure of estimation efficiency and to compare the results against state-of-the-art visual odometry work, a popular high-precision dataset was selected and used. In the course of this work, new techniques are proposed for image feature tracking, camera pose estimation, 3D point position computation, and scale recovery. The results achieved outperform the best-ranked results on the chosen dataset.
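The 3D point position computation this abstract mentions is classically done by linear (DLT) triangulation from two views; the following is a minimal, self-contained numpy sketch (the camera matrices and the point are synthetic examples, not material from the thesis):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    Each measurement contributes two rows of A, derived from the
    cross-product constraint x × (P X) = 0; the homogeneous 3D point
    is the null vector of A, recovered via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two synthetic cameras: identity pose, and a 1 m baseline along x.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
```

With noiseless projections the DLT recovers the point exactly up to numerical precision; real pipelines follow this with non-linear refinement.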
Masson, Clément. "Direction estimation using visual odometry". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.
This master's thesis addresses the problem of measuring the directions of objects from a fixed observation point. A new method is proposed, based on a single rotating camera that requires the directions of only two (or more) landmarks. In a first phase, multi-view geometry is used to estimate the camera rotations and the directions of key elements from a set of overlapping images. In a second phase, the direction of any object can then be estimated by resectioning the camera associated with an image showing that object. A detailed description of the algorithmic chain is given, together with test results on both synthetic data and real images taken with an infrared camera.
Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning". Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.
Venturelli Cavalheiro, Guilherme. "Fusing visual odometry and depth completion". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
by Guilherme Venturelli Cavalheiro.
S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.
Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.
Monocular cameras are frequently used for motion estimation of unmanned aerial vehicles. With the growing interest in autonomous vehicles, the use of monocular cameras in ground vehicles has increased as well. This is especially advantageous in situations where Global Navigation Satellite System (GNSS) positioning is unreliable, for example in open-pit mines. Most systems relying on a monocular camera struggle to estimate scale, and this estimation becomes even harder given the higher speeds and faster motions of a ground vehicle. The aim of this thesis is to estimate scale from monocular camera images, complemented with data from inertial sensors. It is shown that simultaneous estimation of a vehicle's position and of the scale is possible by fusing visual and inertial sensor data with an Extended Kalman Filter (EKF). The convergence of the estimate depends on several factors, including initialization errors. An accurate estimate of the scale in turn enables an accurate estimate of the position. This allows vehicle localization in the absence of GNSS and thus provides added redundancy.
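The core idea of recovering metric scale by fusing visual and inertial data can be illustrated with a one-state Kalman filter over per-frame displacement magnitudes. This is a deliberately simplified sketch (the displacement pairs, noise levels, and measurement model are invented for the example), not the EKF formulation used in the thesis:

```python
import random

def estimate_scale(pairs, q=1e-6, r=0.04):
    """Scalar Kalman filter for monocular scale.

    pairs: (d_visual, d_inertial) per-frame displacement magnitudes,
    where d_inertial ≈ scale * d_visual.  The state is the scale itself;
    q and r are process and measurement noise variances (illustrative).
    """
    scale, P = 1.0, 1.0          # initial guess and its variance
    for d_v, d_i in pairs:
        P += q                   # predict: scale assumed constant
        H = d_v                  # measurement model: d_i = scale * d_v
        S = H * P * H + r
        K = P * H / S            # Kalman gain
        scale += K * (d_i - H * scale)
        P *= (1 - K * H)
    return scale

random.seed(0)
true_scale = 2.5
pairs = []
for _ in range(500):
    d_v = random.uniform(0.5, 1.5)
    pairs.append((d_v, true_scale * d_v + random.gauss(0, 0.2)))

scale = estimate_scale(pairs)  # converges toward true_scale
```

A full visual-inertial EKF estimates pose, velocity, and IMU biases jointly; collapsing everything to a single scale state is only meant to show why the inertial measurements make the scale observable.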
Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.
Guizilini, Vitor Campanholo. "Non-Parametric Learning for Monocular Visual Odometry". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9903.
Wuthrich, Tori (Tori Lee). "Learning visual odometry primitives for computationally constrained platforms". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly using techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer-vision, machine-learning-based localization method was proposed by researchers investigating the automation of medical procedures; we believed the method to also be promising for robots with low size, weight, and power (SWAP) budgets. Unlike traditional odometry methods, a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and propose the next steps for work in this area.
by Tori Wuthrich.
S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.
Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Images captured with a Kinect-like RGB-D camera are analyzed and an approximate position of the MAV is computed. The hope is to find a positioning solution for GPS-denied environments, with this work focusing on indoor office settings. The MAV is flown manually while RGB-D images are captured, which are then registered using AICK. The results are analyzed to conclude whether AICK is a feasible method for achieving autonomous flight based on the estimated position. The results show the potential for a working autonomous MAV in GPS-denied environments, but in some of the tested environments AICK currently performs poorly. The lack of visual features on, for example, a white wall introduces problems and uncertainty in the positioning, and matters become even more troublesome when the distance to the surroundings exceeds the range of the RGB-D camera. With continued work on these weaknesses, a robust autonomous MAV using AICK for positioning is feasible.
Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.
Myriokefalitakis, Panteleimon. "Real-time conversion of monodepth visual odometry enhanced network". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288488.
This thesis belongs to the field of self-supervised monocular depth estimation and constitutes a conversion of the work done in [1]. The idea is to take the computationally expensive model of [1] as the baseline and try to derive a lightweight model from it. The present work proposes a network suitable for deployment on embedded devices such as the NVIDIA Jetson TX2, where short inference time, a small memory footprint, and low power consumption matter most. In other words, if these requirements are not met, then no matter how high the accuracy, the model cannot run on embedded processors, and small mobile platforms such as drones, delivery robots, etc. cannot exploit the benefits of depth estimation. The proposed network has roughly 29.7 times fewer parameters than the baseline model [1] and uses only 10.6 MB for a forward pass, as opposed to the 227 MB used by the network in [1]. Consequently, the proposed model can run on the GPU of embedded devices. Finally, it performs inference at promising speed on standard CPUs while providing accuracy comparable to or higher than other work.
Chermak, Lounis. "Standalone and embedded stereo visual odometry based navigation solution". Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9319.
Gui, Jianjun. "Direct visual and inertial odometry for monocular mobile platforms". Thesis, University of Essex, 2018. http://repository.essex.ac.uk/21726/.
Warren, Michael David. "Long-range stereo visual odometry for unmanned aerial vehicles". Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/80107/1/Michael_Warren_Thesis.pdf.
Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.
Rather than generating images constantly and synchronously, neuromorphic vision sensors (also known as event-based cameras) permit each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not suffer from the problems of conventional frame-based cameras, such as image artifacts and motion blur. Furthermore, they provide lossless data compression, higher temporal resolution, and higher dynamic range. Event-based cameras can therefore conveniently replace frame-based cameras in robotic applications that require high maneuverability under varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness-constancy condition, we discuss the feasibility of building a visual odometry system based on optical flow estimation. We develop our approach on the assumption that event-based cameras provide edge-like information about the objects in the scene, and apply a line detection algorithm for data reduction. Line tracking allows us to gain more time for computation and provides a better representation of the environment than feature points. This thesis presents not only an approach to event-based visual-inertial odometry but also event-based algorithms that can be used stand-alone or integrated into other approaches as needed.
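The idea of reducing edge-like event data to lines can be illustrated by fitting a dominant direction to a spatial window of events with PCA; this is an illustrative sketch on synthetic events, not the line-detection algorithm of the thesis:

```python
import numpy as np

def fit_event_line(events):
    """Fit a 2D line to a batch of events by PCA.

    events: (N, 2) array of pixel coordinates of events that fired in a
    short time window.  Returns (centroid, direction), where the
    direction is the principal eigenvector of the scatter matrix.
    """
    c = events.mean(axis=0)
    cov = np.cov((events - c).T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return c, v[:, np.argmax(w)]        # dominant direction

# Synthetic events scattered around a line with direction (0.8, 0.6).
rng = np.random.default_rng(1)
t = rng.uniform(0, 50, 400)
line = np.stack([10 + t * 0.8, 20 + t * 0.6], axis=1)
events = line + rng.normal(0, 0.3, line.shape)

c, d = fit_event_line(events)
```

A real pipeline would first cluster events spatially (and in time) so that each batch contains a single edge; PCA then gives the line support for tracking.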
Frey, Kristoffer M. (Kristoffer Martin). "Sparsity and computation reduction for high-rate visual-inertial odometry". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113745.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-151).
The navigation problem for mobile robots operating in unknown environments can be posed as a subset of Simultaneous Localization and Mapping (SLAM). For computationally-constrained systems, maintaining and promoting system sparsity is key to achieving the high-rate solutions required for agile trajectory tracking. This thesis focuses on the computation involved in the elimination step of optimization, showing it to be a function of the corresponding graph structure. This observation directly motivates the search for measurement selection techniques to promote sparse structure and reduce computation. While many sophisticated selection techniques exist in the literature, relatively little attention has been paid to the simple yet ubiquitous heuristic of decimation. This thesis shows that decimation produces graphs with an inherently sparse, partitioned super-structure. Furthermore, it is shown analytically for single-landmark graphs that the even spacing of observations characteristic of decimation is near optimal in a weighted number of spanning trees sense. Recent results in the SLAM community suggest that maximizing this connectivity metric corresponds to good information-theoretic performance. Simulation results confirm that decimation-style strategies perform as well or better than sophisticated policies which require significant computation to execute. Given that decimation consumes negligible computation to evaluate, its performance demonstrated here makes decimation a formidable measurement selection strategy for high-rate, realtime SLAM solutions. Finally, the SAMWISE visual-inertial estimator is described, and thorough experimental results demonstrate its robustness in a variety of scenarios, particularly to the challenges prescribed by the DARPA Fast Lightweight Autonomy program.
This thesis was supported by the Defense Advanced Research Projects Agency (DARPA) under the Fast Lightweight Autonomy program.
by Kristoffer M. Frey.
S.M.
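The claim above, that the even spacing characteristic of decimation is near optimal in a number-of-spanning-trees sense, can be checked on a toy single-landmark graph with Kirchhoff's matrix-tree theorem. The graph below is a made-up example (unweighted, for simplicity), not one from the thesis:

```python
import numpy as np

def spanning_tree_count(n_nodes, edges):
    """Count spanning trees via the matrix-tree (Kirchhoff) theorem:
    the count equals any cofactor of the graph Laplacian."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return round(np.linalg.det(L[1:, 1:]))  # delete row/col 0, take det

# Single-landmark toy graph: a 6-pose odometry chain (nodes 0..5)
# plus a landmark (node 6) observed from two selected poses.
chain = [(i, i + 1) for i in range(5)]
even      = spanning_tree_count(7, chain + [(0, 6), (5, 6)])  # spread out
clustered = spanning_tree_count(7, chain + [(0, 6), (1, 6)])  # adjacent
```

Spreading the two observations to the ends of the chain turns the whole graph into a 7-cycle (7 spanning trees), while clustering them yields a 3-cycle with a tail (3 spanning trees), so the evenly spaced selection is better connected in exactly the metric the thesis analyzes.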
Verpers, Felix. "Improving a stereo-based visual odometry prototype with global optimization". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-383268.
Pereira, Ana Rita. "Visual odometry: comparing a stereo and a multi-camera approach". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-11092017-095254/.
The goal of this master's research is to implement, analyze, and compare visual odometry approaches, as a contribution to the localization of an autonomous vehicle. The stereo visual odometry algorithm Libviso2 is compared with a proposed method that uses an omnidirectional multi-camera system. In this method, monocular visual odometry is computed for each camera individually, and the best estimate is then selected through a voting process involving all cameras. Because the vision system is omnidirectional, the part of the surroundings richest in features can always be used to estimate the relative pose of the vehicle. The experiments use Bumblebee XB3 and Ladybug 2 cameras mounted on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method improves on the individual monocular estimates; stereo visual odometry, however, provides more accurate results.
Aksjonova, Jevgenija. "LDD: Learned Detector and Descriptor of Points for Visual Odometry". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233571.
Simultaneous localization and mapping is an important problem in robotics that can be solved with the help of visual odometry -- the process of estimating ego-motion from subsequent camera images. Visual odometry systems in turn rely on point matches between different frames. This work presents a novel method for matching keypoints by applying neural networks for point detection and description. Traditionally, point detectors are used to select good keypoints (such as corners), and these keypoints are then used for feature matching. In this work, a descriptor is instead trained to match the points, after which a detector is trained to predict which points are most likely to be matched correctly by the descriptor. This information is then used to select good keypoints. The results of the project show that the method can produce more accurate results than other model-based methods.
Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as restrictive environments that weaken the signal. This problem can be addressed by integrating the GPS data with additional information from other sensors, and vehicles today are commonly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a low-cost data-fusion solution to localization improvement. Given the published work on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by exploiting the lane-distribution information provided by OSM, while the longitudinal positioning is optimized by curve matching between the VO trajectory trail and segmented roads. To assess the robustness of the system, the method was validated on KITTI datasets with different common types of GPS noise, and several published VO methods were used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy improves significantly, especially for the longitudinal error with the curve-matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input.
The research on the employability of the VO trajectory is extended to a deterministic task, lane-change detection, to assist the routing service with lane-level directions in navigation. Lane-change detection was performed with CUSUM and a curve-fitting technique, resulting in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. Given the results obtained with the proposed low-cost data fusion for localization, utilizing the VO trajectory together with information from OSM shows bright prospects for improving performance. Besides providing the VO trajectory, the camera mounted on the vehicle can also serve other image-processing applications that complement the system. This research will continue to develop, with future work outlined in the last chapter of this thesis.
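A one-sided CUSUM change detector of the kind used for the lane-change detection above can be sketched in a few lines (the signal, drift, and threshold values are illustrative, not the thesis's tuning):

```python
def cusum(signal, drift=0.1, threshold=2.0):
    """One-sided CUSUM change detector.

    Accumulates deviations above `drift`; flags the first index where
    the running sum exceeds `threshold`, else returns -1.
    """
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + x - drift)   # reset at zero, accumulate excess
        if s > threshold:
            return i
    return -1

# Lateral-offset residual: near zero while lane keeping, then a
# sustained 0.3 m/sample deviation during a simulated lane change.
lateral = [0.0] * 50 + [0.3] * 20
alarm = cusum(lateral)   # alarm fires a few samples into the maneuver
```

The detection delay traded against false alarms is controlled by `drift` and `threshold`; a symmetric two-sided detector (one accumulator per direction) distinguishes left from right lane changes.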
Nishitani, André Toshio Nogueira. "Localização baseada em odometria visual". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17082016-095838/.
The localization problem consists of estimating the position of a robot with respect to some external reference, and it is an essential part of the navigation systems of robots and autonomous vehicles. Localization based on visual odometry, compared with encoder-based odometry, stands out in the estimation of rotation and direction of movement. This kind of approach is an interesting choice for vehicle control systems in urban environments, where visual information is mandatory for extracting the semantic information contained in street signs and markings. In this context, this project proposes the development of a visual odometry system based on structure from motion, using visual information acquired from a monocular camera to estimate the vehicle pose. The absolute-scale problem, inherent in the use of monocular cameras, is addressed using some previously known information regarding the metric relation between image points and points lying on a common world plane.
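The kind of metric relation referred to above can be illustrated with the classic ground-plane constraint: if the camera's height above a flat ground is known, a pixel assumed to lie on the ground back-projects to a fully metric 3D point. A minimal sketch (the intrinsics and geometry are synthetic, and the camera is assumed level with the road):

```python
import numpy as np

def ground_point(K_inv, pixel, cam_height):
    """Metric 3D position of a pixel assumed to lie on the ground plane.

    The back-projected ray d = K^-1 [u, v, 1]^T is scaled so that its
    vertical component (y, pointing down) equals the camera height.
    """
    d = K_inv @ np.array([pixel[0], pixel[1], 1.0])
    if d[1] <= 0:
        raise ValueError("pixel is above the horizon")
    return d * (cam_height / d[1])

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
K_inv = np.linalg.inv(K)

# A ground point 10 m ahead, seen by a camera 1.5 m above the road:
X_true = np.array([0.0, 1.5, 10.0])
uvw = K @ X_true
pixel = uvw[:2] / uvw[2]

X = ground_point(K_inv, pixel, 1.5)   # recovers the metric point
```

Scaling each frame-to-frame translation so that triangulated ground points satisfy this constraint is one common way to fix the monocular scale.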
Chiodini, Sebastiano. "Visual odometry and vision system measurements based algorithms for rover navigation". Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3425347.
Martian rovers and, more generally, robots for the exploration of asteroids and small celestial bodies require a high level of autonomy. Control by an operator must be minimized in order to reduce traverse times, optimize the resources allocated to telecommunications, and maximize the scientific output of the mission. Knowing the target position and accounting for the vehicle dynamics, control algorithms provide the appropriate inputs to the actuators. Path-planning algorithms, exploiting three-dimensional models of the surrounding terrain, avoid obstacles with wide safety margins. Moreover, rovers for the sample-and-return missions planned for the coming years must demonstrate the ability to return to an already visited site to acquire scientific data or to bring collected samples back to an ascent vehicle. In all these tasks, motion estimation is fundamental, and motion estimation on other planets has its peculiarities: wheel odometry suffers from high uncertainty due to wheel slippage on sandy or slippery surfaces; inertial navigation systems, given the slow dynamics of a rover, exhibit drifts that are intolerable for accurate attitude estimation; and no global positioning system analogous to GPS is available. Camera-based motion estimation systems have proven reliable and accurate since NASA's MER missions. One such system is stereo visual odometry, in which motion is estimated by computing the roto-translation between two point clouds measured at two successive instants; each point cloud is generated by triangulating salient points present in the two images.
Simultaneous Localization and Mapping (SLAM) techniques give a rover the ability to build a map of its surroundings and localize itself with respect to it. SLAM offers two advantages: the construction of the map itself, and a more accurate trajectory estimate, thanks to the solution of minimization problems involving the estimation of several poses and landmarks at the same time. Immediately after landing, one of the main tasks the rover operations center must perform is the accurate computation of the lander/rover position with respect to the inertial reference frame and the planet-fixed reference frame, such as the J2000 system and the Mars Body-Fixed (MBF) frame. For both scientific and engineering operations, accurate localization with respect to satellite images and three-dimensional models of the landing zone is fundamental. The first part of the thesis addresses the localization of a rover with respect to a geo-referenced, ortho-rectified satellite image and with respect to a digital elevation model (DEM) built from satellite imagery. A modified version of the Visual Position Estimator for Rover (VIPER) algorithm was analyzed: the algorithm finds the position and attitude of a rover with respect to a DEM by comparing the local skyline with skylines computed at candidate positions on the DEM. These analyses were carried out in collaboration with ALTEC S.p.A., with the aim of defining the operations that the Rover Operation Control Center (ROCC) will have to perform to localize the ExoMars 2020 rover. Once the initial localization has been performed, these methods can be reused to verify and correct the trajectory estimate.
The second part of the dissertation presents a stereo visual odometry method for rovers, together with an analysis of how the distribution of the triangulated landmarks affects the motion estimate; for this purpose, laboratory tests were performed while varying the distance of the scene. The implemented visual odometry algorithm is a 3D-to-3D method with outlier rejection via a RANdom SAmple Consensus (RANSAC) procedure, and the motion is estimated by minimizing the Euclidean distance between the two point clouds. The last part of this dissertation was developed in collaboration with the Jet Propulsion Laboratory (NASA) and presents a localization system for hopping/tumbling rovers for the exploration of comets and asteroids. Such innovative platforms require new localization approaches. Given the limited space, weight, and power available and the limited computational capabilities, the localization system was based on a monocular camera. Visual localization in the proximity of a comet also has peculiarities that make it harder: the large scale changes that occur as the platform moves, frequent occlusions of the field of view, sharp shadows that change with the asteroid's rotation period, and terrain whose visual appearance is homogeneous in the visible spectrum. A collaborative visual SLAM system between the tumbling/hopping rover and the "mother" spacecraft, which brought the rover to its release orbit, is proposed. A survey of recent open-source visual SLAM algorithms was carried out and, after careful analysis, ORB-SLAM2 was chosen and modified to suit the required application, introducing the possibility of saving the map built by the orbiter, which the rover then uses for its own localization.
The map built by the orbiter can also be fused with attitude measurements from other sensors on board the orbiter. The accuracy of the method was evaluated using an image sequence collected in a representative environment with an external reference system. Simulations of the asteroid-mapping phase and of the localization of the hopping/tumbling platform were performed, and, finally, ways to improve the performance of the method under changing illumination conditions were evaluated.
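The 3D-to-3D motion-estimation step described above, minimizing the Euclidean distance between two point clouds, has a closed-form least-squares solution via SVD (the Kabsch method). Below is a minimal numpy sketch on synthetic data, with the RANSAC outlier-rejection stage omitted for brevity:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    Classic SVD (Kabsch) solution to min Σ ||R p_i + t - q_i||².
    P, Q: (N, 3) arrays of corresponding points.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic point cloud moved by a known rotation and translation.
rng = np.random.default_rng(2)
P = rng.uniform(-1, 1, (30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true

R, t = rigid_align(P, Q)   # recovers (R_true, t_true)
```

In a visual odometry loop, P and Q are the triangulated landmarks at two successive instants, and RANSAC repeatedly runs this solver on minimal 3-point samples to reject false matches.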
Terzakis, George. "Visual odometry and mapping in natural environments for arbitrary camera motion models". Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/6686.
NASCIMENTO, MARCELO DE MATTOS. "USING DENSE 3D RECONSTRUCTION FOR VISUAL ODOMETRY BASED ON STRUCTURE FROM MOTION TECHNIQUES". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26102@1.
The subject of intense research in computer vision, dense 3D reconstruction reached an important landmark with the first methods running in real time with millimetric precision, using RGBD cameras and GPUs. However, these methods are not suitable for platforms with low computational resources. Taking low computational resources as a requirement, the goal of this work is to present a method of visual odometry using regular cameras, without a GPU. The proposed method is based on sparse Structure from Motion (SFM) techniques, using data provided by dense 3D reconstruction. Visual odometry is the process of estimating the position and orientation of an agent (a robot, for instance) based on images. This dissertation compares the proposed method with the odometry calculated by Kinect Fusion. The results of this research are applicable to augmented reality: the odometry can be used to define the position of a camera, and the dense 3D reconstruction can handle aspects such as occlusion between virtual and real objects.
Galfond, Marissa N. (Marissa Nicole). "Visual-inertial odometry with depth sensing using a multi-state constraint Kalman filter". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/97361.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-97).
The goal of visual inertial odometry (VIO) is to estimate a moving vehicle's trajectory using inertial measurements and observations, obtained by a camera, of naturally occurring point features. One existing VIO estimation algorithm for use with a monocular system, is the multi-state constraint Kalman filter (MSCKF), proposed by Mourikis and Li [34, 29]. The way the MSCKF uses feature measurements drastically improves its performance, in terms of consistency, observability, computational complexity and accuracy, compared to other VIO algorithms [29]. For this reason, the MSCKF is chosen as the basis for the estimation algorithm presented in this thesis. A VIO estimation algorithm for a system consisting of an IMU, a monocular camera and a depth sensor is presented in this thesis. The addition of the depth sensor to the monocular camera system produces three-dimensional feature locations rather than two-dimensional locations. Therefore, the MSCKF algorithm is extended to use the extra information. This is accomplished using a model proposed by Dryanovski et al. that estimates the 3D location and uncertainty of each feature observation by approximating it as a multivariate Gaussian distribution [11]. The extended MSCKF algorithm is presented and its performance is compared to the original MSCKF algorithm using real-world data obtained by flying a custom-built quadrotor in an indoor office environment.
by Marissa N. Galfond.
S.M.
Soliman, Abanob. "Visual Odometry Using Heterogeneous Cameras for Simultaneous Localization and Mapping for Autonomous Vehicles". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST119.
Pełny tekst źródła
This Ph.D. thesis addresses the challenges of sensor fusion and Simultaneous Localization And Mapping (SLAM) for autonomous systems, specifically focusing on Autonomous Ground Vehicles (AGVs) and Micro Aerial Vehicles (MAVs) navigating large-scale and dynamic environments. The thesis presents a range of innovative solutions to enhance the performance and reliability of SLAM systems through five methodological chapters. The introductory chapter establishes the research motivation, highlighting the challenges and limitations of visual odometry using heterogeneous cameras. It also outlines the thesis structure and extensively reviews relevant literature. The second chapter introduces IBISCape, a simulated benchmark for validating high-fidelity SLAM systems based on the CARLA simulator. The third chapter presents a novel optimization-based method for calibrating an RGB-D-IMU visual-inertial setup, validated through extensive experiments on real-world and simulated sequences. The fourth chapter proposes a linear optimal state estimation approach for MAVs to achieve high-accuracy localization with minimal system delay. The fifth chapter introduces the DH-PTAM system for robust parallel tracking and mapping in dynamic environments using stereo images and event streams. The sixth chapter explores new frontiers in dense SLAM using event cameras, presenting a novel end-to-end approach for a hybrid event and point cloud dense SLAM system. The seventh and final chapter summarizes the thesis's contributions and main findings, emphasizing the advances made in multi-modal heterogeneous sensor fusion for autonomous systems navigating large-scale and dynamic environments. Future work includes investigating the potential of integrating inertial navigation sensors and exploring additional deep-learning components for improving loop-closure robustness and accuracy.
Silva, Bruno Marques Ferreira da. "Odometria visual baseada em técnicas de structure from motion". Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15364.
Pełny tekst źródła
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Visual odometry is the process that estimates camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast with its initial purpose of being used in inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a single previously calibrated camera as positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information like a probabilistic model for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
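A core SFM building block used by pipelines like the one above is the triangulation of a 3D point from two views. The following midpoint-triangulation sketch is illustrative only, assuming known camera centers and ray directions:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point closest to two viewing rays,
    each given by a camera center c and a direction d."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, k): return tuple(x * k for x in a)
    # Solve for ray parameters s, t minimizing |(c1 + s*d1) - (c2 + t*d2)|.
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(c1, scale(d1, s))
    p2 = add(c2, scale(d2, t))
    return scale(add(p1, p2), 0.5)

# Two cameras one unit apart, both observing a point at (0.5, 0, 2).
print(triangulate_midpoint((0, 0, 0), (0.5, 0, 2), (1, 0, 0), (-0.5, 0, 2)))
```

Production SFM code would instead use a linear least-squares (DLT) or reprojection-error-minimizing triangulation, but the geometric idea is the same.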
CHEN, HONGYI. "GPS-oscillation-robust Localization and Visionaided Odometry Estimation". Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.
Pełny tekst źródła
GPS/IMU integrated systems are commonly used for vehicle navigation. The algorithm for this coupled system is normally based on a Kalman filter. One problem with such a system is that oscillating GPS measurements in urban environments can easily lead to localization divergence. In addition, the heading estimate can be sensitive to magnetic disturbances if it relies on an IMU with an integrated magnetometer. This thesis attempts to solve the localization problem caused by GPS oscillations and outages with the help of an adaptive extended Kalman filter (AEKF). For the heading estimate, stereo visual odometry (VO) is used to weaken the effect of magnetic disturbances through sensor fusion. A vision-aided AEKF-based algorithm is tested both under good GPS conditions and with oscillating GPS measurements and magnetic disturbances. In the cases considered, the algorithm is verified to outperform the conventional extended Kalman filter (CEKF) and the unscented Kalman filter (UKF) in position estimation by 53.74% and 40.09% respectively, and to reduce the heading estimation error.
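The adaptive ingredient of an AEKF can be illustrated in one dimension: when the GPS innovation is implausibly large relative to the predicted uncertainty, the measurement noise is inflated so the fix is down-weighted. This is a generic sketch of the idea, not the filter developed in the thesis:

```python
def adaptive_kalman_update(x, P, z, R_nominal, gate=3.0):
    """One scalar Kalman measurement update in which the measurement
    noise R is inflated whenever the innovation exceeds the gate, so an
    oscillating GPS fix pulls the state estimate less."""
    innovation = z - x
    R = R_nominal
    if innovation ** 2 > gate ** 2 * (P + R_nominal):
        # Inflate R so the normalized innovation falls back to the gate.
        R = innovation ** 2 / gate ** 2 - P
    K = P / (P + R)                 # Kalman gain
    x_new = x + K * innovation
    P_new = (1 - K) * P
    return x_new, P_new

# A wild GPS jump is largely ignored; a plausible one is fused normally.
print(adaptive_kalman_update(0.0, 1.0, 50.0, 1.0))
print(adaptive_kalman_update(0.0, 1.0, 1.0, 1.0))
```

Real AEKF variants typically adapt R (or Q) from a window of innovation statistics rather than a single gate test.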
Silva, Ricardo Luís da Mota. "Removable odometry unit for vehicles with Ackermann steering". Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13699.
Pełny tekst źródła
The main objective of this work is to develop an odometry solution for vehicles with Ackermann steering. The solution had to be portable, flexible and easy to mount. After a study of the state of the art and a survey of solutions, the chosen solution was based on visual odometry. The following steps of the work were to study the feasibility of using line-scan image sensors for visual odometry. The image sensor was used to compute the longitudinal velocity, and the orientation of motion was computed using two gyroscopes. To test the method, several experiments were made; the experiments took place indoors, under controlled conditions. The ability to measure velocity was tested on straight-line movements, diagonal movements, circular movements and movements with a changing distance from the ground. The data was processed with correlation algorithms and the results were documented. Based on the results, it is safe to conclude that odometry with line-scan sensors aided by inertial sensors has potential for real-world applicability.
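The correlation-based processing described above can be sketched as a search for the pixel shift that maximizes the cross-correlation between two consecutive line scans; the shift, the ground resolution and the frame interval then give a longitudinal velocity. The scans and camera parameters below are invented for illustration:

```python
def pixel_shift(scan_a, scan_b, max_shift=5):
    """Displacement (in pixels) between two 1-D line scans, found as the
    shift maximizing the mean cross-correlation over the overlap."""
    best_shift, best_score = 0, float("-inf")
    n = len(scan_a)
    for shift in range(-max_shift, max_shift + 1):
        pairs = [(scan_a[i], scan_b[i + shift])
                 for i in range(n) if 0 <= i + shift < n]
        score = sum(a * b for a, b in pairs) / len(pairs)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

scan_a = [0.0] * 20
scan_a[8:11] = [1.0, 2.0, 1.0]            # a distinctive intensity bump
scan_b = [0.0] * 3 + scan_a[:-3]          # same scene, 3 pixels later
shift = pixel_shift(scan_a, scan_b)
velocity = shift * 0.5e-3 / 0.001         # 0.5 mm/pixel, 1 ms between scans
print(shift, velocity)                    # 3 pixels -> 1.5 m/s
```

Real scans would need normalization against illumination changes (e.g. zero-mean normalized cross-correlation), which this sketch omits.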
Voisin-Denoual, Maxime. "Monocular Visual Odometry for Underwater Navigation : An examination of the performance of two methods". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229907.
Pełny tekst źródła
This thesis examines two methods for monocular visual odometry, FAST + KLT and ORB-SLAM2, in the particular case of underwater environments. This is done by implementing and testing the methods on various underwater datasets. The results for FAST + KLT give no support for the method being effective in underwater environments. The results for ORB-SLAM2, on the other hand, indicate that this method can perform well if it is tuned correctly and given a good camera calibration. At the same time, challenges remain related to, for example, environments with sandy bottoms and scale estimation in monocular setups. The conclusion is therefore that ORB-SLAM2 is the more promising of the two tested methods for monocular visual odometry underwater.
Lee, Hong Yun. "Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.
Pełny tekst źródła
Voges, Raphael [Verfasser]. "Bounded-error visual-LiDAR odometry on mobile robots under consideration of spatiotemporal uncertainties / Raphael Voges". Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1214367119/34.
Pełny tekst źródła
Greyvensteyn, Ian. "Evaluating the effect of illumination on the performance of visual odometry in underground mining environments". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204294/1/Ian%20Greyvensteyn%20Thesis.pdf.
Pełny tekst źródła
Persson, Mikael. "Online Monocular SLAM : Rittums". Thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112779.
Pełny tekst źródła
Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider". Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1190818558/34.
Pełny tekst źródła
Schmid, Stephan [Verfasser], i Dieter [Akademischer Betreuer] Fritsch. "Semi-dense filter-based visual odometry for automotive augmented reality applications / Stephan Schmid ; Betreuer: Dieter Fritsch". Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2019. http://d-nb.info/1194373070/34.
Pełny tekst źródła
Schneider, Johannes [Verfasser]. "Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System / Johannes Schneider". Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1217404635/34.
Pełny tekst źródła
Ay, Emre. "Ego-Motion Estimation of Drones". Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.
Pełny tekst źródła
To remove the need for external infrastructure such as GPS, which moreover is unavailable in many environments, it is desirable to estimate a drone's motion with onboard sensors. Visual positioning systems have been studied for a long time and the literature on the subject is abundant. The aim of this project is to investigate the currently available methods and to design a vision-based positioning system for drones. The resulting system is evaluated and shown to give acceptable position estimates.
Santos, Cristiano Flores dos. "Um framework para avaliação de mapeamento tridimensional Utilizando técnicas de estereoscopia e odometria visual". Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/12038.
Pełny tekst źródła
Three-dimensional mapping of environments has been studied intensively over the last decade. Among the benefits of this research topic are the addition of autonomy to automobiles or even drones. A three-dimensional representation also allows a given scene to be visualized interactively and in greater detail. However, up to the writing of this work, no framework had been found that presents in detail the implementation of algorithms for 3D mapping of outdoor environments at close to real-time processing rates. In view of this, this work developed a framework covering the main stages of three-dimensional reconstruction. Stereoscopy was chosen as the technique for acquiring the depth information of the scene. In addition, this work evaluated four algorithms for generating the depth map, achieving a rate of 9 frames per second.
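The depth maps mentioned above come from the standard stereo relation between disparity and depth. A minimal sketch, with made-up camera parameters:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d. A larger disparity
    means a closer point; zero disparity means a point at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Invented rig: f = 700 px, baseline = 0.12 m.
print(disparity_to_depth(21, 700, 0.12))   # a 21-pixel disparity -> 4.0 m
```

Applying this per pixel to a dense disparity map yields the depth map that the framework's reconstruction stages consume.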
Szente, Michal. "Vizuální odometrie pro robotické vozidlo Car4". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-317205.
Pełny tekst źródła
Wisely, Babu Benzun. "Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm". Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/503.
Pełny tekst źródła
Ligocki, Adam. "Metody současné sebelokalizace a mapování pro hloubkové kamery". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316270.
Pełny tekst źródła
Coppejans, Hugo Herman Godelieve. "RGB-D SLAM : an implementation framework based on the joint evaluation of spatial velocities". Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64524.
Pełny tekst źródła
Dissertation (MEng)--University of Pretoria, 2017.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
Santos, Vinícius Araújo. "SiameseVO-Depth: odometria visual através de redes neurais convolucionais siamesas". Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9083.
Pełny tekst źródła
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Visual odometry is an important process in image-based robot navigation. The standard methods in this field rely on good feature matching between frames, and feature detection in images is a well-addressed problem within computer vision. Such techniques are subject to illumination problems, noise and poor feature localization accuracy. Thus, 3D information about a scene may mitigate the uncertainty of the features in images. Deep learning techniques show great results when dealing with common difficulties of VO, such as low illumination conditions and bad feature selection. While visual odometry and deep learning have been connected previously, no techniques applying Siamese convolutional networks to the depth information given by disparity maps were found in the research conducted for this work. This work aims to fill this gap by applying deep learning to estimate ego-motion from disparity maps in a Siamese architecture. The SiameseVO-Depth architecture is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results reveal that the chosen methodology succeeds in estimating visual odometry, although it does not outperform the state-of-the-art techniques. The proposed approach involves fewer steps than standard VO techniques, since it is an end-to-end solution, and demonstrates a new application of deep learning to visual odometry.
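The weight sharing that defines a Siamese architecture can be shown with a toy one-layer feature extractor: both inputs pass through the same weights, and a distance is computed between the resulting embeddings. This is an illustrative sketch only, not the SiameseVO-Depth network:

```python
def embed(patch, weights):
    """A toy one-layer ReLU feature extractor. In a Siamese network the
    SAME weights are applied to both inputs, so distances are comparable."""
    return [max(0.0, sum(w * x for w, x in zip(row, patch))) for row in weights]

def siamese_distance(patch_a, patch_b, weights):
    """L1 distance between the shared-weight embeddings of two inputs."""
    ea, eb = embed(patch_a, weights), embed(patch_b, weights)
    return sum(abs(a - b) for a, b in zip(ea, eb))

W = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]   # shared weights (toy values)
print(siamese_distance([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], W))      # 0.0
print(siamese_distance([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], W) > 0)  # True
```

A trained VO network would use convolutional branches over pairs of disparity maps and regress a pose rather than a scalar distance, but the two-branch weight sharing is the same principle.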
DIN, AHMAD. "Inertial and Vision based Navigation and Perception for small UAVs". Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506286.
Pełny tekst źródła
FARRONATO, MARCO. "TECHNOLOGICAL BREAKTHROUGH TOWARDS THE USE OF A NOVEL VISUAL-INERTIAL ODOMETRY SYSTEM AS AN AID FOR THE DIGITALLY-GUIDED INTERVENTION". Doctoral thesis, Università degli Studi di Milano, 2022. https://hdl.handle.net/2434/948169.
Pełny tekst źródła
Ringdahl, Viktor. "Stereo Camera Pose Estimation to Enable Loop Detection". Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154392.
Pełny tekst źródła
Jansson, Sebastian. "On Vergence Calibration of a Stereo Camera System". Thesis, Linköpings universitet, Institutionen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84770.
Pełny tekst źródła
Li, Ding. "ESA ExoMars Rover PanCam System Geometric Modeling and Evaluation". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420788556.
Pełny tekst źródła
Bak, Adrien. "Cooperation stereo mouvement pour la detection des objets dynamiques". Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112208/document.
Pełny tekst źródła
Many embedded robotic applications could benefit from an explicit detection of mobile objects. To this day, most approaches rely on classification or on some structural scene analysis (for instance, V-Disparity). During the last few years, we have witnessed a growing interest in collaborative methods that actively combine structural analysis and motion analysis; these two processes are, indeed, closely related. In this context, we propose, through this study, two novel approaches that address this issue. While the first one uses information from stereo and motion, the second one focuses on monocular systems and allows us to retrieve partial information. The first presented approach consists of a novel visual odometry system. We have shown that, even though the wide majority of authors tackle the visual odometry problem as non-linear, it can be shown to be purely linear. We have also shown that our approach achieves performance as good as, or even better than, that achieved by high-end IMUs. Given this visual odometry system, we then define a procedure for detecting mobile objects. This procedure relies on a compensation of the ego-motion and a measure of the residual motion. We then reflect on the causes of limitation and the possible sources of improvement of this system. It appeared that the main parameters of the vision system (baseline, focal length) have a major impact on the performance of our detector. To the best of our knowledge, this impact had never been discussed prior to our study. However, we think that our conclusions could be used as a set of recommendations, useful for every designer of intelligent vision systems. The second part of this work focuses on monocular systems, and more specifically on the concept of C-Velocity.
Where V-Disparity defines a transform of the disparity map that allows an easy detection of specific planes, C-Velocity defines a transform of the optical flow field, using the position of the Focus of Expansion (FoE), that likewise allows an easy detection of specific planes. Through this work, we present a modification of the C-Velocity concept: instead of using a priori knowledge of the ego-motion (the position of the FoE) to determine the scene structure, we use prior knowledge of the scene structure to localize the FoE, and thus the translational ego-motion. The first results of this work are promising and allow us to define several directions for future work.
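The ego-motion compensation idea in the first approach can be sketched as follows: predict the optical flow each static point would have under the estimated camera translation, and flag points whose observed flow leaves a large residual. The flow model here is deliberately simplified (pure forward translation, pinhole camera) and the numbers are invented, so this is an illustration of the principle rather than the thesis's detector:

```python
def residual_flow(points, observed_flow, tz, threshold=1.0):
    """Flag points whose observed optical flow deviates from the flow
    predicted for a static point under a pure forward translation tz:
    for a pixel (u, v) at depth Z, the predicted flow is (u, v) * tz / Z."""
    moving = []
    for (u, v, depth), (fu, fv) in zip(points, observed_flow):
        pu, pv = u * tz / depth, v * tz / depth   # predicted static flow
        residual = ((fu - pu) ** 2 + (fv - pv) ** 2) ** 0.5
        moving.append(residual > threshold)       # threshold in pixels
    return moving

# Two points consistent with the ego-motion, one contradicting it.
pts = [(100, 0, 10.0), (0, 50, 5.0), (-80, -20, 8.0)]
flow = [(1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(residual_flow(pts, flow, tz=0.1))   # [False, False, True]
```

A full system would compensate all six degrees of freedom of the estimated ego-motion and choose the residual threshold from the expected flow noise.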