Contents
A selection of scholarly literature on the topic "Odometry estimation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, reports, and other scholarly sources on the topic "Odometry estimation".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic entry for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read its online abstract, whenever these are available in the metadata.
Journal articles on the topic "Odometry estimation"
Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter". Computer Engineering and Applications Journal 7, no. 2 (July 12, 2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.
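Several of the entries above, starting with the leader-follower paper, build on the extended Kalman filter. As a rough illustration of the idea (a generic sketch with made-up noise parameters, not the algorithm of any cited work), the following shows one EKF predict/update cycle for a planar robot pose [x, y, θ] driven by wheel odometry and corrected by a direct position measurement:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict step: propagate pose [x, y, theta] with unicycle odometry
    (linear velocity v, angular velocity w) over time step dt."""
    theta = x[2]
    x_new = x + np.array([v * np.cos(theta) * dt,
                          v * np.sin(theta) * dt,
                          w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Update step with a direct position measurement z = [x, y]."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Driving straight for one second from the origin predicts the pose [1, 0, 0]; a subsequent position fix then pulls the estimate part of the way toward the measurement, weighted by the covariances.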
Li, Q., C. Wang, S. Chen, X. Li, C. Wen, M. Cheng, and J. Li. "DEEP LIDAR ODOMETRY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.
Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation". Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (September 24, 2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.
Wu, Qin Fan, Qing Li, and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments". Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.
Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots". Robotica 32, no. 1 (August 6, 2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.
Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization". Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.
Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images". Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.
Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors". International Journal of Advanced Robotic Systems 8, no. 5 (January 1, 2011): 68. http://dx.doi.org/10.5772/50906.
Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges". Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (September 22, 2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.
Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight". Sensors 20, no. 8 (April 14, 2020): 2209. http://dx.doi.org/10.3390/s20082209.
Der volle Inhalt der QuelleDissertationen zum Thema "Odometry estimation"
Masson, Clément. „Direction estimation using visual odometry“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.
This master's thesis addresses the problem of measuring the directions of objects from a fixed observation point. A new method is proposed, based on a single rotating camera and requiring only the directions of two (or more) landmarks. In a first phase, multi-view geometry is used to estimate the camera rotations and the directions of key elements from a set of overlapping images. In a second phase, the direction of any object can then be estimated by resectioning the camera associated with an image showing that object. A detailed description of the algorithmic chain is given, together with test results on both synthetic data and real images taken with an infrared camera.
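The core geometric step in such a pipeline, once the camera rotation is known, is back-projecting a pixel into a world-frame direction. A minimal sketch (with hypothetical intrinsics K and camera-to-world rotation R_wc; this is not the thesis's full algorithm, which also estimates the rotations themselves):

```python
import numpy as np

def pixel_to_world_direction(u, v, K, R_wc):
    """Back-project pixel (u, v) to a unit direction in the world frame.
    K is the 3x3 camera intrinsic matrix, R_wc the camera-to-world rotation."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R_wc @ ray_cam                           # rotate into world frame
    return ray_world / np.linalg.norm(ray_world)         # normalize to a direction
```

With an identity rotation, the pixel at the principal point maps to the camera's optical axis, i.e. the direction [0, 0, 1].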
Holmqvist, Niclas. „HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM“. Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.
CHEN, HONGYI. "GPS-oscillation-robust Localization and Vision-aided Odometry Estimation". Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.
Der volle Inhalt der QuelleGPS/IMU integrerade system används ofta för navigering av fordon. Algoritmen för detta kopplade system är normalt baserat på ett Kalmanfilter. Ett problem med systemet är att oscillerade GPS mätningar i stadsmiljöer enkelt kan leda till en lokaliseringsdivergens. Dessutom kan riktningsuppskattningen vara känslig för magnetiska störningar om den är beroende av en IMU med integrerad magnetometer. Rapporten försöker lösa lokaliseringsproblemet som skapas av GPS-oscillationer och avbrott med hjälp av ett adaptivt förlängt Kalmanfilter (AEKF). När det gäller riktningsuppskattningen används stereovisuell odometri (VO) för att försvaga effekten av magnetiska störningar genom sensorfusion. En Visionsstödd AEKF-baserad algoritm testas i fall med både goda GPS omständigheter och med oscillationer i GPS mätningar med magnetiska störningar. Under de fallen som är aktuella är algoritmen verifierad för att överträffa det konventionella utökade Kalmanfilteret (CEKF) och ”Unscented Kalman filter” (UKF) när det kommer till positionsuppskattning med 53,74% respektive 40,09% samt minska fel i riktningsuppskattningen.
Rao, Anantha N. „Learning-based Visual Odometry - A Transformer Approach“. University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.
Der volle Inhalt der QuelleAwang, Salleh Dayang Nur Salmi Dharmiza. „Study of vehicle localization optimization with visual odometry trajectory tracking“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and it is susceptible to positioning errors due to factors such as restrictive environments that weaken the signal. This problem can be addressed by integrating the GPS data with additional information from other sensors. Meanwhile, vehicles today are commonly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a low-cost data-fusion solution for improved localization. Given the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess the system's robustness, the method was validated on KITTI datasets under different common levels of GPS noise. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy improved significantly, especially the longitudinal error with the curve-matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task, lane-change detection, to assist the routing service with lane-level directions in navigation.
The lane-change detection was conducted with CUSUM and curve-fitting techniques, which resulted in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. Given the results obtained with the proposed low-cost data fusion for localization, we see a bright prospect for utilizing the VO trajectory with information from OSM to improve performance. Besides providing the VO trajectory, the camera mounted on the vehicle can also be used for other image-processing applications that complement the system. This research will continue to develop, with future works outlined in the last chapter of this thesis.
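CUSUM, mentioned in the abstract above, is a standard change-point detector that accumulates small deviations until their sum crosses a threshold. A generic two-sided sketch (with made-up drift and threshold values, not the thesis's tuned detector), as one might apply to the lateral-offset signal of a lane change:

```python
def cusum(samples, drift=0.05, threshold=1.0):
    """Two-sided CUSUM change detector on a zero-mean signal.
    Returns the index of the first sample at which either cumulative
    sum exceeds the threshold, or None if no change is declared."""
    g_pos = g_neg = 0.0
    for i, s in enumerate(samples):
        g_pos = max(0.0, g_pos + s - drift)   # accumulates positive drift
        g_neg = max(0.0, g_neg - s - drift)   # accumulates negative drift
        if g_pos > threshold or g_neg > threshold:
            return i                          # change declared here
    return None                               # no change detected
```

A signal that sits at zero and then steps to a sustained offset is flagged a few samples after the step, while pure noise below the drift term never triggers an alarm.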
Ay, Emre. „Ego-Motion Estimation of Drones“. Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.
To remove the need for external infrastructure such as GPS, which moreover is unavailable in many environments, it is desirable to estimate a drone's motion with on-board sensors. Visual positioning systems have been studied for a long time and the literature in the area is abundant. The aim of this project is to investigate the currently available methods and design a vision-based positioning system for drones. The resulting system is evaluated and shown to give acceptable position estimates.
Lee, Hong Yun. „Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty“. The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.
Ringdahl, Viktor. "Stereo Camera Pose Estimation to Enable Loop Detection". Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154392.
Ready, Bryce Benson. "Filtering Techniques for Pose Estimation with Applications to Unmanned Air Vehicles". BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3490.
Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems". The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.
Der volle Inhalt der QuelleBuchteile zum Thema "Odometry estimation"
Santamaria-Navarro, A., J. Solà, and J. Andrade-Cetto. "Odometry Estimation for Aerial Manipulators". In Springer Tracts in Advanced Robotics, 219–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12945-3_15.
Poddar, Shashi, Rahul Kottath, and Vinod Karar. "Motion Estimation Made Easy: Evolution and Trends in Visual Odometry". In Recent Advances in Computer Vision, 305–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03000-1_13.
Clement, Lee, Valentin Peretroukhin, and Jonathan Kelly. "Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation". In Springer Proceedings in Advanced Robotics, 409–19. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50115-4_36.
Guerrero, Pablo, and Javier Ruiz-del-Solar. "Improving Robot Self-localization Using Landmarks' Poses Tracking and Odometry Error Estimation". In RoboCup 2007: Robot Soccer World Cup XI, 148–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68847-1_13.
Galarza, Juan, Esteban Pérez, Esteban Serrano, Andrés Tapia, and Wilbert G. Aguilar. "Pose Estimation Based on Monocular Visual Odometry and Lane Detection for Intelligent Vehicles". In Lecture Notes in Computer Science, 562–66. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95282-6_40.
Zhang, Xue-bo, Cong-yuan Wang, Yong-chun Fang, and Ke-xin Xing. "An Extended Kalman Filter-Based Robot Pose Estimation Approach with Vision and Odometry". In Wearable Sensors and Robots, 539–52. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2404-7_41.
Nguyen, Huu Hung, Quang Thi Nguyen, Cong Manh Tran, and Dong-Seong Kim. "Adaptive Essential Matrix Based Stereo Visual Odometry with Joint Forward-Backward Translation Estimation". In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 127–37. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63083-6_10.
Musleh, Basam, David Martin, Arturo de la Escalera, Domingo Miguel Guinea, and Maria Carmen Garcia-Alegre. "Estimation and Prediction of the Vehicle's Motion Based on Visual Odometry and Kalman Filter". In Advanced Concepts for Intelligent Vision Systems, 491–502. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33140-4_43.
Alonso Martínez-García, Edgar, and Luz Abril Torres-Méndez. "4WD Robot Posture Estimation by Radial Multi-View Visual Odometry". In Applications of Mobile Robots. IntechOpen, 2019. http://dx.doi.org/10.5772/intechopen.79130.
Harr, M., and C. Schäfer. "Robust Odometry Estimation for Automated Driving and Map Data Collection". In AUTOREG 2017, 91–102. VDI Verlag, 2017. http://dx.doi.org/10.51202/9783181022924-91.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Odometry estimation"
Pereira, Fabio Irigon, Gustavo Ilha, Joel Luft, Marcelo Negreiros, and Altamiro Susin. "Monocular Visual Odometry with Cyclic Estimation". In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2017. http://dx.doi.org/10.1109/sibgrapi.2017.7.
Song, Xiaojing, Lakmal D. Seneviratne, Kaspar Althoefer, Zibin Song, and Yahya H. Zweiri. "Visual Odometry for Velocity Estimation of UGVs". In 2007 International Conference on Mechatronics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/icma.2007.4303790.
Fanani, Nolang, Alina Sturck, Marc Barnada, and Rudolf Mester. "Multimodal scale estimation for monocular visual odometry". In 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017. http://dx.doi.org/10.1109/ivs.2017.7995955.
Mou, Wei, Han Wang, and Gerald Seet. "Efficient visual odometry estimation using stereo camera". In 2014 11th IEEE International Conference on Control & Automation (ICCA). IEEE, 2014. http://dx.doi.org/10.1109/icca.2014.6871128.
So, Edmond Wai Yan, Tetsuo Yoshimitsu, and Takashi Kubota. "Hopping Odometry: Motion Estimation with Selective Vision". In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354065.
Kerl, Christian, Jurgen Sturm, and Daniel Cremers. "Robust odometry estimation for RGB-D cameras". In 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013. http://dx.doi.org/10.1109/icra.2013.6631104.
Nowruzi, Farzan, Dhanvin Kolhatkar, Prince Kapoor, and Robert Laganiere. "Point Cloud based Hierarchical Deep Odometry Estimation". In 7th International Conference on Vehicle Technology and Intelligent Transport Systems. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010442901120121.
Lu, Tongwei, Shihui Ai, Yudian Xiong, and Yongyuan Jiang. "Monocular visual odometry-based 3D-2D motion estimation". In Automatic Target Recognition and Navigation, edited by Jayaram K. Udupa, Hanyu Hong, and Jianguo Liu. SPIE, 2018. http://dx.doi.org/10.1117/12.2286251.
Yuan, Rong, Hongyi Fan, and Benjamin Kimia. "Dissecting scale from pose estimation in visual odometry". In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.170.
Cho, Hae Min, HyungGi Jo, Seongwon Lee, and Euntai Kim. "Odometry Estimation via CNN using Sparse LiDAR Data". In 2019 16th International Conference on Ubiquitous Robots (UR). IEEE, 2019. http://dx.doi.org/10.1109/urai.2019.8768571.