Contents
A selection of scholarly literature on the topic "Sensors fusion for localisation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Sensors fusion for localisation".
Next to every work in the list of sources there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are provided in the metadata.
Journal articles on the topic "Sensors fusion for localisation"
Ashokaraj, Immanuel, Antonios Tsourdos, Peter Silson, and Brian White. "Sensor Based Robot Localisation and Navigation: Using Interval Analysis and Nonlinear Kalman Filters." Transactions of the Canadian Society for Mechanical Engineering 29, no. 2 (June 2005): 211–27. http://dx.doi.org/10.1139/tcsme-2005-0014.
Meng, Lijun, Zhengang Guo, and Chenglong Ma. "Research on multiple damage localisation based on fusion of the Lamb wave ellipse algorithm and RAPID algorithm." Insight - Non-Destructive Testing and Condition Monitoring 66, no. 1 (January 1, 2024): 34–40. http://dx.doi.org/10.1784/insi.2024.66.1.34.
Nikitenko, Agris, Aleksis Liekna, Martins Ekmanis, Guntis Kulikovskis, and Ilze Andersone. "Single Robot Localisation Approach for Indoor Robotic Systems through Integration of Odometry and Artificial Landmarks." Applied Computer Systems 14, no. 1 (June 1, 2013): 50–58. http://dx.doi.org/10.2478/acss-2013-0006.
Tibebu, Haileleol, Varuna De-Silva, Corentin Artaud, Rafael Pina, and Xiyu Shi. "Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation." Sensors 22, no. 20 (October 20, 2022): 8021. http://dx.doi.org/10.3390/s22208021.
Moretti, Michele, Federico Bianchi, and Nicola Senin. "Towards the development of a smart fused filament fabrication system using multi-sensor data fusion for in-process monitoring." Rapid Prototyping Journal 26, no. 7 (June 26, 2020): 1249–61. http://dx.doi.org/10.1108/rpj-06-2019-0167.
Donati, Cesare, Martina Mammarella, Lorenzo Comba, Alessandro Biglia, Paolo Gay, and Fabrizio Dabbene. "3D Distance Filter for the Autonomous Navigation of UAVs in Agricultural Scenarios." Remote Sensing 14, no. 6 (March 11, 2022): 1374. http://dx.doi.org/10.3390/rs14061374.
Kozłowski, Michał, Raúl Santos-Rodríguez, and Robert Piechocki. "Sensor Modalities and Fusion for Robust Indoor Localisation." ICST Transactions on Ambient Systems 6, no. 18 (December 12, 2019): 162670. http://dx.doi.org/10.4108/eai.12-12-2019.162670.
Merfels, Christian, and Cyrill Stachniss. "Sensor Fusion for Self-Localisation of Automated Vehicles." PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science 85, no. 2 (March 7, 2017): 113–26. http://dx.doi.org/10.1007/s41064-017-0008-1.
Ciuffreda, Ilaria, Sara Casaccia, and Gian Marco Revel. "A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments." Sensors 23, no. 15 (August 5, 2023): 6963. http://dx.doi.org/10.3390/s23156963.
Neuland, Renata, Mathias Mantelli, Bernardo Hummes, Luc Jaulin, Renan Maffei, Edson Prestes, and Mariana Kolberg. "Robust Hybrid Interval-Probabilistic Approach for the Kidnapped Robot Problem." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, no. 02 (April 2021): 313–31. http://dx.doi.org/10.1142/s0218488521500141.
Dissertations on the topic "Sensors fusion for localisation"
Millikin, R. L. "Sensor fusion for the localisation of birds in flight." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ65871.pdf.
Welte, Anthony. "Spatio-temporal data fusion for intelligent vehicle localization." Thesis, Compiègne, 2020. http://bibliotheque.utc.fr/EXPLOITATION/doc/IFD/2020COMP2572.
Der volle Inhalt der QuelleLocalization is an essential basic capability for vehicles to be able to navigate autonomously on the road. This can be achieved through already available sensors and new technologies (Iidars, smart cameras). These sensors combined with highly accurate maps result in greater accuracy. In this work, the benefits of storing and reusing information in memory (in data buffers) are explored. Localization systems need to perform a high-frequency estimation, map matching, calibration and error detection. A framework composed of several processing layers is proposed and studied. A main filtering layer estimates the vehicle pose while other layers address the more complex problems. High-frequency state estimation relies on proprioceptive measurements combined with GNSS observations. Calibration is essential to obtain an accurate pose. By keeping state estimates and observations in a buffer, the observation models of these sensors can be calibrated. This is achieved using smoothed estimates in place of a ground truth. Lidars and smart cameras provide measurements that can be used for localization but raise matching issues with map features. In this work, the matching problem is addressed on a spatio-temporal window, resulting in a more detailed pictur of the environment. The state buffer is adjusted using the observations and all possible matches. Although using mapped features for localization enables to reach greater accuracy, this is only true if the map can be trusted. An approach using the post smoothing residuals has been developed to detect changes and either mitigate or reject the affected features
Lilja, Robin. "A Localisation and Navigation System for an Autonomous Wheel Loader." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12157.
Matsumoto, Takeshi. "Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots." Flinders University, Computer Science, Engineering and Mathematics, 2010. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20100302.131127.
Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors." Electronic Thesis or Dissertation, université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.
Der volle Inhalt der QuelleRather than generating images constantly and synchronously, neuromorphic vision sensors -also known as event-based cameras- permit each pixel to provide information independently and asynchronously whenever brightness change is detected. Consequently, neuromorphic vision sensors do not encounter the problems of conventional frame-based cameras like image artifacts and motion blur. Furthermore, they can provide lossless data compression, higher temporal resolution and higher dynamic range. Hence, event-based cameras conveniently replace frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness constancy conditions, we discuss the availability of building a visual odometry system based on optical flow estimation. We develop our approach based on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking allows us to gain more time for computations and provides a better representation of the environment than feature points. In this thesis, we do not only show an approach for event-based visual-inertial odometry but also event-based algorithms that can be used as stand-alone algorithms or integrated into other approaches if needed
Salehi, Achkan. „Localisation précise d'un véhicule par couplage vision/capteurs embarqués/systèmes d'informations géographiques“. Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC064/document.
Der volle Inhalt der QuelleThe fusion between sensors and databases whose errors are independant is the most re-liable and therefore most widespread solution to the localization problem. Current autonomousand semi-autonomous vehicles, as well as augmented reality applications targeting industrialcontexts exploit large sensor and database graphs that are difficult and expensive to synchro-nize and calibrate. Thus, the democratization of these technologies requires the exploration ofthe possiblity of exploiting low-cost and easily accessible sensors and databases. These infor-mation sources are naturally tainted by higher uncertainty levels, and many obstacles to theireffective and efficient practical usage persist. Moreover, the recent but dazzling successes ofdeep neural networks in various tasks seem to indicate that they could be a viable and low-costalternative to some components of current SLAM systems.In this thesis, we focused on large-scale localization of a vehicle in a georeferenced co-ordinate frame from a low-cost system, which is based on the fusion between a monocularvideo stream, 3d non-textured but georeferenced building models, terrain elevation models anddata either from a low-cost GPS or from vehicle odometry. Our work targets the resolutionof two problems. The first one is related to the fusion via barrier term optimization of VS-LAM and positioning measurements provided by a low-cost GPS. This method is, to the bestof our knowledge, the most robust against GPS uncertainties, but it is more demanding in termsof computational resources. We propose an algorithmic optimization of that approach basedon the definition of a novel barrier term. The second problem is the data association problembetween the primitives that represent the geometry of the scene (e.g. 3d points) and the 3d buil-ding models. Previous works in that area use simple geometric criteria and are therefore verysensitive to occlusions in urban environments. We exploit deep convolutional neural networksin order to identify and associate elements from the map that correspond to 3d building mo-del façades. Although our contributions are for the most part independant from the underlyingSLAM system, we based our experiments on constrained key-frame based bundle adjustment.The solutions that we propose are evaluated on synthetic sequences as well as on real urbandatasets. These experiments show important performance gains for VSLAM/GPS fusion, andconsiderable improvements in the robustness of building constraints to occlusions
Héry, Elwan. „Localisation coopérative de véhicules autonomes communicants“. Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2516.
Der volle Inhalt der QuelleTo be able to navigate autonomously, a vehicle must be accurately localized relatively to all obstacles, such as roadside for lane keeping and vehicles and pedestrians to avoid causing accidents. This PhD thesis deals with the interest of cooperation to improve the localization of cooperative vehicles that exchange information. Autonomous navigation on the road is often based on coordinates provided in a Cartesian frame. In order to better represent the pose of a vehicle with respect to the lane in which it travels, we study curvilinear coordinates with respect to a path stored in a map. These coordinates generalize the curvilinear abscissa by adding a signed lateral deviation from the center of the lane and an orientation relative to the center of the lane taking into account the direction of travel. These coordinates are studied with different track models and using different projections to make the map-matching. A first cooperative localization approach is based on these coordinates. The lateral deviation and the orientation relative to the lane can be known precisely from a perception of the lane borders, but for autonomous driving with other vehicles, it is important to maintain a good longitudinal accuracy. A one-dimensional data fusion method makes it possible to show the interest of the cooperative localization in this simplified case where the lateral deviation, the curvilinear orientation and the relative positioning between two vehicles are accurately known. This case study shows that, in some cases, lateral accuracy can be propagated to other vehicles to improve their longitudinal accuracy. The correlation issues of the errors are taken into account with a covariance intersection filter. An ICP (Iterative Closest Point) minimization algorithm is then used to determine the relative pose between the vehicles from LiDAR points and a 2D polygonal model representing the shape of the vehicle. Several correspondences of the LiDAR points with the model and different minimization approaches are compared. The propagation of absolute vehicle pose using relative poses with their uncertainties is done through non-linear equations that can have a strong impact on consistency. The different dynamic elements surrounding the ego-vehicle are estimated in a Local Dynamic Map (LDM) to enhance the static high definition map describing the center of the lane and its border. In our case, the agents are only communicating vehicles. The LDM is composed of the state of each vehicle. The states are merged using an asynchronous algorithm, fusing available data at variable times. The algorithm is decentralized, each vehicle computing its own LDM and sharing it. As the position errors of the GNSS receivers are biased, a marking detection is introduced to obtain the lateral deviation from the center of the lane in order to estimate these biases. LiDAR observations with the ICP method allow to enrich the fusion with the constraints between the vehicles. Experimental results of this fusion show that the vehicles are more accurately localized with respect to each other while maintaining consistent poses
Jacobson, Adam. „Bio-inspired multi-sensor fusion and calibration for robot place learning and recognition“. Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/116179/1/Adam%20Jacobson%20Thesis.pdf.
Der volle Inhalt der QuelleEricsson, John-Eric, und Daniel Eriksson. „Indoor Positioning and Localisation System with Sensor Fusion : AN IMPLEMENTATION ON AN INDOOR AUTONOMOUS ROBOT AT ÅF“. Thesis, KTH, Maskinkonstruktion (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168841.
Der volle Inhalt der QuelleExamensjobbet presenterar riktlinjer för hur sensorer och algoritmer för inomhuspositionering och lokaliseringssystem med sensorfusion bör väljas. Riktlinjerna är baserade på en omfattande teori och state of the art undersökning. Olika scenarion presenteras för att ge exempel på metoder för att välja sensorer och algoritmer för applikationer. Självklart finns det inga kombinationer som är rätt eller fel, men vissa faktorer är bra att komma ihåg när ett system designas. För att ge exempel på de föreslagna riktlinjerna har ett “Simultaneous Localisation and Mapping” (SLAM) system samt ett Inomhus Positioneringssystem (IPS) designats och implementerats på en inbyggd robotplattform. Det implementerade SLAM systemet baserades på en FastSLAM2algoritm med ultraljudssensorer och det implementerade IPS baserades på en Wifi RSS profileringsmetod som använder en Weibullfördelning. Metoderna, sensorerna och infrastrukturenhar valts utifrån krav som framställts från önskningar av intressenten samt utifrån kunskap från teori och state of the art undersökningen. En kombination av SLAM och IPS har föreslagits och valts att kallas WiFi SLAM för att reducera osäkerheter från de båda metoderna. Tyvärr har ingen kombination implementerats och testats på grund av oväntade problem med plattformen. Systemen simulerades individuellt före implementationen på den inbyggda plattformen. Resultat från dessa simuleringar tydde på att kraven skulle kunna uppfyllas samt gav en indikation av den minsta “set-upen” som behövdes för implementering. Båda de implementerade systemen visade sig ha de förväntade noggrannheterna under testning och med mer tid kunde bättre kalibrering ha skett, vilket förmodligen skulle resulterat i bättre resultat. Från resultaten kunde slutsatsen dras att en kombinerad WiFi SLAM lösning skulle förbättrat resultatet i en större testyta än den som användes. IPS skulle ha ökat sin precision medan SLAM skulle ha ökat sin robusthet. Examensjobbet har visat att det inte finns något exakt sätt att hitta en perfekt sensor och metodlösning. Viktigast är dock viktningen mellan tid, kostnad och kvalitet. Andra viktigafaktorer är att bestämma miljön systemet skall operera i och om systemet är säkerhetskritiskt. Det visade sig även att fusionerad sensordata kommer överträffa resultatet från endast en sensor och att det inte finns någon maxgräns för antalet fusionerade sensorer. Det kräver dock att sensorfusionsalgoritmen är väl kalibrerad, annars kan det motsatta inträffa.
Ladhari, Maroua. „Architecture générique de fusion par approche Top-Down : application à la localisation d’un robot mobile“. Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC052.
Der volle Inhalt der QuelleThe issue that will be addressed in this thesis is the localization of a mobile robot. Equipped with low- cost sensors, the robot aims to exploit the maximum possible amount of information to meet an objective set beforehand. A data fusion problem will be treated in a way that at each situation, the robot will select which information to use to locate itself in a continuous way. The data we will process will be of different types.In our work, two properties of localization are desired: accuracy and confidence. In order to be controlled, the robot must know its position in a precise and reliable way. Indeed, accuracy refers to the degree of uncertainty related to the estimated position. It is returned by a fusion filter. If, in addition, the degree of certainty of being in this uncertainty zone is important, we will have a good confidence contribution and the estimate will be considered as reliable. These two properties are generally related. This is why they are often represented together to characterize the returned estimate of the robot position. In this work, our objective is to simultaneously optimize these two properties.To take advantage of the different existing techniques for an optimal estimation of the robot position, we propose a top-down approach based on the exploitation of environmental map environmental map defined in an absolute reference frame. This approach uses an a priori selection of the best informative measurements among all possible measurement sources. The selection is made according to a given objective (of accuracy and confidence), the current robot state and the data informational contribution.As the data is noisy, imprecise and may also be ambiguous and unreliable, the consideration of these limitations is necessary in order to provide the most accurate and reliable robot position estimation. For this, spatial focusing and a Bayesian network are used to reduce the risk of misdetection. However, in case of ambiguities, these misdetections may occur. A backwards process has been developed in order to react efficiently to these situations and thus achieve the set objectives.The main contributions of this work are on one side the development of a high-level generic and modular multi sensory localization architecture with a top-down process. We used a concept of perceptual triplet which is the set of landmark, sensor and detector to designate each perceptual module. At each time, a prediction and an update steps are performed. For the update step, the system selects the most relevant triplet (in terms of accuracy and confidence) according to an informational criterion. In order to ensure an accurate and relaible localization, our algorithm has been written in such a way that ambiguity aspects can be managed.On the other side, the developed algorithm allows to locate a robot in an environment map. For this purpose, the possibility of bad detections due to ambiguity phenomena has been taken into account in the backward process. Indeed, this process allows on the one hand to correct a bad detection and on the other hand to improve the returned position estimation to meet a desired objective
Books on the topic "Sensors fusion for localisation"
Hucks, John A. Fusion of ground-based sensors for optimal tracking of military targets. Monterey, Calif: Naval Postgraduate School, 1989.
Buser, Rudolph G., Frank B. Warren, Society of Photo-optical Instrumentation Engineers, and University of Alabama in Huntsville Center for Applied Optics, eds. Infrared sensors and sensor fusion: 19-21 May, 1987, Orlando, Florida. Bellingham, Wash., USA: SPIE--the International Society for Optical Engineering, 1987.
Multi-sensor data fusion with MATLAB. Boca Raton: Taylor & Francis, 2010.
Loffeld, Otmar, Centre national de la recherche scientifique (France), and Society of Photo-optical Instrumentation Engineers, eds. Vision systems--sensors, sensor systems, and components: 10-12 June 1996, Besançon, France. Bellingham, Wash: SPIE--the International Society for Optical Engineering, 1996.
Networked multisensor decision and estimation fusion: Based on advanced mathematical methods. Boca Raton, FL: Taylor & Francis, 2012.
Loffeld, Otmar, Society of Photo-optical Instrumentation Engineers, European Optical Society, and Commission of the European Communities Directorate-General for Science, Research, and Development, eds. Sensors, sensor systems, and sensor data processing: June 16-17 1997, Munich, FRG. Bellingham, Wash., USA: SPIE, 1997.
Den vollen Inhalt der Quelle findenIntelligent Sensors, Sensor Networks & Information Processing Conference (2nd 2005 Melbourne, Vic.). Proceedings of the 2005 Intelligent Sensors, Sensor Networks & Information Processing Conference: 5-8 December, 2005, Melbourne, Australia. [Piscataway, N.J.]: IEEE, 2006.
Den vollen Inhalt der Quelle findenIntelligent Sensors, Sensor Networks & Information Processing Conference (2nd 2005 Melbourne, Vic.). Proceedings of the 2005 Intelligent Sensors, Sensor Networks & Information Processing Conference: 5-8 December, 2005, Melbourne, Australia. [Piscataway, N.J.]: IEEE, 2006.
Den vollen Inhalt der Quelle findenIEEE/AESS Dayton Chapter Symposium (15th 1998 Fairborn, OH). Sensing the world: Analog sensors & systems across the spectrum : the 15th Annual AESS/IEEE Dayton Section Symposium, Fairborn, OH, 14-15 May 1998. Piscataway, N.J: Institute of Electrical and Electronics Engineers, 1998.
Den vollen Inhalt der Quelle findenGreen, Milford B. Mergers and acquisitions: Geographical and spatial perspectives. London: Routledge, 1990.
Book chapters on the topic "Sensors fusion for localisation"
Espinosa, Jose, Mihalis Tsiakkas, Dehao Wu, Simon Watson, Joaquin Carrasco, Peter R. Green, and Barry Lennox. "A Hybrid Underwater Acoustic and RF Localisation System for Enclosed Environments Using Sensor Fusion." In Towards Autonomous Robotic Systems, 369–80. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96728-8_31.
Mitchell, H. B. "Image Sensors." In Image Fusion, 9–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11216-4_2.
Kiaer, Jieun. "Fusion, localisation, and hybridity." In Delicious Words, 54–70. New York: Routledge, 2020. http://dx.doi.org/10.4324/9780429321801-4.
Majumder, Bansari Deb, and Joyanta Kumar Roy. "Multifunction Data Fusion." In Multifunctional Sensors, 49–54. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003350484-4.
Mitchell, H. B. "Sensors." In Data Fusion: Concepts and Ideas, 15–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27222-6_2.
Koch, Wolfgang. "Characterizing Objects and Sensors." In Tracking and Sensor Data Fusion, 31–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39271-9_2.
Subramanian, Rajesh. "Additional Sensors and Sensor Fusion." In Build Autonomous Mobile Robot from Scratch using ROS, 457–96. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9645-5_9.
Suciu, George, Andrei Scheianu, Cristina Mihaela Bălăceanu, Ioana Petre, Mihaela Dragu, Marius Vochin, and Alexandru Vulpe. "Sensors Fusion Approach Using UAVs and Body Sensors." In Advances in Intelligent Systems and Computing, 146–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77700-9_15.
Věchet, S., and J. Krejsa. "Sensors Data Fusion via Bayesian Network." In Recent Advances in Mechatronics, 221–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05022-0_38.
Wagner, Jakub, Paweł Mazurek, and Roman Z. Morawski. "Fusion of Data from Impulse-Radar Sensors and Depth Sensors." In Health Information Science, 205–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96009-4_7.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Sensors fusion for localisation"
Jarvis, R. A. „Autonomous Robot Localisation By Sensor Fusion“. In IEEE International Workshop on Emerging Technologies and Factory Automation,. IEEE, 1992. http://dx.doi.org/10.1109/etfa.1992.683295.
Der volle Inhalt der QuelleIzri, Sonia, und Eric Brassart. „Uncertainties quantification criteria for multi-sensors fusion: Application to vehicles localisation“. In Automation (MED 2008). IEEE, 2008. http://dx.doi.org/10.1109/med.2008.4602171.
Der volle Inhalt der QuelleRedzic, Milan, Conor Brennan und Noel E. O'Connor. „Dual-sensor fusion for indoor user localisation“. In the 19th ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2072298.2071948.
Der volle Inhalt der QuelleFranken, Dietrich. „An approximate maximum-likelihood estimator for localisation using bistatic measurements“. In 2018 Sensor Data Fusion: Trends, Solutions, Applications (SDF). IEEE, 2018. http://dx.doi.org/10.1109/sdf.2018.8547074.
Der volle Inhalt der QuelleAlvarado, Biel Piero, Fernando Matia und Ramon Galan. „Improving indoor robots localisation by fusing different sensors“. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593667.
Der volle Inhalt der QuelleRistic, Branko, Mark Morelande, Alfonso Farina und S. Dulman. „On Proximity-Based Range-Free Node Localisation in Wireless Sensor Networks“. In 2006 9th International Conference on Information Fusion. IEEE, 2006. http://dx.doi.org/10.1109/icif.2006.301734.
Der volle Inhalt der QuelleCorral-Plaza, David, Olaf Reich, Erik Hübner, Matthias Wagner und Inmaculada Medina-Bulo. „A SENSOR FUSION SYSTEM IDENTIFYING COMPLEX EVENTS FOR LOCALISATION ESTIMATION“. In International Conference on Applied Computing 2019. IADIS Press, 2019. http://dx.doi.org/10.33965/ac2019_201912c033.
Der volle Inhalt der QuelleKhoder, Makkawi, Ait-Tmazirte Nourdine, El Badaoui El Najjar Maan und Moubayed Nazih. „Fault Tolerant multi-sensor Data Fusion for vehicle localisation using Maximum Correntropy Unscented Information Filter and α-Rényi Divergence“. In 2020 IEEE 23rd International Conference on Information Fusion (FUSION). IEEE, 2020. http://dx.doi.org/10.23919/fusion45008.2020.9190407.
Der volle Inhalt der QuellePagnottelli, S., S. Taraglio, P. Valigi und A. Zanela. „Visual and laser sensory data fusion for outdoor robot localisation and navigation“. In 2005 12th International Conference on Advanced Robotics. IEEE, 2005. http://dx.doi.org/10.1109/icar.2005.1507409.
Der volle Inhalt der QuelleUney, Murat, Bernard Mulgrew und Daniel Clark. „Cooperative sensor localisation in distributed fusion networks by exploiting non-cooperative targets“. In 2014 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2014. http://dx.doi.org/10.1109/ssp.2014.6884689.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Sensors fusion for localisation"
Cadwallader, L. C. Reliability estimates for selected sensors in fusion applications. Office of Scientific and Technical Information (OSTI), September 1996. http://dx.doi.org/10.2172/425367.
Lane, Brandon, Lars Jacquemetton, Martin Piltch, and Darren Beckett. Thermal calibration of commercial melt pool monitoring sensors on a laser powder bed fusion system. Gaithersburg, MD: National Institute of Standards and Technology, July 2020. http://dx.doi.org/10.6028/nist.ams.100-35.
Beiker, Sven. Next-generation Sensors for Automated Road Vehicles. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, February 2023. http://dx.doi.org/10.4271/epr2023003.
Mobley, Curtis D. Determining the Scattering Properties of Vertically-Structured Nepheloid Layers from the Fusion of Active and Passive Optical Sensors. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada630921.
Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.
Kulhandjian, Hovannes. AI-based Pedestrian Detection and Avoidance at Night using an IR Camera, Radar, and a Video Camera. Mineta Transportation Institute, November 2022. http://dx.doi.org/10.31979/mti.2022.2127.