
Dissertations on the topic "Indoor robotics"


Consult the top 50 dissertations for your research on the topic "Indoor robotics".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.

1

Vojta, Jakub. „Bezpečnost provozu mobilních robotů v indoor prostředí“. Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232641.

Abstract:
During cooperation with the Bender Robotics company, a need emerged for an operational safety assessment of an autonomous mobile robot (AMR). Operational safety evaluation is a step towards mass production of the studied robot; bringing a product to market requires a string of actions, and safety assessment is one of them. Legal requirements, best practice given by standards, the FMEA method, an experiment and the RIPRAN method were used for risk identification and severity rating. Threats, possible scenarios and risks are systematically discussed across all areas of operation of the robot, from design and construction to control software. All the steps are described in logical order, starting with an information survey, continuing with a series of analyses and ending with suggestions for increasing the operational safety of autonomous mobile robots.
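The FMEA method mentioned in the abstract ranks failure modes by a risk priority number, RPN = severity × occurrence × detection. A minimal sketch of that ranking; the failure modes and scores below are invented for illustration, not taken from the thesis:

```python
# FMEA-style ranking: score each failure mode 1-10 for severity (S),
# occurrence (O) and detectability (D); the risk priority number
# RPN = S * O * D orders the mitigation work.
def rpn(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical failure modes of a mobile robot (scores are invented):
failure_modes = {
    "emergency stop fails":      (10, 2, 3),
    "lidar blinded by sunlight": (7, 4, 4),
    "wheel encoder drift":       (3, 6, 2),
}
ranked = sorted(failure_modes, key=lambda m: rpn(*failure_modes[m]),
                reverse=True)
```

Note that a moderate-severity but frequent, hard-to-detect failure can outrank a catastrophic but well-detected one, which is exactly what the RPN product is designed to expose.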
2

Pettersson, Rasmus. „Continuous localization in indoor shifting environment“. Thesis, Uppsala universitet, Fasta tillståndets elektronik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-326270.

Abstract:
In this Master's thesis, different approaches to mobile localization within construction environments are investigated. First, an overview of sensors commonly used for localization is presented together with different map representations, and a system consisting of a laser scanner and wheel encoders is chosen. The hardware is prepared for the open-source ROS environment and three different algorithms for localization are tested. Two Simultaneous Localization and Mapping algorithms, Gmapping and HectorSLAM, are compared. The best map is then used by a Monte Carlo localization algorithm, AMCL, for autonomous navigation. It is found that HectorSLAM produces the most accurate map, given that the grid refinement level is fine enough for the environment. It is also found that the maximum Kullback-Leibler distance used in AMCL needs to be calibrated in order to achieve sufficient navigation performance.
3

Perko, Eric Michael. „Precision Navigation for Indoor Mobile Robots“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1345513785.

4

Yang, Yin. „Nonlinear control and state estimation of holonomic indoor airship“. Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106573.

Abstract:
Three full-state optimal controllers are proposed to fulfill the requirements of flying an indoor holonomic airship in real time, namely hovering control, set-point control and continuous reference tracking. In the hovering control design, the airship is assumed to be a quasi-stationary plant, and an infinite-horizon linear quadratic regulator (LQR) operating in a gain-scheduling manner is employed. Meanwhile, a controller based on the state-dependent Riccati equation (SDRE) and ad hoc feedforward compensation is synthesized to tackle the set-point control problem. Lastly, a continuous tracker is dedicated to rejecting all disturbances along any given reference trajectory. With reasonable computation cost, the proposed controllers show significant advantages over the PD controller in both simulation and real flights. A state estimator designed with the unscented Kalman filter is also implemented in this work. Its purpose is to track the airship state for the feedback loop and other navigation tasks by fusing information from on-board (an inertial measurement unit and a laser range finder) and/or off-board (an infrared-based motion capture system) sensors. A loosely coupled sensor fusion scheme is employed and validated in experiments.
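The infinite-horizon LQR behind the hovering controller can be illustrated on a scalar toy system; the airship itself is multivariable, so this only sketches the Riccati fixed-point iteration that produces an LQR gain (the system and cost values are invented):

```python
# Scalar infinite-horizon discrete-time LQR sketch: system
# x[k+1] = a*x[k] + b*u[k], cost sum(q*x^2 + r*u^2).
def lqr_gain(a, b, q, r, tol=1e-12, max_iter=100_000):
    p = q  # value-function weight, iterated to the Riccati fixed point
    for _ in range(max_iter):
        p_next = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    return (b * p * a) / (r + b * p * b)   # state feedback u = -k*x

a, b = 1.1, 0.5            # open loop is unstable (|a| > 1)
k = lqr_gain(a, b, q=1.0, r=0.1)
closed_loop = a - b * k    # LQR theory guarantees |a - b*k| < 1
```

For the stabilizable scalar case this value iteration converges to the stabilizing solution of the discrete algebraic Riccati equation; the multivariable version replaces the divisions with matrix inverses.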
5

Gandhi, Anall Vijaykumar. „An Accuracy Improvement Method for Cricket Indoor Location System“. Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1369316496.

6

Valdmanis, Mikelis. „Localization and navigation of a holonomic indoor airship using on-board sensors“. Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97204.

Abstract:
Two approaches to navigation and localization of a holonomic, unmanned, indoor airship capable of 6-degree-of-freedom (DOF) motion using on-board sensors are presented. First, obstacle avoidance and primitive navigation were attempted using a light-weight video camera. Two optical flow algorithms were investigated. Optical flow estimates the motion of the environment relative to the camera by computing temporal and spatial fluctuations of image brightness. Inferences on the nature of the visible environment, such as obstacles, would then be made based on the optical flow field. Results showed that neither algorithm would be adequate for navigation of the airship. Localization of the airship in a restricted state space (three translational DOF and yaw rotation) and a known environment was achieved using an advanced Monte Carlo Localization (MCL) algorithm and a laser range scanner. MCL is a probabilistic algorithm that generates many random estimates, called particles, of potential airship states. During each operational time step, each particle's location is adjusted based on airship motion estimates, and particles are assigned weights by evaluating simulated sensor measurements for the particles' poses against the actual measurements. A new set of particles is drawn from the previous set with probability proportional to the weights. After several time steps the set converges to the true position of the airship. The MCL algorithm achieves global localization, position tracking, and recovery from the "kidnapped robot" problem. Results from off-line processing of airship flight data using MCL are presented and the possibilities for on-line implementation are discussed.
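The weight-and-resample cycle of MCL described in the abstract can be sketched in a one-dimensional toy corridor; the landmark map, sensor model and noise level below are invented for illustration and are much simpler than the thesis's laser-scanner setup:

```python
import math
import random

# Minimal 1-D Monte Carlo Localization sketch (invented toy setup).
random.seed(0)
DOOR = 2.0        # position of the single known landmark (assumed map)
CORRIDOR = 10.0   # corridor length in metres
SIGMA = 0.3       # assumed range-sensor noise (std. dev.)

def measure(x):
    """Distance from position x to the nearest landmark."""
    return abs(x - DOOR)

def mcl_step(particles, z):
    # Weight each particle by the Gaussian likelihood of measurement z,
    # then resample with probability proportional to the weights.
    w = [math.exp(-(measure(p) - z) ** 2 / (2 * SIGMA ** 2))
         for p in particles]
    return random.choices(particles, weights=w, k=len(particles))

particles = [random.uniform(0.0, CORRIDOR) for _ in range(500)]
true_x = 2.3
for _ in range(3):
    particles = mcl_step(particles, measure(true_x))
estimate = sum(particles) / len(particles)
```

With a single symmetric landmark the particle cloud stays bimodal (either side of the door), which is exactly why a full MCL implementation also needs a motion update between measurements to disambiguate.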
7

Szenher, Matthew D. „Visual homing in dynamic indoor environments“. Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3193.

Abstract:
Our dissertation concerns robotic navigation in dynamic indoor environments using image-based visual homing. Image-based visual homing infers the direction to a goal location S from the navigator's current location C using the similarity between panoramic images I_S and I_C captured at those locations. There are several ways to compute this similarity. One of the contributions of our dissertation is to identify a robust image similarity measure, mutual image information, to use in dynamic indoor environments. We crafted novel methods to speed the computation of mutual image information with both parallel and serial processors and demonstrated that these time-savers had little negative effect on homing success. Image-based visual homing requires a homing agent to move so as to optimise the mutual image information signal. As the mutual information signal is corrupted by sensor noise, we turned to the stochastic optimisation literature for appropriate optimisation algorithms. We tested a number of these algorithms in both simulated and real dynamic laboratory environments and found that gradient descent (with gradients computed by one-sided differences) works best.
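Mutual image information, the similarity measure the abstract identifies, can be computed from the joint histogram of co-located pixel intensities. A toy sketch, where the "images" are short flattened intensity lists:

```python
import math
from collections import Counter

# Mutual information between two equally sized "images":
#   I(A;B) = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) )
def mutual_information(img_a, img_b):
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa, pb = Counter(img_a), Counter(img_b)
    mi = 0.0
    for (a, b), count in joint.items():
        # p(a,b) / (p(a)p(b)) simplifies to count*n / (pa[a]*pb[b])
        mi += (count / n) * math.log(count * n / (pa[a] * pb[b]))
    return mi  # in nats; equals the entropy of img_a when img_b == img_a

ref = [0, 0, 1, 2, 2, 1, 0, 2]                  # toy flattened image
self_similarity = mutual_information(ref, ref)
flat = mutual_information(ref, [0] * len(ref))  # constant image: MI = 0
```

Unlike pixel-difference measures, mutual information peaks when the two images match even under an intensity remapping, which is what makes it robust to illumination changes in dynamic environments.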
8

Fernandez Labrador, Clara. „Indoor Scene Understanding using Non-Conventional Cameras“. Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCK037.

Abstract:
Humans understand environments effortlessly, under a wide variety of conditions, by virtue of visual perception. Computer vision for similar visual understanding is highly desirable, so that machines can perform complex tasks by interacting with the real world to assist or entertain humans. In this regard, we are particularly interested in indoor environments, where humans spend nearly all their lifetime. This thesis specifically addresses the problems that arise during the quest for a hierarchical visual understanding of indoor scenes. On the sensing side, we propose to use non-conventional cameras, namely 360º imaging and 3D sensors. On the understanding side, we aim at three key aspects: room layout estimation; object detection, localization and segmentation; and object category shape modeling, for which novel and efficient solutions are provided. The focus of this thesis is on the following underlying challenges. First, the estimation of the 3D room layout from a single 360º image is investigated, which is used for the highest level of scene modelling and understanding. We exploit the Manhattan World assumption and deep learning techniques to propose models that handle invisible parts of the room in the image, generalizing to more complex layouts. At the same time, new methods to work with 360º images are proposed, highlighting a special convolution that compensates for the equirectangular image distortions. Second, considering the importance of context for scene understanding, we study the problem of object localization and segmentation, adapting the problem to leverage 360º images. We also exploit layout-object interactions to lift detected 2D objects into the 3D room model. The final line of work of this thesis focuses on 3D object shape analysis. We use an explicit modelling of non-rigidity and a high-level notion of object symmetry to learn, in an unsupervised manner, 3D keypoints that are order-wise correspondent as well as geometrically and semantically consistent across objects in a category. Our models advance the state of the art on the aforementioned tasks, each evaluated on the respective reference benchmarks.
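The special convolution mentioned above is motivated by the geometry of equirectangular images: each pixel maps to a direction on the sphere, so a fixed square kernel covers a varying solid angle toward the poles. A sketch of that pixel-to-ray mapping; the axis conventions are assumptions, not the thesis's:

```python
import math

# Map an equirectangular pixel (u, v) in a W x H panorama to a unit
# direction vector. Assumed conventions: u=0 at the left edge is
# longitude -pi, v=0 at the top is latitude +pi/2, z points forward.
def pixel_to_ray(u, v, width, height):
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward

ray = pixel_to_ray(512, 256, 1024, 512)     # image centre
# Near the poles, neighbouring pixels map to nearly identical
# directions, which is why an ordinary square kernel is distorted there.
```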
9

Xiao, Zhuoling. „Robust indoor positioning with lifelong learning“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:218283f1-e28a-4ad0-9637-e2acd67ec394.

Abstract:
Indoor tracking and navigation is a fundamental need for pervasive and context-aware applications. However, no practical and reliable indoor positioning solution is available at present. The major challenge for a practical solution lies in the fact that only existing devices and infrastructure can be utilized to achieve high positioning accuracy. This thesis presents a robust indoor positioning system with a lifelong learning ability. The proposed solution is low-cost, accurate, robust, and scalable. For the sake of practicality, the system only requires the floor plan, existing devices (e.g. phones and tablets) and infrastructure such as WiFi/BLE access points. The system has four closely correlated components: non-line-of-sight identification and mitigation (NIMIT), robust pedestrian dead reckoning (R-PDR), lightweight map matching (MapCraft), and lifelong learning. NIMIT projects the received signal strength (RSS) from WiFi/BLE to locations. The R-PDR component converts data from the inertial measurement unit (IMU) sensors ubiquitous in mobile devices and wearables into user trajectories. MapCraft then fuses the trajectories estimated by R-PDR and the coarse location information from NIMIT with the floor plan and provides accurate location estimates. The lifelong learning component learns the various parameters used in the other three components in an unsupervised manner, continuously improving the positioning accuracy of the system. Extensive real-world experiments at multiple sites show how the proposed system outperforms state-of-the-art approaches, demonstrating excellent sub-meter positioning accuracy and accurate reconstruction of tortuous trajectories with zero training effort. As proof of its robustness, we also demonstrate that it accurately tracks position regardless of users, devices, attachments, and environments. We believe that such an accurate and robust approach will enable always-on background localization, enabling a new era of location-aware applications to be developed.
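The dead-reckoning step at the core of an R-PDR-style component can be sketched as accumulating per-step heading and step-length estimates; here both are assumed to have already been extracted from the IMU data, which is the hard part the thesis actually addresses:

```python
import math

# Dead-reckon a walking trajectory from per-step (heading, step length)
# estimates; heading is in radians with 0 along the +x axis.
def dead_reckon(start, steps):
    x, y = start
    track = [(x, y)]
    for heading, length in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Hypothetical step detections: two steps east, then two steps north.
path = dead_reckon((0.0, 0.0), [(0.0, 0.7), (0.0, 0.7),
                                (math.pi / 2, 0.7), (math.pi / 2, 0.7)])
```

Because each step's error accumulates into all later positions, raw PDR drifts; that drift is what the map-matching stage (MapCraft in the abstract) is there to correct.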
10

Selin, Magnus. „Efficient Autonomous Exploration Planning of Large-Scale 3D-Environments : A tool for autonomous 3D exploration indoor“. Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163329.

Abstract:
Exploration is of interest for autonomous mapping and rescue applications using unmanned vehicles. The objective is to explore all initially unmapped space without any prior information. We present a system that can perform fast and efficient exploration of large-scale arbitrary 3D environments. We combine frontier exploration planning (FEP) as a global planning strategy with receding horizon planning (RH-NBVP) for local planning. This leads to plans that incorporate information gain along the way but do not get stuck in already explored regions. Furthermore, we make the estimation of potential information gain more efficient through sparse ray-tracing and caching of already estimated gains. The work carried out in this thesis has been published as a paper in Robotics and Automation Letters and presented at the International Conference on Robotics and Automation in Montreal in 2019.
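Frontier exploration planning rests on detecting frontier cells, i.e. free cells bordering unknown space, in an occupancy grid. A minimal 2-D sketch with an invented grid (the thesis works on 3-D maps):

```python
# Occupancy-grid frontier detection: 0 = free, 1 = occupied,
# -1 = unknown. A frontier cell is a free cell with at least one
# unknown 4-neighbour; frontiers are where exploration should go next.
def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    result = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    result.add((r, c))
                    break
    return result

grid = [[0,  0, -1],
        [0,  1, -1],
        [0,  0,  0]]
```

A global planner then sends the robot toward clusters of such cells, while a local planner (RH-NBVP in the abstract) optimizes the information gained on the way there.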
11

Althaus, Philipp. „Indoor Navigation for Mobile Robots : Control and Representations“. Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3644.

Abstract:

This thesis deals with various aspects of indoor navigation for mobile robots. For a system that moves around in a household or office environment, two major problems must be tackled. First, an appropriate control scheme has to be designed in order to navigate the platform. Second, the form of representations of the environment must be chosen.

Behaviour based approaches have become the dominant methodologies for designing control schemes for robot navigation. One of them is the dynamical systems approach, which is based on the mathematical theory of nonlinear dynamics. It provides a sound theoretical framework for both behaviour design and behaviour coordination. In the work presented in this thesis, the approach has been used for the first time to construct a navigation system for realistic tasks in large-scale real-world environments. In particular, the coordination scheme was exploited in order to combine continuous sensory signals and discrete events for decision making processes. In addition, this coordination framework assures a continuous control signal at all times and permits the robot to deal with unexpected events.

In order to act in the real world, the control system makes use of representations of the environment. On the one hand, local geometrical representations parameterise the behaviours. On the other hand, context information and a predefined world model enable the coordination scheme to switch between subtasks. These representations constitute symbols, on the basis of which the system makes decisions. These symbols must be anchored in the real world, requiring the capability of relating them to sensory data. A general framework for these anchoring processes in hybrid deliberative architectures is proposed. A distinction of anchoring on two different levels of abstraction reduces the complexity of the problem significantly.

A topological map was chosen as a world model. Through the advanced behaviour coordination system and a proper choice of representations, the complexity of this map can be kept at a minimum. This allows the development of simple algorithms for automatic map acquisition. When the robot is guided through the environment, it creates such a map of the area online. The resulting map is precise enough for subsequent use in navigation.

In addition, initial studies on navigation in human-robot interaction tasks are presented. These kinds of tasks pose different constraints on a robotic system than, for example, delivery missions. It is shown that the methods developed in this thesis can easily be applied to interactive navigation. Results show a personal robot maintaining formations with a group of persons during social interaction.

Keywords: mobile robots, robot navigation, indoor navigation, behaviour based robotics, hybrid deliberative systems, dynamical systems approach, topological maps, symbol anchoring, autonomous mapping, human-robot interaction
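In the dynamical systems approach, a behaviour such as "head toward the target" is expressed as an attractor of a differential equation on the robot's heading. A minimal sketch; the gain, step size and step count are illustrative choices, not values from the thesis:

```python
import math

# "Go to target" behaviour: the heading phi obeys
#   phi' = -rate * sin(phi - target),
# which has a stable attractor at phi == target (and a repellor on the
# opposite side of the circle). Integrated here with explicit Euler.
def steer_to_target(phi, target, rate=2.0, dt=0.05, steps=200):
    for _ in range(steps):
        phi += -rate * math.sin(phi - target) * dt
    return phi

final = steer_to_target(phi=2.5, target=0.3)
```

Behaviour coordination in this framework amounts to summing or switching such vector-field contributions (e.g. adding a repellor at each obstacle direction), which keeps the commanded heading rate continuous at all times.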

12

Karlsson, Ahlexander, and Robert Skoglund. „Video-rate environment recognition through depth image plane segmentation for indoor service robot applications on an embedded system“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35595.

Abstract:
As personal service robots are expected to gain widespread use in the near future, these robots need to function properly in a large number of different environments. To give robots such an understanding of their surroundings, this thesis focuses on implementing a depth-image-based planar segmentation method, based on the detection of 3-D edges, at video-rate speed on an embedded system. The use of plane segmentation as a means of understanding an unknown environment was chosen after a thorough literature review indicated that this was the most promising approach capable of reaching video-rate speeds. The camera used to capture depth images is a Kinect for Xbox One, which makes video-rate speed 30 fps, as it is suitable for use in indoor environments, and the embedded system is a Jetson TX1, which is capable of running GPU-accelerated algorithms. The results show that the implemented method is capable of segmenting depth images at video-rate speed at half the original resolution. However, full-scale depth images are only segmented at 10-12 fps depending on the environment, which is not a satisfactory result.
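The 3-D edges such a segmentation builds on include depth discontinuities ("jump edges"), where neighbouring pixels belong to different surfaces. A toy sketch on a hand-made depth map; the threshold value is an assumption:

```python
# Jump-edge detection in a depth image: mark pixel pairs whose depths
# differ by more than a threshold (here 0.1 m, an assumed value).
# Regions not cut by such edges are candidates for single planes.
def jump_edges(depth, threshold=0.1):
    rows, cols = len(depth), len(depth[0])
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for nr, nc in ((r + 1, c), (r, c + 1)):  # down, right
                if nr < rows and nc < cols and \
                        abs(depth[r][c] - depth[nr][nc]) > threshold:
                    edges.add((r, c))
                    edges.add((nr, nc))
    return edges

depth = [[1.0, 1.0, 2.0],   # toy depth map in metres: a surface at 1 m
         [1.0, 1.0, 2.0]]   # next to a surface at 2 m
```

A full pipeline also needs "crease" edges (surface-normal discontinuities) to separate planes that meet without a depth jump, such as a wall meeting the floor.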
13

Falomir, Llansola Zoe. „Qualitative Distances and Qualitative Description of Images for Indoor Scene Description and Recognition in Robotics“. Doctoral thesis, Universitat Jaume I, 2011. http://hdl.handle.net/10803/52897.

Abstract:

The automatic extraction of knowledge from the world by a robotic system, the way human beings interpret their environment through their senses, is still an unsolved task in Artificial Intelligence. A robotic agent is in contact with the world through its sensors and other electronic components, which obtain and process mainly numerical information. Sonar, infrared and laser sensors obtain distance information. Webcams obtain digital images that are represented internally as matrices of red, green and blue (RGB) colour coordinate values. All these numerical values obtained from the environment need later interpretation in order to provide the knowledge the robotic agent requires to carry out a task.

Similarly, light wavelengths with specific amplitudes are captured by the cone cells of human eyes, also producing stimuli without intrinsic meaning. However, the information that human beings can describe and remember from what they see is expressed using words, that is, qualitatively.

The exact process carried out after our eyes perceive light wavelengths and our brain interprets them is largely unknown. However, a real fact in human cognition is that people go beyond the purely perceptual experience to classify things as members of categories and attach linguistic labels to them.

As the information provided by all the electronic components incorporated in a robotic agent is numerical, the approaches that first appeared in the literature to interpret this information followed a mathematical trend. This thesis addresses the problem from the other side: its main aim is to process these numerical data in order to obtain qualitative information, as human beings do.

The research work done in this thesis tries to narrow the gap between the acquisition of low-level information by robot sensors and the need to obtain high-level or qualitative information for enhancing human-machine communication and for applying logical reasoning processes based on concepts. Moreover, qualitative concepts can be given meaning by relating them to others. They can be used for reasoning by applying the qualitative models developed over the last twenty years for describing and interpreting metrical and mathematical concepts such as orientation, distance, velocity, acceleration, and so on. They can also be understood by human users, both written and read aloud.

The first contributions presented are the definition of a method for obtaining fuzzy distance patterns (which include qualitative distances such as 'near', 'far', 'very far' and so on) from the data obtained by any kind of distance sensor incorporated in a mobile robot, and the definition of a factor to measure the dissimilarity between those fuzzy patterns. Both have been applied to the integration of the distances obtained by the sonar and laser distance sensors incorporated in a Pioneer 2 dx mobile robot and, as a result, special obstacles such as a 'glass window' or a 'mirror' have been detected. Moreover, the fuzzy distance patterns provided have also been defuzzified in order to obtain a smooth robot speed, and used to classify orientation reference systems into 'open' (defining an open space to be explored) or 'closed'.

The second contribution presented is the definition of a model for qualitative image description (QID), applying newly defined models for qualitative shape and colour description, the topology model by Egenhofer and Al-Taha [1992] and the orientation models by Hernández [1991] and Freksa [1992]. This model can qualitatively describe any kind of digital image and is independent of the image segmentation method used. The QID model has been tested in two scenarios in robotics: (i) the description of digital images captured by the camera of a Pioneer 2 dx mobile robot and (ii) the description of digital images of tile mosaics taken by an industrial camera located on a platform used by a robot arm to assemble tile mosaics.

In order to provide a formal and explicit meaning for the qualitative descriptions of the images generated, a Description Logic (DL) based ontology has been designed and is presented as the third contribution. Our approach can automatically process any random image and obtain a set of DL axioms that describe it visually and spatially. Objects included in the images are classified according to the ontology schema using a DL reasoner. Tests have been carried out using digital images captured by a webcam incorporated in a Pioneer 2 dx mobile robot. The images taken correspond to the corridors of a building at University Jaume I, and objects within them have been classified into 'walls', 'floor', 'office doors' and 'fire extinguishers' under different illumination conditions and from different observer viewpoints.

The final contribution is the definition of similarity measures between qualitative descriptions of shape, colour, topology and orientation, and the integration of those measures into a general similarity measure between two qualitative descriptions of images. These similarity measures have been applied to: (i) extract objects with similar shapes from the MPEG7 CE Shape-1 library; (ii) assemble tile mosaics by qualitative shape and colour similarity matching; (iii) compare images of tile compositions; and (iv) compare images of natural landmarks in a mobile robot world for their recognition.

The contributions made in this thesis are only a small step forward in the direction of enhancing robot knowledge acquisition from the world. The thesis is also written with the aim of inspiring others in their research, so that bigger contributions can be achieved in the future, improving the quality of life of our society.
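The fuzzy distance patterns of the first contribution can be illustrated with trapezoidal membership functions: each crisp sensor reading gets a degree of membership in each qualitative label. The labels follow the abstract, but the break points below are invented:

```python
# Trapezoidal membership: 0 below a, ramps up on [a, b], 1 on [b, c],
# ramps down on [c, d], 0 above d.
def trapezoid(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Invented break points (metres) for the qualitative distance labels:
LABELS = {
    "near":     (-1.0, 0.0, 0.5, 1.5),
    "far":      (0.5, 1.5, 3.0, 4.0),
    "very_far": (3.0, 4.0, 10.0, 11.0),
}

def fuzzy_distance(x):
    """Fuzzy pattern of one reading: membership degree per label."""
    return {label: trapezoid(x, *pts) for label, pts in LABELS.items()}

pattern = fuzzy_distance(1.0)   # partly 'near', partly 'far'
```

Comparing two such patterns sensor-by-sensor (e.g. sonar vs. laser) is what lets the thesis flag readings that disagree qualitatively, such as the glass windows and mirrors that fool one sensor but not the other.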

14

Vallivaara, I. (Ilari). „Simultaneous localization and mapping using the indoor magnetic field“. Doctoral thesis, Oulun yliopisto, 2018. http://urn.fi/urn:isbn:9789526217741.

The full text of the source
Annotation:
Abstract: The Earth’s magnetic field (MF) has been used for navigation for centuries. Man-made metallic structures, such as steel reinforcements in buildings, cause local distortions to the Earth’s magnetic field. Until the last decade, these distortions were mostly considered a source of error in indoor localization, as they interfere with the compass direction. However, as the distortions are temporally stable and spatially distinctive, they provide a unique magnetic landscape that can be used for constructing a map for indoor localization purposes, as noted by recent research in the field. Most approaches rely on manually collecting the magnetic field map, a process that can be both tedious and error-prone. In this thesis, the map is collected by a robotic platform with minimal sensor equipment. It is shown that a mere magnetometer along with odometric information suffices to construct the map via a simultaneous localization and mapping (SLAM) procedure that builds on the Rao-Blackwellized particle filter as a means of recursive Bayesian estimation. Furthermore, the maps are shown to achieve decimeter-level localization accuracy, which, combined with the extremely low-cost hardware requirements, makes the presented methods very attractive for domestic robots. In addition, general auxiliary methods for effective sampling and dealing with uncertainties are presented. Although the methods presented here are devised in a mobile robotics context, most of them are also applicable to mobile device-based localization with little modification. Magnetic field localization offers a promising alternative to WiFi-based methods for achieving GPS-level localization indoors, motivated by the rapidly growing indoor location market.
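The SLAM procedure summarized above can be caricatured with a minimal particle-filter sketch in which each particle carries both a pose hypothesis and its own magnetic map. This is an illustrative 1-D toy under assumed noise parameters, not the thesis's implementation (which uses a full Rao-Blackwellized formulation).

```python
import math
import random

def rbpf_step(particles, odom, reading, motion_noise=0.05, sensor_noise=0.2):
    """One Rao-Blackwellized-style update: sample poses from the motion model,
    weight each particle by the magnetometer reading's likelihood under that
    particle's own map, update the map, then resample."""
    weights = []
    for p in particles:
        p["pose"] += odom + random.gauss(0.0, motion_noise)  # motion model
        cell = round(p["pose"])                              # map cell index
        mean, n = p["map"].get(cell, (reading, 0))
        # Likelihood of the reading given this particle's map estimate.
        weights.append(math.exp(-0.5 * ((reading - mean) / sensor_noise) ** 2))
        # Running-mean map update for the visited cell.
        p["map"][cell] = ((mean * n + reading) / (n + 1), n + 1)
    # Multinomial resampling proportional to the weights.
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [{"pose": c["pose"], "map": dict(c["map"])} for c in chosen]
```

Particles whose private map explains the magnetometer readings survive resampling, so pose and map estimates improve jointly, which is the core idea that lets a magnetometer plus odometry suffice.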
APA, Harvard, Vancouver, ISO, and other citation styles
15

Dichtl, Johann. „On 2D SLAM for Large Indoor Spaces - A Polygon-Based Solution“. Thesis, Ecole nationale supérieure Mines-Télécom Lille Douai, 2019. http://www.theses.fr/2019MTLD0006.

The full text of the source
Annotation:
Indoor SLAM and exploration is an important topic in robotics. Most solutions today work with a 2D grid representation as the map model, both for the internal data format and for the output of the algorithm. While this is convenient in several ways, it also brings its own limitations, in particular because of the memory requirements of this map format. In this thesis we introduce PolyMap, a polygon-based 2D vector map format aimed at indoor mapping, and PolySLAM, a SLAM algorithm that produces PolyMaps.
APA, Harvard, Vancouver, ISO, and other citation styles
16

Rasines, Suárez Javier. „Gaussian process-assisted frontier exploration and indoor radio source localization for mobile robots“. Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236062.

The full text of the source
Annotation:
Autonomous localization of a radio source is addressed, in the context of autonomous charging for drones in indoor environments. A radio beacon is the only input used by the robot to navigate to an unknown charging station in an unknown area. Previously proposed algorithms used frontier-based exploration and the measured Received Signal Strength (RSS) to compute the direction to the source. The use of Gaussian processes is studied to model the RSS distribution and generate an estimate of its gradient. This gradient was also incorporated into a frontier exploration algorithm and compared with the previously proposed algorithm. It was found that the usefulness of the Gaussian process model depended on the distribution of the RSS samples. If the robot had no prior samples of the RSS, the gradient-assisted solution performed better; if instead the robot had some prior knowledge of the RSS distribution (for example, from having performed another task in some region of the map), the Gaussian process model yielded better performance.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Persson, Lucas, und Sebastian Markström. „Indoor localization of hand-held Shopping Scanners“. Thesis, KTH, Data- och elektroteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208931.

The full text of the source
Annotation:
This thesis investigates applicable indoor navigation systems for the next generation of hand-held shopping scanners, on behalf of the company Virtual Stores. It reviews applicable indoor localization methods and ways to combine and evaluate the received localization data in order to provide accurate navigation without introducing any additional worn equipment for a potential user. Prototype navigation systems were proposed, developed and evaluated using a combination of carefully placed radio transmitters, providing radio-based localization via Bluetooth or Ultra-Wideband (UWB), and inertial sensors combined with a particle filter. The Bluetooth solution was deemed incapable of providing accurate localization, while the prototype using a combination of UWB and inertial sensors proved to be a promising solution, with below 1 m average error under optimal conditions and 2.0 m average localization error in a more realistic environment. However, the system requires the surveyed area to provide 3 or more UWB transmitters in the line of sight of the user's UWB receiver at every location and orientation in order to provide accurate localization. The prototype also needs to be scaled up to serve more than one radio receiver at a time before being introduced to the fast-moving consumer goods market.
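The UWB ranging step such a prototype relies on can be illustrated with a standard linearised multilateration fix from three or more anchors. This is a generic textbook sketch, not the thesis's pipeline, which additionally fuses inertial data in a particle filter.

```python
import numpy as np

def uwb_position(anchors, ranges):
    """Least-squares 2-D position fix from >= 3 UWB anchor ranges.
    Subtracting the first range equation from the others removes the
    quadratic term in the unknown position, leaving a linear system."""
    anchors = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    a0, r0 = anchors[0], r[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - r[1:]**2) + (anchors[1:] ** 2).sum(1) - (a0**2).sum()
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With noisy ranges the least-squares solve spreads the error across anchors; in the prototype described above, such fixes would be one input among several to the particle filter rather than the final estimate.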
APA, Harvard, Vancouver, ISO, and other citation styles
18

Falomir, Llansola Zoe [Verfasser], Maria Teresa [Akademischer Betreuer] Escrig und Christian [Akademischer Betreuer] Freksa. „Qualitative Distances and Qualitative Description of Images for Indoor Scene Description and Recognition in Robotics / Zoe Falomir Llansola. Gutachter: Christian Freksa. Betreuer: Maria Teresa Escrig“. Bremen : Staats- und Universitätsbibliothek Bremen, 2011. http://d-nb.info/1072156628/34.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
19

Shakeel, Amlaan. „Service robot for the visually impaired: Providing navigational assistance using Deep Learning“. Miami University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=miami1500647716257366.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
20

Rudol, Piotr. „Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors“. Licentiate thesis, Linköpings universitet, UASTECH – Teknologier för autonoma obemannade flygande farkoster, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71295.

The full text of the source
Annotation:
The range of missions performed by Unmanned Aircraft Systems (UAS) has been steadily growing in the past decades thanks to continued development in several disciplines. The goal of increasing the autonomy of UASs is to widen the range of tasks which can be carried out without, or with minimal, external help. This thesis presents methods for increasing specific aspects of the autonomy of UASs operating both in outdoor and indoor environments where cameras are used as the primary sensors. First, a method for fusing color and thermal images for object detection, geolocation and tracking for UASs operating primarily outdoors is presented. Specifically, a method for building saliency maps where human body locations are marked as points of interest is described. Such maps can be used in emergency situations to increase the situational awareness of first responders or of a robotic system itself. Additionally, the same method is applied to the problem of vehicle tracking. A generated stream of geographical locations of tracked vehicles increases situational awareness by allowing for qualitative reasoning about, for example, vehicles overtaking, entering or leaving crossings. Second, two approaches to the UAS indoor localization problem in the absence of GPS-based positioning are presented. Both use cameras as the main sensors and enable autonomous indoor flight and navigation. The first approach takes advantage of cooperation with a ground robot to provide a UAS with its localization information. The second approach uses marker-based visual pose estimation where all computations are done onboard a small-scale aircraft, which additionally increases its autonomy by not relying on external computational power.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Duberg, Daniel. „Safe Navigation of a Tele-operated Unmanned Aerial Vehicle“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-221701.

The full text of the source
Annotation:
Unmanned Aerial Vehicles (UAVs) can navigate in indoor environments and through environments that are hazardous or hard to reach for humans. This makes them suitable for use in search and rescue missions and by emergency responders and law enforcement to increase situational awareness. However, even for an experienced UAV tele-operator, controlling the UAV in these situations without colliding with obstacles is a demanding and difficult task. This thesis presents a human-UAV interface along with a collision avoidance method, both optimized for a human tele-operator. The objective is to simplify the task of navigating a UAV in indoor environments. Evaluation of the system is done by testing it against a number of use cases and in a user study. The result of this thesis is a collision avoidance method that succeeds in protecting the UAV from obstacles while at the same time acknowledging the operator's intentions.
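A potential-field-style blend of the operator's command with obstacle repulsion gives a rough feel for this class of method. This is a generic sketch under assumed parameters; it is not the collision avoidance method the thesis actually develops.

```python
def safe_velocity(cmd, obstacles, d_safe=1.5, gain=1.0):
    """Blend a tele-operator's velocity command with repulsive terms from
    nearby obstacles. Each obstacle is (ox, oy, dist): a unit direction from
    the UAV toward the obstacle plus its distance. Obstacles beyond d_safe
    leave the command untouched, preserving the operator's intentions."""
    vx, vy = cmd
    for ox, oy, dist in obstacles:
        if dist < d_safe:
            push = gain * (d_safe - dist) / d_safe  # stronger when closer
            vx -= push * ox
            vy -= push * oy
    return vx, vy
```

The key property, shared with the method described above, is that the operator's command passes through unchanged in free space and is only attenuated near obstacles.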
APA, Harvard, Vancouver, ISO, and other citation styles
22

Andrade-Cetto, Juan. „Environment learning for indoor mobile robots“. Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6185.

The full text of the source
Annotation:
This thesis focuses on the various aspects of autonomous environment learning for indoor service robots. Particularly, on landmark extraction from sensor data, autonomous map building, and robot localization.
To univocally identify landmarks from sensor data, we study several landmark representations, and the mathematical foundation necessary to extract the features that build them from images and laser range data. The features extracted from just one sensor may not suce in the invariant characterization of landmarks and objects, pushing for the combination of information from multiple sources. We present a new algorithm that fuses complementary information from two low level vision modules into coherent object models that can be tracked and learned in a mobile robotics context. Illumination conditions and occlusions are the most prominent artifacts
that hinder data association in computer vision. By using photogrammetric and geometric constraints we restrict the search for landmark matches in successive images, and by locking our interest in one or a set of landmarks in the scene, we track those landmarks along successive frames, reducing considerably the data association problem. We concentrate on those tools from the geometry of multiple views that are relevant to the computation of initial landmark location estimates for coarse motion recovery; a desirable characteristic when odometry is not available or is highly unreliable.
Once landmarks are accurately extracted and identied, the second part of the problem is to use these observations for the localization of the robot, as well as the renement of the landmark location estimates. We consider robot motion and sensor observations as stochastic processes, and treat the problem from an estimation theoretic point of view, dealing with noise by using probabilistic methods.
The main drawback we encounter is that current estimation techniques have been devised for static environments, and that they lack robustness in more realistic situations. To aid in those situations in which landmark observations might not be consistent in time, we propose a new set of temporal landmark quality functions, and show how by incorporating these functions in the data association tests, the overall estimation-theoretic approach to map building and localization is improved. The basic idea consists on using the history of data association mismatches for the computation of the likelihood of future data association, together with the spatial compatibility tests already available.
Special attention is paid to ensuring that the removal of spurious landmarks from the map does not violate the basic convergence properties of the localization and map building algorithms already described in the literature, namely asymptotic convergence and full correlation.
The thesis also gives an in-depth analysis of the fully correlated model of localization and map building from a control systems theory point of view. Considering the fact that the Kalman filter is nothing but an optimal observer, we analyze the implications of having a state vector that is being revised by fully correlated noise measurements. We reveal, theoretically and with experiments, the strong limitations of using a fully correlated noise-driven estimation-theoretic approach to map building and localization in relation to the total number of landmarks used.
Partial observability hinders full reconstructibility of the state space, making the final map estimate dependent on the initial observations, and does not guarantee convergence to a positive definite covariance matrix. Partial controllability, on the other hand, makes the filter believe after a number of iterations that it has accurate estimates of the landmark states, with their corresponding Kalman gains converging to zero. That is, after a few steps, innovations are useless. We show how to palliate the effects of full correlation and partial controllability. Furthermore, given that the Kalman filter is an optimal observer for the reconstruction of fully correlated states, it seems pertinent to build an optimal regulator in order to keep the robot as close as possible to a desired motion path when building a map. We also show how the duality between observability and controllability can be exploited in designing such an optimal regulator.
Any map building and localization algorithm for mobile robotics that is to work in real time must be able to relate observations and model matches in an expeditious way. Some of the landmark compatibility tests are computationally expensive, and their application has to be carefully designed. We touch upon the time complexity issues of the various landmark compatibility tests used, and also on the desirable properties of our chosen map data structure.
Furthermore, we propose a series of tasks that must be handled when dealing with landmark data association, from model compatibility tests, through search space reduction and hypothesis formation, to the actual association of observations and models.
The work presented in this thesis spans several areas of engineering and computer science, from new computer vision algorithms to novel ideas in mobile robot localization and map building. The key contributions are the proposal of a new technique to fuse visual data; the formulation of new algorithms for concurrent localization and map building that take into account temporal landmark quality; new theoretical results on the degree of reconstruction possible when building maps from fully correlated observations; and the necessary techniques to palliate partial observability, partial controllability, and the nonlinear effects when solving the simultaneous localization and map building problem.
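The collapse of the Kalman gain under partial controllability, noted above for the static landmark states, can be illustrated with a scalar filter. This is a toy sketch with hypothetical values, not the thesis's full SLAM formulation: a static state observed repeatedly with measurement variance r sees its gain shrink toward zero, so later innovations are effectively ignored.

```python
def kalman_gains(p0, r, steps):
    """Scalar Kalman filter for a static state (zero process noise):
    track how the gain evolves over repeated measurement updates."""
    p, gains = p0, []
    for _ in range(steps):
        k = p / (p + r)      # Kalman gain for this update
        p = (1.0 - k) * p    # covariance shrinks; no process noise re-inflates it
        gains.append(k)
    return gains

# the gain decays monotonically, so innovations carry less and less weight
gains = kalman_gains(p0=1.0, r=0.1, steps=50)
```

With no process noise to re-inflate the covariance, the filter grows ever more confident in its landmark estimate, which is exactly the behavior the thesis identifies as problematic.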
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

McCoig, Kenneth. „A MOBILE ROBOTIC COMPUTING PLATFORM FOR THREE-DIMENSIONAL INDOOR MAPPI“. Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2372.

Der volle Inhalt der Quelle
Annotation:
There are several industries exploring solutions to quickly and accurately digitize unexplored indoor environments into usable three-dimensional databases. Unfortunately, there are inherent challenges to the indoor mapping process, such as scanning limitations and environment complexity, which require a specific application of tools to map an environment precisely with low cost and high speed. This thesis successfully demonstrates the design and implementation of a low cost mobile robotic computing platform with laser scanner for quickly mapping urban and/or indoor environments at high resolution, using a gyro-enhanced orientation sensor and selectable levels of detail. In addition, a low cost alternative to three-dimensional laser scanning is presented, via a standard two-dimensional SICK proximity laser scanner mounted on a custom servo motor mount and controlled by an external microcontroller. A software system to control the robot is presented, which incorporates and adheres to widely accepted software engineering guidelines and principles. An analysis of the overall system, including robot specifications, system capabilities, and justification for certain design decisions, is described in detail. Results of various open source software algorithms, as they apply to scan data and image data, are also compared, including an evaluation of data correlation and registration techniques. In addition, laser scanner mapping tests, specifications, and capabilities are presented and analyzed. A sample design for converting the final scanned point cloud data to a database is presented and assessed. The results suggest the overall project yields a relatively high degree of accuracy and lower cost than most other existing systems surveyed, as well as the potential for application of the system in other fields. The thesis closes with thoughts on possible future research work.
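The tilting-scanner idea above amounts to mapping each 2D scan return, taken at a known servo tilt angle, into 3D sensor-frame coordinates. The sketch below assumes one common convention (tilt about the sensor's lateral axis); the thesis's actual mount geometry and calibration may differ.

```python
import math

def scan_point_to_xyz(r, bearing, tilt):
    """Map a 2D scan return (range r, in-plane bearing) taken at a given
    servo tilt angle into 3D sensor-frame coordinates (x forward,
    y left, z up), assuming tilt rotates the scan plane about y."""
    x = r * math.cos(bearing) * math.cos(tilt)
    y = r * math.sin(bearing)
    z = r * math.cos(bearing) * math.sin(tilt)
    return (x, y, z)

# a return 2 m straight ahead, with the scanner tilted 90 degrees up,
# lands directly above the sensor
p = scan_point_to_xyz(2.0, 0.0, math.pi / 2)
```

Sweeping the tilt angle while collecting scans then yields a 3D point cloud from the 2D device.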
M.S.Cp.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Dag, Antymos. „Autonomous Indoor Navigation System for Mobile Robots“. Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129419.

Der volle Inhalt der Quelle
Annotation:
With an increasing need for greater traffic safety, there is an increasing demand for means by which solutions to the traffic safety problem can be studied. The purpose of this thesis is to investigate the feasibility of using an autonomous indoor navigation system as a component in a demonstration system for studying cooperative vehicular scenarios. Our method involves developing and evaluating such a navigation system. Our navigation system uses a pre-existing localization system based on passive RFID, odometry and a particle filter. The localization system is used to estimate the robot pose, which is used to calculate a trajectory to the goal. A control system with a feedback loop is used to control the robot actuators and to drive the robot to the goal.   The results of our evaluation tests show that the system generally fulfills the performance requirements stated for the tests. There is however some uncertainty about the consistency of its performance. Results did not indicate that this was caused by the choice of localization techniques. The conclusion is that an autonomous navigation system using the aforementioned localization techniques is plausible for use in a demonstration system. However, we suggest that the system is further tested and evaluated before it is used with applications where accuracy is prioritized.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Shen, Jiali. „Visual navigation algorithms for indoor service robots“. Thesis, University of Essex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446554.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Islam, Rasel Rashedul. „Obstacle Detection for Indoor Navigation of Mobile Robots“. Master's thesis, Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-225279.

Der volle Inhalt der Quelle
Annotation:
Obstacle detection is one of the major focus areas in image processing. For mobile robots, obstacle detection and collision avoidance is a notoriously hard problem and remains part of modern research; a great deal of work has already been done on it. This thesis evaluates existing well-known methods and sensors for collision-free navigation of mobile robots. For moving-obstacle detection, the frame difference approach is adopted. Robotino® is used as the mobile robot platform, and a Microsoft Kinect is additionally used as a 3D sensor. Information about obstacles in the environment is obtained from the nine built-in distance sensors of the Robotino® and from the 3D depth image data of the Kinect; the two are combined to extract the maximum amount of obstacle information. The detection of moving objects in front of the sensor is a major interest of this work.
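The frame-difference approach mentioned above reduces to a few lines. This is a minimal sketch assuming grayscale frames as nested lists of intensities; the thesis's actual pipeline on Robotino® and Kinect data is of course richer.

```python
def moving_mask(prev, curr, thresh=25):
    """Frame differencing: flag pixels whose grayscale intensity changed
    by more than `thresh` between two consecutive frames."""
    return [[abs(c - p) > thresh for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
# a bright "moving object" enters the central 2x2 region
curr[1][1] = curr[1][2] = curr[2][1] = curr[2][2] = 200
mask = moving_mask(prev, curr)
```

Thresholding the absolute difference suppresses sensor noise while keeping genuinely changed pixels, which are then grouped into moving-object candidates.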
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Hennig, Matthias, Henri Kirmse und Klaus Janschek. „Global Localization of an Indoor Mobile Robot with a single Base Station“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83687.

Der volle Inhalt der Quelle
Annotation:
The navigation tasks in advanced home robotic applications incorporating reliable revisiting strategies depend on very low cost but nevertheless rather accurate localization systems. In this paper a localization system based on the principle of trilateration is described. The proposed system uses only a single small base station, yet achieves accuracies comparable to systems using spread beacons, and it performs sufficiently well for map building. It is thus a standalone system and needs no odometry or other auxiliary sensors. Furthermore, a new approach to the problem of reliably detecting areas without direct line of sight is presented. The described system is very low cost and is designed for use in indoor service robotics. The paper gives an overview of the system concept and special design solutions and demonstrates the achievable performance with experimental results.
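The trilateration principle underlying the system can be sketched in its textbook form. The sketch below assumes three known reference points with measured distances (the paper's single-base-station design packs its references into one device; this is only the generic geometry): subtracting the first circle equation from the others linearizes the problem into a small solvable system.

```python
import math

def trilaterate(anchors, dists):
    """2D position from distances to three known points, obtained by
    linearizing the circle equations and solving the resulting
    2x2 linear system by hand (Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# ranges measured from (0,0), (4,0), (0,3) to the true position (1,1)
p = trilaterate([(0, 0), (4, 0), (0, 3)],
                [math.sqrt(2), math.sqrt(10), math.sqrt(5)])
```

With noisy real measurements the same linearization is typically solved in a least-squares sense over more than three references.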
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Qutteineh, Jafar. „Investigation of Cooperative SLAM for Low Cost Indoor Robots“. Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-31931.

Der volle Inhalt der Quelle
Annotation:
In robotics, SLAM is the problem of dynamically building a map while simultaneously using it to localize the robot. Most SLAM solutions rely on laser ranging devices or vision sensors (cameras). This work studies the possibility of extending current established SLAM solutions to low cost robotic platforms with a few low quality, short range distance sensors (infrared and sonar) and weak odometry information. The work starts by studying the performance of a low cost robotic platform, 'DiddyBorg', and building models for its sensors and odometry to be used in the SLAM implementation. Next, three SLAM solutions are implemented, tested and compared, both in a real environment and under simulation. The first two solutions are based on the well-established EKF-SLAM and RBPF-SLAM, while the third is a custom simplified solution proposed in this work. The results show that RBPF-SLAM performed poorly compared to EKF-SLAM because the limited sensory input degrades the quality of the particle weighting scheme. The results also show that while the sparsity of the sensors limits SLAM quality in general, the limited range of the sensors is the determinant factor in the overall convergence of SLAM. Finally, a simple map coding and merging algorithm is presented for the evaluation of multi-robot collaborative SLAM; it enables a group of robots to collaborate on SLAM tasks without any a priori knowledge of the robots' relative locations.
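The map-merging step can be sketched under the (strong) assumption that the two maps are already expressed in a common frame and stored as log-odds occupancy grids: independent evidence about a cell then simply adds. This is an illustrative sketch, not the thesis's actual coding and merging algorithm.

```python
def merge_grids(g1, g2):
    """Merge two occupancy grids held in log-odds form. For independent
    observations of the same cell, the Bayesian update is a plain sum:
    agreement reinforces a cell, disagreement cancels out."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(g1, g2)]

# both robots saw the first cell as occupied (positive log-odds);
# only the second robot has evidence about the second cell
merged = merge_grids([[0.7, 0.0]], [[0.7, -0.4]])
```

In practice the hard part, which the thesis addresses, is establishing the common frame between robots before such a cell-wise merge is valid.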
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Quebe, Stephen C. „Modeling, Parameter Estimation, and Navigation of Indoor Quadrotor Robots“. BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3565.

Der volle Inhalt der Quelle
Annotation:
This thesis discusses topics relevant to indoor unmanned quadrotor navigation and control. These topics include: quadrotor modeling, sensor modeling, quadrotor parameter estimation, sensor calibration, quadrotor state estimation using onboard sensors, and cooperative GPS navigation. Modeling the quadrotor, sensor modeling, and parameter estimation are essential components of quadrotor navigation and control. This thesis investigates prior work and organizes a wide variety of models and calibration methods that enable indoor unmanned quadrotor flight. Quadrotor parameter estimation using a particle filter is a contribution that extends current research in the area. This contribution is novel in that it applies the particle filter specifically to quadrotor parameter estimation as opposed to quadrotor state estimation. The advantages and disadvantages of such an approach are explained. Quadrotor state estimation using onboard sensors and without the aid of GPS is also discussed, as well as quadrotor pose estimation using the Extended Kalman Filter with an inertial measurement unit and simulated 3D camera updates. This is done using two measurement updates: one from the inertial measurement unit and one from the simulated 3D camera. Finally, we demonstrate that when GPS lock cannot be obtained by an unmanned vehicle individually, a group of cooperative robots with pose estimates of one another can exploit partial GPS information to improve global position estimates for individuals in the group. This method is advantageous for robots that need to navigate in environments where signals from GPS satellites are partially obscured or jammed.
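To give a flavor of parameter (rather than state) estimation with a particle filter, the sketch below estimates a constant coefficient in a hypothetical scalar model y = a·x, not the quadrotor dynamics used in the thesis: particles live in parameter space, are weighted by the measurement likelihood, and are resampled with a little jitter so the static parameter does not collapse onto a single value.

```python
import random, math

def pf_parameter_estimate(observations, inputs, n=500, noise_std=0.1, seed=1):
    """Particle-filter estimate of a constant parameter `a` in the toy
    model y = a * x + noise: weight particles by measurement likelihood,
    then resample with jitter ('roughening')."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 1.0) for _ in range(n)]
    for x, y in zip(inputs, observations):
        # Gaussian measurement likelihood for each particle's prediction
        w = [math.exp(-0.5 * ((y - a * x) / noise_std) ** 2) for a in particles]
        particles = [rng.choices(particles, weights=w)[0] + rng.gauss(0.0, 0.01)
                     for _ in range(n)]
    return sum(particles) / n

inputs = [0.5 * i for i in range(1, 11)]
observations = [0.5 * x for x in inputs]   # noiseless data, true a = 0.5
a_hat = pf_parameter_estimate(observations, inputs)
```

The same weight-resample loop applies to a quadrotor parameter vector; only the prediction model inside the likelihood changes.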
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Newberry, John Christopher, und john newberry@rmit edu au. „On the Micro-Precision Robotic Drilling of Aerospace Components“. RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080513.162719.

Der volle Inhalt der Quelle
Annotation:
This dissertation describes research concerned with the use of advanced measurement techniques for the control of robotic manufacturing processes. The work focused on improving the state of technology in the precision robotic machining of components in the aerospace manufacturing industry in Australia. Specific contributions are the development of schemes for the use of advanced measurement equipment in precision machining operations and the application of flexible manufacturing techniques in automated manufacturing. The outcome of the research enables placement of a robotic end effector to drill a hole with a positional accuracy of 300 microns, employing an Indoor Global Positioning System for control of the drilling process. This can be accomplished within a working area of 35 square metres where the robot system and/or part positions may be varied dynamically during the process. Large aerospace structures are capable of flexing during manufacturing operations due to their physical size and low modulus of rigidity. This research work provided a framework for determining the appropriate type of automation and metrology systems needed for dynamic control suited to the precision drilling of holes in large aerospace components.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Asthana, Ambika. „Software architecture for controlling an indoor hovering robot from a remote host“. Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080905.112058/index.html.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Keller, Brian Matthew. „Multi-spectral System for Autonomous Robotic Location of Fires Indoors“. Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23101.

Der volle Inhalt der Quelle
Annotation:
Autonomous firefighting platforms are being developed to support firefighters. One aspect of this is locating a fire inside a structure. A multi-spectral sensor platform and fire location algorithm were developed in this research to locate a fire indoors autonomously. The multi-spectral sensor platform used a long wavelength infrared (LWIR) camera and an ultraviolet (UV) sensor. The LWIR camera was chosen for its ability to see through smoke, while the UV sensor was selected for its ability to discriminate between fires and non-fire hot objects. The fire location algorithm by radiation emission (FLARE) developed in this research used the multi-spectral sensor data to provide the robot heading angle toward the fire. The system was tested in a large-scale structural fire facility. A series of 20 different scenarios was used to evaluate the robustness of the system, including different fuel types, structural features, non-fire hot objects, and potential robot positions within the enclosure. This demonstrated that FLARE could direct a robot towards the fire regardless of these variables. Directional fire discrimination was added to the platform by limiting the field of view of the UV sensor to that of the LWIR camera. Three methods were evaluated to limit the field of view of the UV sensor: angled plate housing, bulb cover, and slit opening housing. The slit opening housing method was recommended for its ease of implementation and the size required to limit the field of view of the sensor to the desired value.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Taylor, Trevor. „Mapping of indoor environments by robots using low-cost vision sensors“. Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/26282/1/Trevor_Taylor_Thesis.pdf.

Der volle Inhalt der Quelle
Annotation:
For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Taylor, Trevor. „Mapping of indoor environments by robots using low-cost vision sensors“. Queensland University of Technology, 2009. http://eprints.qut.edu.au/26282/.

Der volle Inhalt der Quelle
Annotation:
For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Gomes, Pedro Miguel de Barros. „LADAR based mapping and obstacle detection system for service robots“. Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/4589.

Der volle Inhalt der Quelle
Annotation:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
When travelling in unfamiliar environments, a mobile service robot needs to acquire information about its surroundings in order to detect and avoid obstacles and arrive safely at its destination. This dissertation presents a solution to the problem of mapping and obstacle detection in indoor/outdoor structured environments, with particular application to service robots equipped with a LADAR. Since this system was designed for structured environments, off-road terrains are outside the scope of this work. Also, the use of any a priori knowledge about the LADAR's surroundings is discarded, i.e. the developed mapping and obstacle detection system works in unknown environments. In this solution, it is assumed that the robot, which carries the LADAR and the mapping and obstacle detection system, rests on a planar surface which is considered to be the ground plane. The LADAR is positioned in a way suitable for a three-dimensional world, and an AHRS sensor is used to increase the robustness of the system to variations in the robot's attitude, which, in turn, can cause false positives in obstacle detection. The results of the experimental tests conducted in real environments, through incorporation on a physical robot, suggest that the developed solution can be a good option for service robots driving in indoor/outdoor structured environments.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Ruangpayoongsak, Niramon. „Development of autonomous features and indoor localization techniques for car-like mobile robots“. [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982279469.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Yan, Yu Pei. „A path planning algorithm for the mobile robot in the indoor and dynamic environment based on the optimized RRT algorithm“. Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951594.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Jiang, Lixing [Verfasser]. „Object Recognition and Saliency Detection for Indoor Robots using RGB-D Sensors / Lixing Jiang“. München : Verlag Dr. Hut, 2016. http://d-nb.info/1106593723/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Bista, Suman Raj. „Indoor navigation of mobile robots based on visual memory and image-based visual servoing“. Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S074/document.

Der volle Inhalt der Quelle
Annotation:
This thesis presents a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information, without using any 3D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are selected automatically during a prior learning phase. These reference images define the path to follow during navigation. The switching of reference images during navigation is done by comparing the current acquired image with nearby reference images. Based on the current image and the two succeeding key images, the rotational velocity of a mobile robot is computed under an IBVS control law. First, we used the entire image as a feature, where mutual information between the reference images and the current view is exploited. Then, we used line segments for indoor navigation, where we showed that line segments are better features for structured indoor environments. Finally, we combined line segments with point-based features to extend the method to a wide range of indoor scenarios with smooth motion. Real-time navigation with a Pioneer 3DX equipped with an on-board perspective camera has been performed in an indoor environment. The obtained results confirm the viability of our approach and verify that accurate mapping and localization are not mandatory for useful indoor navigation.
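In its simplest rotation-only form, an IBVS control law reduces to a proportional law on the image feature error (the generic form v = -λ L⁺ e collapsed to one scalar). The sketch below assumes a single hypothetical point feature's horizontal image coordinate; the thesis's actual features are mutual information, line segments and points.

```python
def ibvs_rotation(x_current, x_desired, lam=0.5):
    """Proportional image-based control law (omega = -lambda * e):
    rotate so a tracked feature's horizontal image coordinate
    converges to its value in the next key image."""
    return -lam * (x_current - x_desired)

# toy closed-loop simulation: the feature abscissa decays toward the reference
x, x_ref, dt = 0.4, 0.0, 0.1
for _ in range(20):
    x += dt * ibvs_rotation(x, x_ref)   # crude Euler integration
```

Because the error is regulated directly in image space, no 3D reconstruction of the scene is required, which is the central point of the appearance-based approach.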
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Nazemzadeh, Payam. „Indoor Localization of Wheeled Robots using Multi-sensor Data Fusion with Event-based Measurements“. Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/367712.

Der volle Inhalt der Quelle
Annotation:
In an era in which robots have started to live and work everywhere, in close contact with humans, they should accurately know their own location at any time in order to move and operate safely. In particular, large and crowded indoor environments are challenging scenarios for accurate and robust robot localization. The theory and the results presented in this dissertation address the crucial issue of indoor localization of wheeled robots by proposing novel solutions in three complementary ways: improving robots' self-localization through data fusion, adopting collaborative localization (e.g. using position information from other robots) and, finally, optimizing the placement of landmarks in the environment once the detection range of the chosen sensors is known. As far as the first subject is concerned, a robot should be able to localize itself in a given reference frame. This problem is studied in detail to achieve a proper and affordable technique for self-localization, regardless of specific environmental features. The proposed solution relies on the integration of relative and absolute position measurements. The former are based on odometry and on an inertial measurement unit. The absolute position and heading data, instead, are measured sporadically, whenever one of the landmarks spread throughout the environment is detected. Due to the event-based nature of such measurement data, the robot can work autonomously most of the time, even if accuracy degrades. Of course, in order to keep positioning uncertainty bounded, it is important that absolute and relative position data are fused properly. For this reason, four different techniques are analyzed and compared in the dissertation. Once the local kinematic state of each robot is estimated, a group of them moving in the same environment and able to detect and communicate with one another can also collaborate, sharing their position information to refine self-localization results.
The dissertation shows that this approach can provide some benefits, although performance strongly depends on the metrological features of the adopted sensors as well as on the communication range. Finally, the problem of optimal landmark placement is addressed by suggesting a novel and easy-to-use geometrical criterion to maximize the distance between landmarks deployed over a triangular lattice grid, while ensuring that the absolute position measurement sensors can always detect at least one landmark.
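The fusion of drift-prone relative data with sporadic absolute fixes can be caricatured with a scalar filter (hypothetical numbers; the dissertation analyzes four full techniques): each odometry increment inflates the position variance, and each event-based landmark detection shrinks it again via a Kalman update.

```python
def event_based_kf(odom, fixes, q=0.05, r=0.02):
    """Scalar position filter: odometry increments grow the variance by q;
    sporadic absolute fixes (dict step -> measured position, variance r)
    shrink it via a Kalman update. Returns (estimate, variance) per step."""
    x, p, track = 0.0, 0.0, []
    for i, dx in enumerate(odom):
        x, p = x + dx, p + q           # prediction from odometry (drifts)
        if i in fixes:                 # event-based absolute measurement
            k = p / (p + r)
            x, p = x + k * (fixes[i] - x), (1.0 - k) * p
        track.append((x, p))
    return track

# three blind odometry steps, one landmark detection at step 2
track = event_based_kf([1.0, 1.1, 0.9, 1.0], {2: 3.1})
```

Between detections the variance grows without bound, which is why the landmark placement criterion discussed in the dissertation matters: it guarantees a fix arrives before the uncertainty becomes unusable.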
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Nazemzadeh, Payam. „Indoor Localization of Wheeled Robots using Multi-sensor Data Fusion with Event-based Measurements“. Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1867/1/PhD_Thesis_PayamNazemzadeh.pdf.

Der volle Inhalt der Quelle
Annotation:
In an era in which robots have started to live and work everywhere, in close contact with humans, they must know their own location accurately at all times in order to move and act safely. Large and crowded indoor environments in particular are challenging scenarios for accurate and robust robot localization. The theory and results presented in this dissertation address the crucial issue of indoor localization for wheeled robots through novel solutions in three complementary ways: improving self-localization through data fusion, adopting collaborative localization (e.g. using position information from other robots), and optimizing the placement of landmarks in the environment once the detection range of the chosen sensors is known. As far as the first subject is concerned, a robot should be able to localize itself in a given reference frame. This problem is studied in detail to achieve a proper and affordable technique for self-localization, regardless of specific environmental features. The proposed solution relies on the integration of relative and absolute position measurements. The former are based on odometry and on an inertial measurement unit; the absolute position and heading data instead are measured sporadically, whenever one of the landmarks spread throughout the environment is detected. Due to the event-based nature of such measurements, the robot can work autonomously most of the time, even if accuracy degrades between landmark detections. Of course, to keep positioning uncertainty bounded, absolute and relative position data must be fused properly; for this reason, four different techniques are analyzed and compared in the dissertation. Once the local kinematic state of each robot is estimated, a group of robots moving in the same environment and able to detect and communicate with one another can also collaborate, sharing position information to refine self-localization results.
In the dissertation, it is shown that this approach can provide some benefits, although performance strongly depends on the metrological features of the adopted sensors as well as on the communication range. Finally, the problem of optimal landmark placement is addressed by suggesting a novel and easy-to-use geometrical criterion that maximizes the distance between landmarks deployed over a triangular lattice grid, while ensuring that the absolute position measurement sensors can always detect at least one landmark.
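The fusion of continuous relative measurements with sporadic, event-based absolute ones described above can be illustrated by a minimal one-dimensional Kalman-style sketch. All noise values, step sizes, and the landmark schedule below are hypothetical, not taken from the dissertation:

```python
# Minimal 1-D fusion of relative (odometry/IMU) and sporadic absolute
# (landmark) position measurements. All values are illustrative.

def predict(x, p, dx, q):
    """Dead-reckoning step: integrate a relative displacement dx.
    The uncertainty p grows by the process noise q, so the estimate
    degrades while no landmark is visible."""
    return x + dx, p + q

def correct(x, p, z, r):
    """Event-based absolute update when a landmark with measurement
    noise r reports position z. The gain k weights the two sources
    by their variances, keeping p bounded."""
    k = p / (p + r)
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 0.0
for step in range(100):
    x, p = predict(x, p, dx=0.1, q=0.01)   # odometry: 0.1 m per step
    if step % 20 == 19:                     # landmark seen every 20 steps
        x, p = correct(x, p, z=(step + 1) * 0.1, r=0.05)
print(round(x, 2), round(p, 3))
```

The uncertainty `p` grows during the autonomous phases and is pulled back down at every landmark detection, which is exactly the bounded-uncertainty behaviour the abstract describes.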
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Blomqvist, Anneli. „Millimeter Wave Radar as Navigation Sensor on Robotic Vacuum Cleaner“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288146.

Der volle Inhalt der Quelle
Annotation:
Does millimeter-wave radar have the potential to be the navigational instrument of a robotic vacuum cleaner in a home? Electrolux robotic vacuum cleaners currently use a light sensor to navigate through the home while cleaning. Recently, Texas Instruments released a new mmWave radar sensor operating in the 60-64 GHz frequency range. This study aims to determine whether the mmWave radar sensor is useful for indoor navigation. The study tests the sensor's accuracy and resolution for angles and distances in ranges relevant to indoor navigation. It tests whether various objects made of plastic, fabric, paper, metal, and wood are detectable by the sensor. Finally, it tests what the sensor can see when it is moving while measuring. The radar sensor can localize the robot, but its ability to detect objects around the robot is limited. The sensor's absolute accuracy is within 3° for the majority of angles and around 1 dm for most distances above 0.5 m. The resolution for a displacement of a single object is 1° and 5 cm, respectively, and two objects must be located at least 14° or 15 cm apart from each other to be recognized separately. Future tasks include removing noise due to antenna coupling, to improve reflections from within 0.5 m, and determining the best way to move the sensor to improve the resolution.
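The ~5 cm two-object range resolution reported above is close to the theoretical limit set by the sweep bandwidth of an FMCW radar. A quick back-of-the-envelope check using the standard formula dR = c / (2B) (this calculation is not part of the thesis, just a sanity check):

```python
# Theoretical FMCW range resolution: dR = c / (2 * B), where B is the
# swept bandwidth. For a 60-64 GHz sensor, B = 4 GHz.
c = 299_792_458.0          # speed of light, m/s
B = 4e9                    # sweep bandwidth, Hz
dR = c / (2 * B)           # ~3.7 cm
print(round(dR * 100, 1), "cm")
```

The ~3.7 cm theoretical figure makes the measured 5 cm separation requirement plausible once windowing losses and noise are accounted for.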
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Magrin, Carlos Eduardo Setenareski. „Fusão de sensores utilizando técnica de fingerprint kNN e ponderação de atributos para localização indoor de um robô móvel“. Repositório Institucional da UFPR, 2015. http://hdl.handle.net/1884/42967.

Der volle Inhalt der Quelle
Annotation:
Advisor: Prof. Dr. Eduardo Todt
Dissertation (Master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 15/12/2015
Includes references: f. 97-103
Abstract: Mobile robots depend on knowledge of the environment to move around, so sensors are used to detect objects, measure distances to walls, and estimate the distance traveled. However, sensor readings are subject to noise and faults; to obtain relevant measurements of the environment, a sensor fusion process can be used, combining the readings of two or more sensors. This work proposes a hierarchical sensor fusion method, with kNN fingerprint techniques and feature weighting, to solve the localization problem of a mobile robotic platform, using a sonar octagon, a digital compass, and the signal strength of a wireless network to determine the location of an autonomous mobile robot. The results suggest different feature weightings for each type of environment and map grid size; in summary, the hierarchical sensor fusion method answers the question "where is the robot?" regardless of its origin, orientation, or previous position in indoor environments, using low-cost sensors.
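The fingerprint-kNN idea with feature weighting described above can be sketched as follows. The database, weights, and sensor triple (sonar mean, compass heading, WiFi RSSI) are illustrative placeholders, not the dissertation's data:

```python
import math
from collections import Counter

def weighted_knn(db, weights, query, k=3):
    """Return the grid cell whose stored fingerprints are nearest to
    `query` under a feature-weighted Euclidean distance, by majority
    vote among the k nearest records."""
    def dist(features):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, features, query)))
    nearest = sorted(db, key=lambda rec: dist(rec[0]))[:k]
    return Counter(cell for _, cell in nearest).most_common(1)[0][0]

# toy database: (sonar_mean, compass_heading, wifi_rssi) -> grid cell
db = [((1.0, 90.0, -40.0), "A"), ((1.1, 85.0, -42.0), "A"),
      ((3.0, 180.0, -70.0), "B"), ((2.9, 175.0, -68.0), "B")]
weights = (1.0, 0.1, 0.5)   # e.g. down-weight a noisy compass
print(weighted_knn(db, weights, query=(1.05, 92.0, -41.0)))
```

Adjusting `weights` per environment is the tuning knob the abstract alludes to: a cluttered metallic room might down-weight the compass further, while a sonar-hostile space would favour WiFi.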
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

CANCHUMUNI, SMITH WASHINGTON ARAUCO. „PROBABILISTIC SIMULTANEOUS LOCALIZATION AND MAPPING OF MOBILE ROBOTS IN INDOOR ENVIRONMENTS WITH A LASER RANGE FINDER“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23357@1.

Der volle Inhalt der Quelle
Annotation:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
For the robot to be able to move within an environment without the assistance of a human being, it must know the environment and its own location within it at the same time. In many robotic applications, an a priori map of the environment is not available; in that situation, the robot needs to build a local map of its environment while executing its mission and, simultaneously, determine its location. A typical solution to the Simultaneous Localization and Mapping (SLAM) problem primarily uses two types of sensors: (i) an odometer, which provides information about the robot's movement, and (ii) a range sensor, which provides perception of the environment. In this work, a solution to the SLAM problem is presented using a DP-SLAM algorithm based purely on laser readings, focused on structured indoor environments. It assumes that the mobile robot uses only a single 2D Laser Range Finder (LRF), the odometry sensor being replaced by the information obtained from the overlap of two consecutive laser scans. The Normal Distributions Transform (NDT) scan-matching algorithm is used to approximate an overlap function. To improve the performance of this algorithm and deal with the low-quality range data of a compact LRF, scan-point resampling is used, preserving a higher point density around high-information features of the scan. A Differential Evolution (DE) algorithm is presented to optimize the overlap of two scans. During the development of this work, the mobile robot iRobot Create, equipped with a Hokuyo URG-04LX LRF, was used to collect real data in several indoor environments, and the resulting 2D maps are presented as results.
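The role of Differential Evolution in the alignment step can be sketched with a generic DE/rand/1/bin minimizer over a 2-D pose offset. The quadratic objective below is a stand-in for the NDT overlap score that the thesis computes from real scans; all parameters are illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=20, gens=100,
                           F=0.8, CR=0.9, seed=1):
    """Generic DE/rand/1/bin minimizer, as one might use to optimize
    the overlap of two consecutive laser scans."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct donors different from the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = list(pop[i])
            for d in range(dim):
                if rng.random() < CR:
                    lo, hi = bounds[d]
                    trial[d] = min(hi, max(lo, a[d] + F * (b[d] - c[d])))
            if f(trial) < f(pop[i]):      # greedy selection
                pop[i] = trial
    return min(pop, key=f)

# stand-in objective: the "true" scan offset is (dx, dy) = (0.3, -0.2)
obj = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2
best = differential_evolution(obj, [(-1.0, 1.0), (-1.0, 1.0)])
print([round(v, 2) for v in best])
```

In the real pipeline the objective would evaluate how well one scan's points fit the normal distributions built from the other scan, rather than a synthetic quadratic.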
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Stefansson, Thor. „3D obstacle avoidance for drones using a realistic sensor setup“. Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233807.

Der volle Inhalt der Quelle
Annotation:
Obstacle avoidance is a well-researched area; however, most existing work considers only a 2D environment. Since drones move in three dimensions, it is of interest to develop a system that ensures safe flight in all three. Obstacle avoidance is of the highest importance for drones intended to work autonomously and around humans, since drones are often fragile and have fast-moving propellers that can injure people. This project is based on the Obstacle Restriction Method (ORM) in 3D and uses OctoMap to conveniently fuse the data from multiple sensors simultaneously and to deal with their limited fields of view. The results show that the system is able to avoid obstacles in 3D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Ghorpade, Vijaya Kumar. „3D Semantic SLAM of Indoor Environment with Single Depth Sensor“. Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC085/document.

Der volle Inhalt der Quelle
Annotation:
Intelligent autonomous action in an ordinary environment by a mobile robot requires maps. A map holds the spatial information about the environment and gives the robot the 3D geometry of its surroundings, used not only to avoid collisions with complex obstacles but also for self-localization and task planning. In the future, however, service and personal robots will prevail, and the need arises for the robot to interact with the environment in addition to localizing and navigating. This interaction demands that next-generation robots understand and interpret their environment and perform tasks in a human-centric form. A simple map of the environment is far from sufficient for robots to co-exist with and assist humans. Human beings effortlessly make maps and interact with the environment; for them these are trivial tasks, but for robots these frivolous tasks are complex conundrums. Layering semantic information on regular geometric maps is the leap that helps an ordinary mobile robot become a more intelligent autonomous system. A semantic map augments a general map with information about entities, i.e., objects, functionalities, or events, that are located in the space. The inclusion of semantics in the map enhances the robot's spatial knowledge representation and improves its performance in managing complex tasks and human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. Time-of-flight cameras have dramatically changed the field of range imaging and have surpassed traditional scanners in terms of rapid data acquisition, simplicity, and price, and it is believed that these depth sensors will be ubiquitous in future robotic applications.
In this thesis, an endeavour to solve semantic SLAM using a time-of-flight sensor that gives only depth information is proposed. Starting with a brief motivation for a semantic stance in normal maps in the first chapter, state-of-the-art methods are discussed in the second chapter. Before using the camera for data acquisition, its noise characteristics were studied meticulously and the camera was properly calibrated. The novel noise-filtering algorithm developed in the process helps to obtain clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated using a context-based similarity score metric, designed specifically for the acquisition parameters and data used. Abstracting a semantic layer on the point cloud reconstructed by SLAM is done in two stages. In large-scale, higher-level semantic interpretation, the prominent surfaces of the indoor environment, such as walls, doors, ceilings, and clutter, are extracted and recognized. In single-scene, object-level semantic interpretation, a single 2.5D scene from the camera is parsed and its objects and surfaces are recognized. Object recognition is achieved using a novel shape signature based on the probability distribution of the most stable and repeatable 3D keypoints. The classification of prominent surfaces and the single-scene semantic interpretation are performed using supervised machine learning and deep learning systems. To this end, the object dataset and the SLAM data are also made publicly available for academic research.
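The large-scale plane-extraction stage can be illustrated by a Hough-style voting sketch restricted to axis-aligned planes, a deliberate simplification of the full 3D Hough transform the thesis applies; the point cloud below is a toy example:

```python
from collections import Counter

def dominant_planes(points, bin_size=0.05, min_votes=3):
    """Hough-style detection of dominant axis-aligned planes: each
    point votes for an (axis, distance) bin, where the distance is the
    quantized dot product of the point with a coordinate-axis normal.
    Accumulator peaks correspond to walls, floor or ceiling."""
    acc = Counter()
    for p in points:
        for axis in range(3):                  # normals ex, ey, ez
            rho_bin = round(p[axis] / bin_size)
            acc[(axis, rho_bin)] += 1
    return [(axis, rho_bin * bin_size, votes)
            for (axis, rho_bin), votes in acc.most_common()
            if votes >= min_votes]

# toy cloud: a floor at z = 0 and a wall at x = 2
cloud = [(0.5, 0.1, 0.0), (1.0, 0.8, 0.0), (1.5, 0.3, 0.0),
         (2.0, 0.2, 0.4), (2.0, 0.9, 1.1), (2.0, 0.5, 0.7)]
print(dominant_planes(cloud))
```

A full implementation would sweep arbitrary normal directions on a sphere rather than just the three coordinate axes, at correspondingly higher accumulator cost.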
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Rojas, Castro Dalia Marcela. „The RHIZOME architecture : a hybrid neurobehavioral control architecture for autonomous vision-based indoor robot navigation“. Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS001/document.

Der volle Inhalt der Quelle
Annotation:
The work described in this dissertation is a contribution to the problem of autonomous indoor vision-based mobile robot navigation, which is still a vast ongoing research topic. It addresses the problem by trying to reconcile the differences among the state-of-the-art control-architecture paradigms and navigation strategies. Hence, the author proposes the RHIZOME architecture (Robotic Hybrid Indoor-Zone Operational ModulE): a unique robotic control architecture capable of creating a synergy of different approaches by merging them into a neural system. The interactions of the robot with its environment and the multiple neural connections allow the whole system to adapt to navigation conditions. The RHIZOME architecture preserves all the advantages of behavior-based architectures, such as rapid responses to unforeseen problems in dynamic environments, while combining them with the a priori knowledge of the world used in deliberative architectures. However, this knowledge is used only to corroborate the dynamic visual perception and embedded knowledge, instead of directly controlling the actions of the robot as most hybrid architectures do. The information is represented by a sequence of artificial navigation signs, expected to be found along the navigation path, leading to the final destination. This sequence is provided to the robot either by means of a program command or by enabling the robot to extract the sequence itself from a floor plan; the latter implies the execution of a floor-plan analysis process. Consequently, in order to take the right decision during navigation, the robot processes both sets of information, compares them in real time, and reacts accordingly. When navigation signs are not present in the environment as expected, the RHIZOME architecture builds new reference places from landmark constellations extracted from these places and learns them.
Thus, during navigation, the robot can use this new information to reach its final destination by overcoming unforeseen situations. The overall architecture has been implemented on the NAO humanoid robot. Real-time experimental results during indoor navigation, under both deterministic and stochastic scenarios, show the feasibility and robustness of the proposed unified approach.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Frese, Udo. „An O(log n) algorithm for simultaneous localization and mapping of mobile robots in indoor environments Ein O(log n)-Algorithmus für gleichzeitige Lokalisierung und Kartierung mobiler Roboter in Innenräumen /“. [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972029516.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

SAVARESE, FRANCESCO. „Data Fusion Methods and Algorithms in the Context of Autonomous Systems - A path planning algorithms analysis and optimization exploiting fused data“. Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2752655.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Alves, Raulcézar Maximiano Figueira. „Coordenação, localização e navegação para robôs de serviço em ambientes internos“. Universidade Federal de Uberlândia, 2017. http://dx.doi.org/10.14393/ufu.te.2017.21.

Der volle Inhalt der Quelle
Annotation:
Robotics has started the transition from industrial to service robotics, moving closer to humans' daily needs. To accomplish this transition, robots require more autonomy to perform tasks in dynamic spaces occupied by humans, unlike the well-controlled environments of factory floors. In this thesis, we investigate a problem in which a team of completely autonomous robots needs to visit certain locations in an indoor human environment in order to perform some kind of task. This problem is related to three important issues of Robotics and Artificial Intelligence (AI), namely coordination, localization, and navigation. To coordinate the visits to the desired locations, a schedule must be computed to find routes for the robots. Such a schedule needs to minimize the total distance traveled by the team and also to balance the routes. We model this problem as an instance of the multiple Traveling Salesmen Problem (mTSP). Since it is classified as NP-Hard, we propose the use of approximation algorithms to find reasonable solutions. Once the routes are computed, the robots need to localize themselves in the environment so they can be sure they are visiting the right places. Many localization techniques are not very accurate in indoor human environments due to different types of noise; we therefore propose the combination of two of them. In this approach, a WiFi localization algorithm tracks the global location of the robot while a Kinect localization algorithm estimates its current pose within that area. After visiting a given location of its route, the robot must navigate towards the next one. Navigation in indoor human environments is a challenging task, as many moving and movable objects can be found in the way. The robot should be equipped with a reactive controller to avoid colliding with moving objects, such as people, while it is navigating.
Also, movable objects, such as furniture, are likely to be moved frequently, which changes the map used to plan the robot's path. To tackle these problems, we introduce an obstacle-avoidance algorithm and a dynamic path planner for navigation in indoor human environments. We contribute a series of algorithms for the problems of coordination, localization, and navigation: multi-objective Genetic Algorithms (GAs) to solve the mTSP, localization approaches that use Particle Filters (PFs) with Kinect and WiFi devices, a Hybrid Intelligent System (HIS) based on Fuzzy Logic (FL) and Artificial Neural Networks (ANNs) for obstacle avoidance, and an adaptation of the D* Lite algorithm that enables robots to replan paths efficiently and to ask for human assistance when necessary. All algorithms are evaluated on real robots and in simulators, demonstrating their performance in solving the problems addressed in this thesis.
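The scheduling step can be illustrated with a greedy mTSP baseline, a far simpler heuristic than the multi-objective GAs the thesis develops; the depot and visit coordinates are illustrative:

```python
import math

def greedy_mtsp(depot, locations, n_robots):
    """Greedy baseline for the multiple Traveling Salesmen Problem:
    each location is appended to the route of the robot for which the
    resulting route length is smallest, which both shortens and
    roughly balances travel among the team."""
    routes = [[depot] for _ in range(n_robots)]
    lengths = [0.0] * n_robots
    for loc in locations:
        costs = [lengths[i] + math.dist(routes[i][-1], loc)
                 for i in range(n_robots)]
        best = costs.index(min(costs))
        lengths[best] = costs[best]
        routes[best].append(loc)
    return routes, lengths

routes, lengths = greedy_mtsp(depot=(0, 0),
                              locations=[(1, 0), (0, 1), (2, 0), (0, 2)],
                              n_robots=2)
print(routes, [round(l, 2) for l in lengths])
```

On this toy input the heuristic splits the visits into two balanced routes of equal length; a GA would instead evolve whole route assignments, trading computation time for solution quality on larger instances.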
Thesis (Doctorate)
APA, Harvard, Vancouver, ISO und andere Zitierweisen