Theses on the topic "Autonomous Robots"

Consult the top 50 theses for your research on the topic "Autonomous Robots".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Nipper, Nathan James. "Robotic balance through autonomous oscillator control and the dynamic inclinometer". [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1586/NathanNipperThesis.PDF.

Full text
Abstract
Thesis (M.E.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains vii, 54 p.; also contains graphics. Vita. Includes bibliographical references (p. 53).
2

Christensen, Anders Lyhne. "Fault detection in autonomous robots". Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210508.

Full text
Abstract
In this dissertation, we study two new approaches to fault detection for autonomous robots. The first approach involves the synthesis of software components that give a robot the capacity to detect faults which occur in itself. Our hypothesis is that hardware faults change the flow of sensory data and the actions performed by the control program. By detecting these changes, the presence of faults can be inferred. In order to test our hypothesis, we collect data in three different tasks performed by real robots. During a number of training runs, we record sensory data from the robots both while they are operating normally and after a fault has been injected. We use back-propagation neural networks to synthesize fault detection components based on the data collected in the training runs. We evaluate the performance of the trained fault detectors in terms of the number of false positives and the time it takes to detect a fault.

The results show that good fault detectors can be obtained. We extend the set of possible faults and go on to show that a single fault detector can be trained to detect several faults in both a robot's sensors and actuators. We show that fault detectors can be synthesized that are robust to variations in the task. Finally, we show how a fault detector can be trained to allow one robot to detect faults that occur in another robot.

The second approach involves the use of firefly-inspired synchronization to allow the presence of faulty robots to be determined by other non-faulty robots in a swarm robotic system. We take inspiration from the synchronized flashing behavior observed in some species of fireflies. Each robot flashes by lighting up its on-board red LEDs and neighboring robots are driven to flash in synchrony. The robots always interpret the absence of flashing by a particular robot as an indication that the robot has a fault. A faulty robot can stop flashing periodically for one of two reasons. The fault itself can render the robot unable to flash periodically.

Alternatively, the faulty robot might be able to detect the fault itself using endogenous fault detection and decide to stop flashing.

Thus, catastrophic faults in a robot can be directly detected by its peers, while the presence of less serious faults can be detected by the faulty robot itself, and actively communicated to neighboring robots. We explore the performance of the proposed algorithm both on a real world swarm robotic system and in simulation. We show that failed robots are detected correctly and in a timely manner, and we show that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.

We conclude that i) fault injection and learning can give robots the capacity to detect faults that occur in themselves, and that ii) firefly-inspired synchronization can enable robots in a swarm robotic system to detect and communicate faults.
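The first approach above reduces to supervised classification of windows of sensory and actuation data into "normal" versus "faulty". The following minimal sketch illustrates only that idea; the window length, the synthetic stuck-sensor fault, and the use of scikit-learn's MLPClassifier in place of the thesis's hand-trained back-propagation networks are assumptions made for the example.

import numpy as np
from sklearn.neural_network import MLPClassifier

def make_windows(samples, labels, window=10):
    # Stack `window` consecutive sensor/actuator readings into one feature vector and
    # label the window as faulty if any reading in it was recorded after fault injection.
    X, y = [], []
    for i in range(len(samples) - window):
        X.append(np.concatenate(samples[i:i + window]))
        y.append(int(labels[i:i + window].max() > 0))
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
samples = rng.normal(size=(2000, 8))          # 2000 time steps of 8 sensor/actuator channels
labels = np.zeros(2000)
labels[1200:] = 1                             # fault injected at step 1200
samples[1200:, 3] = 0.0                       # the simulated fault: one channel gets stuck at zero

X, y = make_windows(samples, labels)
detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("false positives on normal windows:", int((detector.predict(X[y == 0]) == 1).sum()))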


Doctorate in Engineering Sciences
info:eu-repo/semantics/nonPublished

3

Garratt, Matthew A. "Biologically inspired vision and control for an autonomous flying vehicle /". View thesis entry in Australian Digital Theses Program, 2007. http://thesis.anu.edu.au/public/adt-ANU20090116.154822/index.html.

Full text
4

Hawley, John. "Hierarchical task allocation in robotic exploration /". Online version of thesis, 2009. http://hdl.handle.net/1850/10650.

Full text
5

Keepence, B. S. "Navigation of autonomous mobile robots". Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304921.

Full text
6

Sá, André Filipe Marques Alves de. "Navigation of autonomous mobile robots". Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23832.

Full text
Abstract
Master's degree in Electronics and Telecommunications Engineering
Automation, in the simplest of terms, is the art of creating life in the machine, allowing certain actions to be performed without direct control by a user. This area of study allows activities that we deem tedious or dangerous to be executed by machines. In this thesis, a study of the state of the art in the field of mobile and autonomous robotics is made, focusing on navigation algorithms based on search and sampling. A simulation was developed in which a model of the Wiserobot robot was created, using as a test environment a building well known in robotics, the Willow Garage laboratory. In this simulation, tests were carried out on the algorithms explored earlier, namely Dijkstra, PRM and RRT. To test the sampling-based planners, a plug-in was developed to use the Open Motion Planning Library for benchmarking purposes. Finally, code was developed, based on and using existing ROS packages, to give the robot model navigation capabilities in a simulated indoor environment, initially static and then with undeclared obstacles. The results from the various planners were compared to evaluate their performance in the defined test cases, using pre-established metrics.
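Of the planners compared above, Dijkstra's algorithm is the search-based one and operates on a discretised map. A minimal occupancy-grid sketch of it is given below; the grid, unit costs and 4-connectivity are assumptions for illustration, and this is not the thesis's or ROS's implementation.

import heapq

def dijkstra(grid, start, goal):
    # grid[r][c] == 0 means free, 1 means occupied; 4-connected moves with unit cost.
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if d > dist[node]:
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0, 1], [1, 1, 0, 1], [0, 0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 3)))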
7

Tay, Junyun. "Autonomous Animation of Humanoid Robots". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/838.

Full text
Abstract
Gestures and other body movements of humanoid robots can be used to convey meanings which are extracted from an input signal, such as speech or music. For example, the humanoid robot waves its arm to say goodbye or nods its head to dance to the beats of the music. This thesis investigates how to autonomously animate a real humanoid robot given an input signal. This thesis addresses five core challenges, namely: Representation of motions, Mappings between meanings and motions, Selection of relevant motions, Synchronization of motion sequences to the input signal, and Stability of the motion sequences (R-M-S3). We define parameterized motions that allow a large variation of whole body motions to be generated from a small core motion library and synchronization of the motions to different input signals. To assign meanings to motions, we represent meanings using labels and map motions to labels autonomously using motion features. We also examine different metrics to determine similar motions so that a new motion is mapped to existing labels of the most similar motion. We explain how we select relevant motions using labels, synchronize the motion sequence to the input signal, and consider the audience’s preferences. We contribute an algorithm that determines the stability of a motion sequence. We also define the term relative stability, where the stability of one motion sequence is compared to other motion sequences. We contribute an algorithm to determine the most stable motion sequence so that the humanoid robot animates continuously without interruptions. We demonstrate our work with two input signals – music and speech, where a humanoid robot autonomously dances to any piece of music using the beats and emotions of the music and also autonomously gestures according to its speech. We describe how we use our solutions to R-M-S3, and present a complete algorithm that captures the meanings of the input signal and weighs the selection of the best sequence using two criteria: audience feedback and stability. Our approach and algorithms are general to autonomously animate humanoid robots, and we use a real NAO humanoid robot and in simulation as an example.
8

Loetzsch, Martin. "Lexicon formation in autonomous robots". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2015. http://dx.doi.org/10.18452/17121.

Full text
Abstract
"Die Bedeutung eines Wortes ist sein Gebrauch in der Sprache". Ludwig Wittgenstein führte diese Idee in der ersten Hälfte des 20. Jahrhunderts in die Philosophie ein und in verwandten Disziplinen wie der Psychologie und Linguistik setzte sich vor allem in den letzten Jahrzehnten die Ansicht durch, dass natürliche Sprache ein dynamisches System arbiträrer und kulturell gelernter Konventionen ist. Forscher um Luc Steels übertrugen diesen Sprachbegriff seit Ende der 90er Jahre auf das Gebiet der Künstlichen Intelligenz, indem sie zunächst Software-Agenten und später Robotern mittels sogenannter Sprachspiele gemeinsame Kommunikationssysteme bilden liessen, ohne dass Agenten im Voraus mit linguistischem und konzeptionellen Wissen ausgestattet werden. Die vorliegende Arbeit knüpft an diese Forschung an und untersucht vertiefend die Selbstorganisation von geteiltem lexikalischen Wissen in humanoiden Robotern. Zentral ist dabei das Konzept der "referential uncertainty", d.h. die Schwierigkeit, die Bedeutung eines bisher unbekannten Wortes aus dem Kontext zu erschliessen. Ausgehend von sehr einfachen Modellen der Lexikonbildung untersucht die Arbeit zunächst in einer simulierten Umgebung und später mit physikalischen Robotern systematisch, wie zunehmende Komplexität kommunikativer Interaktionen komplexere Lernmodelle und Repräsentationen erfordert. Ein Ergebnis der Evaluierung der Modelle hinsichtlich Robustheit und Übertragbarkeit auf Interaktionszenarien mit Robotern ist, dass die in der Literatur vorwiegenden selektionistischen Ansätze schlecht skalieren und mit der zusätzlichen Herausforderung einer Verankerung in visuellen Perzeptionen echter Roboter nicht zurecht kommen. Davon ausgehend wird ein alternatives Modell vorgestellt.
"The meaning of a word is its use in the language". In the first half of the 20th century Ludwig Wittgenstein introduced this idea into philosophy and especially in the last few decades, related disciplines such as psychology and linguistics started embracing the view that that natural language is a dynamic system of arbitrary and culturally learnt conventions. From the end of the nineties on, researchers around Luc Steels transferred this notion of communication to the field of artificial intelligence by letting software agents and later robots play so-called language games in order to self-organize communication systems without requiring prior linguistic or conceptual knowledge. Continuing and advancing that research, the work presented in this thesis investigates lexicon formation in humanoid robots, i.e. the emergence of shared lexical knowledge in populations of robotic agents. Central to this is the concept of referential uncertainty, which is the difficulty of guessing a previously unknown word from the context. First in a simulated environments and later with physical robots, this work starts from very simple lexicon formation models and then systematically analyzes how an increasing complexity in communicative interactions leads to an increasing complexity of representations and learning mechanisms. We evaluate lexicon formation models with respect to their robustness, scaling and their applicability to robotic interaction scenarios and one result of this work is that the predominating approaches in the literature do not scale well and are not able to cope with the challenges stemming from grounding words in the real-world perceptions of physical robots. In order to overcome these limitations, we present an alternative lexicon formation model and evaluate its performance.
9

Haberbusch, Matthew Gavin. "Autonomous Skills for Remote Robotic Assembly". Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1588112797847939.

Full text
10

Orebäck, Anders. "A component framework for autonomous mobile robots". Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-50.

Full text
Abstract

A major problem in robotics research today is the high barrier to entry. Robot system software is complex, and a researcher who wishes to concentrate on one particular problem often needs to learn about details, dependencies and intricacies of the complete system. This is because a robot system needs several different modules that must communicate and execute in parallel.

Today there are few controlled comparisons of algorithms and solutions for a given task, which is the standard scientific method of other sciences. There is also very little sharing between groups and projects, requiring code to be written from scratch over and over again.

This thesis proposes a general framework for robotics. Examining successful systems and architectures of the past and present yields a number of key properties. Some of these are ease of use, modularity, portability and efficiency. Even though there is much consensus that the hybrid deliberative/reactive model is the best architectural model the community has produced so far, a framework should not stipulate a specific architecture. Instead, the framework should enable the building of different architectures. Such a scheme implies that the modules are seen as common peers and not divided into clients and servers or forced into a fixed layering.

Using a standardized middleware such as CORBA, efficient communication can be carried out between different platforms and languages. Middleware also provides network transparency, which is valuable in distributed systems. Component-Based Software Engineering (CBSE) is an approach that could solve many of the aforementioned problems. It enforces modularity, which helps to manage complexity. Components can be developed in isolation, since algorithms are encapsulated in components where only the interfaces need to be known by other users. A complete system can be created by assembling components from different sources.

Comparisons and sharing can greatly benefit from CBSE. A component-based framework called ORCA has been implemented with the following characteristics. All communication is carried out by one of three communication patterns: query, send and push. Communication is done using CORBA, although most of the CORBA code is hidden from the developer and can in the future be replaced by other mechanisms. Objects are transported between components in the form of the CORBA valuetype.

A component model is specified that, among other things, includes support for a state machine, which also handles initialization and sets up communication. Configuration is achieved through an XML file per component. A hardware abstraction scheme is specified that essentially routes the communication patterns right down to the hardware level.

The framework has been verified by the implementation of a number of working systems.
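The three communication patterns named above (query, send and push) can be pictured as small peer-to-peer interfaces between components. The Python sketch below only illustrates the pattern semantics; the real framework is CORBA-based, and every class and method name here is invented for the example.

from typing import Callable, Dict, List

class Component:
    """A peer component exposing query/send/push style endpoints (illustration only)."""
    def __init__(self, name: str):
        self.name = name
        self._query_handlers: Dict[str, Callable[[dict], dict]] = {}
        self._subscribers: List[Callable[[dict], None]] = []

    # query: synchronous request/reply between two components
    def on_query(self, topic: str, handler: Callable[[dict], dict]) -> None:
        self._query_handlers[topic] = handler
    def query(self, topic: str, request: dict) -> dict:
        return self._query_handlers[topic](request)

    # send: one-way message, no reply expected
    def send(self, message: dict) -> None:
        print(f"{self.name} received command:", message)

    # push: the producer publishes new data to every subscriber as it arrives
    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self._subscribers.append(callback)
    def push(self, data: dict) -> None:
        for callback in self._subscribers:
            callback(data)

laser = Component("laser")
laser.on_query("scan", lambda request: {"ranges": [1.2, 1.3, 0.9]})
planner = Component("planner")
laser.subscribe(lambda data: print("planner got pushed scan:", data))

print("query reply:", laser.query("scan", {}))   # query pattern
planner.send({"cmd": "stop"})                    # send pattern
laser.push({"ranges": [1.1, 1.0]})               # push pattern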

11

Kelsey, Jed M. "AUTONOMOUS SOCCER-PLAYING ROBOTS: A SENIOR DESIGN PROJECT". International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608518.

Full text
Abstract
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper describes the experiences and final design of one team in a senior design competition to build a soccer-playing robot. Each robot was required to operate autonomously under the remote control of a dedicated host computer via a wireless link. Each team designed and constructed a robot and wrote its control software. Certain components were made available to all teams. These components included wireless transmitters and receivers, microcontrollers, overhead cameras, image processing boards, and desktop computers. This paper describes the team’s hardware and software designs, problems they encountered, and lessons learned.
12

Michael, Andrew Mario. "Circle formation algorithm for autonomous agents with local sensing /". Online version of thesis, 2004. http://hdl.handle.net/1850/12143.

Full text
13

Nguyen, Hai Dai. "Constructing mobile manipulation behaviors using expert interfaces and autonomous robot learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50206.

Full text
Abstract
With current state-of-the-art approaches, development of a single mobile manipulation capability can be a labor-intensive process that presents an impediment to the creation of general purpose household robots. At the same time, we expect that involving a larger community of non-roboticists can accelerate the creation of new novel behaviors. We introduce the use of a software authoring environment called ROS Commander (ROSCo) allowing end-users to create, refine, and reuse robot behaviors with complexity similar to those currently created by roboticists. Akin to Photoshop, which provides end-users with interfaces for advanced computer vision algorithms, our environment provides interfaces to mobile manipulation algorithmic building blocks that can be combined and configured to suit the demands of new tasks and their variations. As our system can be more demanding of users than alternatives such as using kinesthetic guidance or learning from demonstration, we performed a user study with 11 able-bodied participants and one person with quadriplegia to determine whether computer literate non-roboticists will be able to learn to use our tool. In our study, all participants were able to successfully construct functional behaviors after being trained. Furthermore, participants were able to produce behaviors that demonstrated a variety of creative manipulation strategies, showing the power of enabling end-users to author robot behaviors. Additionally, we introduce how using autonomous robot learning, where the robot captures its own training data, can complement human authoring of behaviors by freeing users from the repetitive task of capturing data for learning. By taking advantage of the robot's embodiment, our method creates classifiers that predict using visual appearances 3D locations on home mechanisms where user constructed behaviors will succeed. With active learning, we show that such classifiers can be learned using a small number of examples. We also show that this learning system works with behaviors constructed by non-roboticists in our user study. As far as we know, this is the first instance of perception learning with behaviors not hand-crafted by roboticists.
14

Santos, Vasco Pedro dos Anjos e. "DSAAR: distributed software architecture for autonomous robots". Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/1913.

Full text
Abstract
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical Engineering
This dissertation presents a software architecture called the Distributed Software Architecture for Autonomous Robots (DSAAR), which is designed to provide fast development and prototyping of multi-robot systems. The DSAAR building blocks allow engineers to focus on the behavioural model of robots and collectives. This architecture is of special interest in domains where several human, robot, and software agents have to interact continuously; thus, fast prototyping and reusability are a must. DSAAR tries to cope with these requirements towards an advanced solution to the n-humans and m-robots problem with a set of good design practices and development tools. This dissertation also focuses on Human-Robot Interaction, mainly on the subject of teleoperation. In teleoperation, human judgement is an integral part of the process, heavily influenced by the telemetry data received from the remote environment, so the speed at which commands are given and telemetry data is received is of crucial importance. Using the DSAAR architecture, a teleoperation approach is proposed. This approach was designed to provide all entities present in the network with a shared reality, where every entity is an information source, in an approach similar to a distributed blackboard. This solution was designed to achieve a real-time response as well as the most complete perception of the robot's surroundings. Experimental results obtained with the physical robot suggest that the system is able to guarantee close interaction between users and the robot.
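The "shared reality" described above resembles a blackboard to which every agent writes its latest data and from which any agent may read. The single-process sketch below only illustrates that data-sharing pattern; the actual DSAAR architecture is distributed across machines, and the keys and timing used here are invented.

import threading
import time

class Blackboard:
    """Thread-safe key/value store shared by human, robot and software agents (toy version)."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()
    def write(self, key, value):
        with self._lock:
            self._data[key] = (value, time.time())
    def read(self, key):
        with self._lock:
            return self._data.get(key)

board = Blackboard()

def robot_loop():                       # the robot publishes telemetry
    for i in range(5):
        board.write("robot/pose", (i * 0.1, 0.0))
        time.sleep(0.01)

def operator_loop():                    # the operator reacts to telemetry with a command
    time.sleep(0.03)
    pose = board.read("robot/pose")
    board.write("operator/cmd_vel", (0.2, 0.0) if pose else (0.0, 0.0))

threads = [threading.Thread(target=robot_loop), threading.Thread(target=operator_loop)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(board.read("robot/pose"), board.read("operator/cmd_vel"))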
15

Gómez, Betancur Gabriel J. "Adaptive learning mechanisms for autonomous robots /". Zürich, 2007. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000253587.

Full text
16

Hofer, Ludovic. "Decision-making algorithms for autonomous robots". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0770/document.

Full text
Abstract
The autonomy of robots relies heavily on their ability to make decisions based on the information provided by their sensors. In this dissertation, decision-making in robotics is modeled as a Markov decision process with continuous state and action spaces. This choice makes it possible to represent the uncertainty on the results of the actions applied by the robot. The new learning algorithms proposed in this thesis focus on producing policies that can be used online, on embedded hardware, at a low computational cost. They are applied to two real-world problems from the RoboCup context, an international robotic competition held annually. In these problems, humanoid robots have to choose either the direction and power of kicks in order to maximize the probability of scoring a goal, or the parameters of a walk engine to move towards a kickable position.
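For the kick-decision problem mentioned above, the controller must pick a continuous action (direction and power) whose noisy outcome maximises the probability of scoring. The Monte Carlo sketch below only illustrates that objective; the goal geometry, noise levels and candidate actions are invented, and the thesis's algorithms compute policies offline rather than by on-line sampling.

import math
import random

GOAL_Y, GOAL_HALF_WIDTH = 4.5, 1.3     # assumed goal geometry: centred on x = 0 at y = 4.5 m

def score_probability(direction, power, n=2000, sigma_dir=0.15, sigma_pow=0.3):
    """Estimate P(goal) for a kick from the origin, with Gaussian noise on direction and power."""
    random.seed(0)
    goals = 0
    for _ in range(n):
        d = random.gauss(direction, sigma_dir)              # rad, 0 = straight at the goal
        p = max(0.0, random.gauss(power, sigma_pow))        # travelled distance in metres
        x, y = p * math.sin(d), p * math.cos(d)
        if y >= GOAL_Y and abs(x) <= GOAL_HALF_WIDTH:
            goals += 1
    return goals / n

candidates = [(d, p) for d in (-0.3, 0.0, 0.3) for p in (3.0, 5.0, 7.0)]
best = max(candidates, key=lambda action: score_probability(*action))
print("best (direction, power):", best)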
17

Pace, Conrad J. "Autonomous safety management for mobile robots". Thesis, Lancaster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423907.

Full text
18

Karamanlis, Vasilios. "Multivariate motion planning of autonomous robots". Thesis, Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8705.

Full text
Abstract
A problem of motion control in robot motion planning is to find a smooth transition while going from one path to another. The key concept of our theory is the steering function, used to manipulate the motion of our vehicle. The steering function determines the robot's position and orientation by controlling path curvature and speed. We also present the neutral switching method, an algorithm that provides the autonomous vehicle with the capability to determine the best leaving point, which allows for a smooth transition from one path to another in a model-based polygonal world. The above-mentioned algorithm is thoroughly presented, analyzed, and programmed on a Unix workstation and on the autonomous mobile robot Yamabico. The research data indicate that the neutral switching method improved the transition results for polygon tracking, star tracking motion, and circle tracking. Moreover, the neutral switching method enhances robot control and provides a more stable transition between paths than any previously known algorithm.
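A steering function of the kind described above maps the robot's lateral offset and heading error with respect to the current path into a commanded path curvature. The sketch below uses one common linear form of such a law purely as an illustration; the gains, saturation and unicycle model are assumptions, not the formulation used on Yamabico.

import math

def steering_curvature(lateral_error, heading_error, k_d=0.5, k_theta=1.0, kappa_max=1.0):
    """Commanded path curvature (1/m) that steers the robot back onto the reference path."""
    kappa = -(k_d * lateral_error + k_theta * heading_error)
    return max(-kappa_max, min(kappa_max, kappa))           # respect the vehicle's turning limit

def simulate(steps=400, dt=0.05, speed=0.3):
    x, y, theta = 0.0, 0.4, 0.3        # start 0.4 m off the path (the x-axis), misaligned by 0.3 rad
    for _ in range(steps):
        kappa = steering_curvature(y, theta)
        theta += speed * kappa * dt    # unicycle kinematics driven by curvature
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
    return x, y, theta

print("final x, lateral offset, heading error:", simulate())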
19

McPhillips, Graeme. "The control of semi-autonomous robots". Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/15417.

Full text
Abstract
Includes bibliographical references.
Robotic soccer is an international area of research which involves multiple robots collaborating in an adversarial and dynamic environment. Although many different forms of robotic soccer are played, the University of Cape Town (UCT) chose the RoboCup small-sized robot league, officially known as the F180 RoboSoccer league, as a means of pursuing robotics research within the institution. The robot soccer game is played between two teams of five robots on a carpeted surface that is 2.8 m long by 2.3 m wide. The robots have their own on-board controllers that execute instructions sent to them from a computer-based artificial intelligence (AI) system. In order for the AI system to keep track of all the robots and the ball (an orange golf ball), a global vision system is utilised. This global vision system uses images captured from either one or multiple digital cameras mounted above the field of play to determine the position and orientation of the team's robots, the position of the other team's robots and finally the position of the ball. In the true spirit of competition and furthering research, the rules which govern the F180 RoboSoccer league cover only the basic format of the game, thereby leaving various aspects of the robots, global vision system and AI design open for development. Since there was no RoboSoccer research in existence at UCT prior to the inception of this researcher's Master's thesis, the task included both the establishment of this format of robotics research at the institution as well as the actual design and development of the robots and the associated components as outlined below. Developing a team of robots requires a wide array of knowledge and the research undertaken was accordingly broken into three key components: the design of the robots (which included their related electronics and on-board controller), the design of a vision system and the design of an AI system. The main focus of this author's work was on the design of the robots as well as the overall structuring and integration of the UCT F180 RoboSoccer team. In addition, the areas of the global vision system and AI system that were covered within the scope of this thesis are also presented. Prototypes were developed, and in the first the main emphasis was placed on the movement of the robot, with the design of the kicking mechanism only occurring subsequent to this. After the first competition in 2002, this first design was abandoned in favour of developing a simpler robot with which to continue development. This simpler robot became the second prototype which, after testing, was refined into the competition robot for 2003. During this period, the AI and global vision systems were developed by undergraduate thesis students. This research was then incorporated where applicable and, finally, the residual problem areas were again addressed by a collaboration of staff and students. Whilst the design and implementation of the robots was very successful, the vision system was not successfully implemented before the competition in 2003. Although an autonomous game of soccer was not successfully played in the 2003 competition, the UCT F180 RoboSoccer team had made a great deal of progress towards this goal and, consequently, a strong foundation for future robotic soccer research within UCT has been established.
20

Magnago, Valerio. "Uncertainty aware localization for autonomous robots". Doctoral thesis, Università degli studi di Trento, 2010. http://hdl.handle.net/11572/269357.

Full text
Abstract
Autonomous mobile robots are undergoing impressive growth. They are successfully used in many different contexts, ranging from service robots to autonomous vehicles. These robots are expected to move inside the environment and, in general, to perform some operation autonomously. Their reliability strongly depends on their capability to accommodate the uncertainty generated by their interaction with the physical world. The core functionality of every autonomous mobile robot is the ability to navigate autonomously inside a known environment. The navigation task can be decomposed into identifying where to go, and planning and following the route to reach the goal. To follow the planned path, the robot needs to accommodate actuation noise, which requires knowledge of the robot's pose and speed inside the environment. The more accurate the localization of the robot, the better the actuation error can be compensated for. Localisation is the process of establishing the correspondence between a given map coordinate system and the robot's local coordinate system, relying on the robot's perception of the environment and its motion. Sensors are affected by noise and, in time, ego-motion estimation alone diverges from the robot's true pose. Exteroceptive sensors can provide fundamental information to reset the pose uncertainty and relocalise the robot inside the environment, hence mitigating the drift of the dead-reckoning process. Most of the localization systems presented in the state of the art focus on maximizing localization accuracy by leveraging the natural features of the environment. In these systems, the maximum achievable accuracy is tightly coupled with the perceivable information embedded in the different regions of the environment. Therefore, the localization uncertainty cannot be adapted to the level of accuracy desired by the users, and only few approaches can provide guarantees on the localization performance. In contrast, by adding infrastructure to the environment, it is possible to obtain a desired level of uncertainty. Current approaches tend to over-design the infrastructure in dimension and supported measurement frequency: they provide far more accuracy than required in most areas of the environment in order to guarantee the tightest constraints, which are often required only in limited regions. The ability to adapt to location-dependent uncertainty is more than just a desirable property for a localisation system, since it helps reduce system consumption, minimize external infrastructure and relax the assumptions to be made on the environment. In line with the considerations above, localisation throughout this thesis is not seen as the process that always has to maximise the accuracy of the estimated robot pose. On the contrary, localisation is considered as the process that minimises an objective function related to the infrastructure's cost, the power consumption and the computation time, subject to some requirements on the localization accuracy.
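Fusing drifting ego-motion estimates with occasional exteroceptive corrections is commonly done with a predict/update cycle of the Kalman type. The one-dimensional sketch below only illustrates that mechanism and how the pose uncertainty is reset by an absolute fix; the noise values and measurement schedule are invented, and this is not the estimator developed in the thesis.

def predict(x, var, u, q):
    """Dead-reckoning step: integrate the odometry increment u, inflating the variance by q."""
    return x + u, var + q

def update(x, var, z, r):
    """Exteroceptive correction: fuse an absolute position measurement z with variance r."""
    gain = var / (var + r)
    return x + gain * (z - x), (1.0 - gain) * var

x, var = 0.0, 0.01          # initial 1-D pose estimate and variance
true_x = 0.0
for step in range(1, 21):
    true_x += 0.10                                   # the robot actually advances 0.10 m
    x, var = predict(x, var, u=0.11, q=0.001)        # odometry over-reports, so the estimate drifts
    if step % 5 == 0:                                # every 5th step an absolute fix is available
        x, var = update(x, var, z=true_x, r=0.0025)
    print(f"step {step:2d}  estimate {x:5.2f}  truth {true_x:5.2f}  variance {var:.4f}")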
21

Nortman, Scott D. "Design, construction, and control of an autonomous humanoid robot". [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000147.

Full text
Abstract
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains vii, 68 p.; also contains graphics. Includes vita. Includes bibliographical references.
22

Cave, Gary L. "Development and control of robotic arms for the Naval Postgraduate School Planar Autonomous Docking Simulator (NPADS)". Thesis, Monterey, California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/4614.

Full text
Abstract
Approved for public release, distribution is unlimited
The objective of this thesis was to design, construct and develop the initial autonomous control algorithm for the NPS Planar Autonomous Docking Simulator (NPADS). The effort included hardware design, fabrication, installation and integration; mass property determination; and the development and testing of control laws utilizing MATLAB and Simulink for modeling and LabView for NPADS control. The NPADS vehicle uses air pads and a granite table to simulate a 2-D, drag-free, zero-g space environment. It is a completely self-contained vehicle equipped with eight cold-gas, bang-bang type thrusters and a reaction wheel for motion control. A "star sensor" CCD camera locates the vehicle on the table while a color CCD docking camera and two robotic arms will locate and dock with a target vehicle. The on-board computer system leverages PXI technology and a single source, simplifying systems integration. The vehicle is powered by two lead-acid batteries for completely autonomous operation. A graphical user interface and wireless Ethernet enable the user to command and monitor the vehicle from a remote command and data acquisition computer. Two control algorithms were developed and allow the user to either control the thrusters and reaction wheel manually or simply specify a desired location.
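The thrusters described above are of the bang-bang type: each cold-gas jet is either fully on or off. The sketch below shows a minimal deadband bang-bang position controller for one translational axis; the deadband, rate gain, thrust level and vehicle mass are assumptions for illustration, not NPADS parameters.

def bang_bang_thrust(error, error_rate, deadband=0.02, k_rate=0.8, thrust=1.0):
    """Return -thrust, 0 or +thrust (N) for one translational axis of the vehicle."""
    s = error + k_rate * error_rate      # switching function with rate damping
    if abs(s) < deadband:
        return 0.0                       # inside the deadband: save propellant
    return thrust if s > 0 else -thrust

# drive the simulated vehicle from x = 0 m toward x_ref = 1.0 m on a frictionless table
x, v, mass, dt = 0.0, 0.0, 20.0, 0.02
for _ in range(2000):
    u = bang_bang_thrust(1.0 - x, -v)    # position error and its rate of change
    v += (u / mass) * dt
    x += v * dt
print(f"final position {x:.3f} m, final velocity {v:.4f} m/s")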
23

Holland, Courtney L. "Characterization of robotic tail orientation as a function of platform position for surf-zone robots". Thesis, Monterey, Calif. : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Jun/09Jun%5FHolland.pdf.

Full text
Abstract
Thesis (M.S. in Applied Physics)--Naval Postgraduate School, June 2009.
Thesis Advisor(s): Harkins, Richard. "June 2009." Description based on title screen as viewed on July 10, 2009. Author(s) subject terms: Amphibious, Autonomous, Robotics, WHEGS. Includes bibliographical references (p. 84-85). Also available in print.
24

Ward, Jason L. "Design of a prototype autonomous amphibious WHEGS robot for surf-zone operations". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FWard.pdf.

Full text
25

Bautista, Ballester Jordi. "Human-robot interaction and computer-vision-based services for autonomous robots". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/398647.

Full text
Abstract
Imitation Learning (IL), or robot Programming by Demonstration (PbD), covers methods by which a robot learns new skills through human guidance and imitation. PbD takes its inspiration from the way humans learn new skills by imitation in order to develop methods by which new tasks can be transmitted to robots. This thesis is motivated by the generic question of "what to imitate?", which concerns the problem of how to extract the essential features of a task. To this end, we adopt an Action Recognition (AR) perspective in order to allow the robot to decide what has to be imitated or inferred when interacting with a human. The proposed approach is based on a well-known method from natural language processing: namely, Bag of Words (BoW). This method is applied to large databases in order to obtain a trained model. Although BoW is a machine learning technique used in various fields of research, in action classification for robot learning it is far from accurate. Moreover, it focuses on the classification of objects and gestures rather than actions. Thus, in this thesis we show that the method is suitable in action classification scenarios for merging information from different sources or different trials. This thesis makes three contributions: (1) it proposes a general method for dealing with action recognition and thus contributes to imitation learning; (2) the methodology can be applied to large databases which include different modes of action capture; and (3) the method is applied in a real international innovation project called Vinbot.
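A Bag of Words pipeline of the kind used above quantises local descriptors into a learned codebook and represents each action clip as a histogram of codeword counts, which is then classified. The scikit-learn sketch below only illustrates that pipeline; the descriptors are random stand-ins for real spatio-temporal features, and the codebook size and classifier are arbitrary choices.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# stand-in data: each clip yields 100 local descriptors of dimension 32, two action classes
clips = [rng.normal(loc=label, size=(100, 32)) for label in (0, 1) for _ in range(20)]
labels = [label for label in (0, 1) for _ in range(20)]

codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(np.vstack(clips))

def bow_histogram(descriptors):
    words = codebook.predict(descriptors)                    # assign each descriptor to a codeword
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()                                 # normalised codeword histogram

X = np.array([bow_histogram(clip) for clip in clips])
classifier = LinearSVC().fit(X, labels)
print("training accuracy:", classifier.score(X, labels))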
26

Cowling, Michael. "Non-Speech Environmental Sound Classification System for Autonomous Surveillance". Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20040428.152425.

Full text
Abstract
Sound is one of a human being's most important senses. After vision, it is the sense most used to gather information about the environment. Despite this, comparatively little research has been done into the field of sound recognition. The research that has been done mainly centres around the recognition of speech and music. Our auditory environment is made up of many sounds other than speech and music. This sound information can be tapped into for the benefit of specific applications such as security systems. Currently, most researchers are ignoring this sound information. This thesis investigates techniques to recognise environmental non-speech sounds and their direction, with the purpose of using these techniques in an autonomous mobile surveillance robot. It also presents advanced methods to improve the accuracy and efficiency of these techniques. Initially, this report presents an extensive literature survey, looking at the few existing techniques for non-speech environmental sound recognition. This survey also, by necessity, investigates existing techniques used for sound recognition in speech and music. It also examines techniques used for direction detection of sounds. The techniques that have been identified are then comprehensively compared to determine the most appropriate techniques for non-speech sound recognition. A comprehensive comparison is performed using non-speech sounds and several runs are performed to ensure accuracy. These techniques are then ranked based on their effectiveness. The best technique is found to be either Continuous Wavelet Transform feature extraction with Dynamic Time Warping or Mel-Frequency Cepstral Coefficients with Dynamic Time Warping. Both of these techniques achieve a 70% recognition rate. Once the best of the existing classification techniques is identified, the problem of uncountable sounds in the environment can be addressed. Unlike speech recognition, non-speech sound recognition requires recognition from a much wider library of sounds. Due to this near-infinite set of example sounds, the characteristics and complexity of non-speech sound recognition techniques increase. To address this problem, a systematic scheme needs to be developed for non-speech sound classification. Several different approaches are examined. Included is a new design for an environmental sound taxonomy based on an environmental sound alphabet. This taxonomy works over three levels and classifies sounds based on their physical characteristics. Its performance is compared with a technique that generates a structured tree automatically. These structured techniques are compared for different data sets and results are analysed. Comparable results are achieved for these techniques with the same data set as previously used. In addition, the results and further information from these experiments are used to infer some information about the structure of environmental sounds in general. Finally, conclusions are drawn on both sets of techniques and areas of future research stemming from this thesis are explored.
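The best-performing combination reported above pairs frame-level features (such as MFCCs) with Dynamic Time Warping as the distance for a nearest-neighbour decision. The sketch below implements only the DTW distance and a 1-nearest-neighbour rule over already-extracted feature sequences; the random "templates" merely stand in for real sound features, and feature extraction itself is omitted.

import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])          # local frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """1-nearest-neighbour under DTW; templates is a list of (label, feature_sequence) pairs."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

rng = np.random.default_rng(0)
templates = [("glass_break", rng.normal(0, 1, (40, 13))), ("footsteps", rng.normal(3, 1, (55, 13)))]
query = rng.normal(3, 1, (48, 13))          # an unlabelled sound, closer to the "footsteps" template
print(classify(query, templates))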
27

Cowling, Michael. "Non-Speech Environmental Sound Classification System for Autonomous Surveillance". Thesis, Griffith University, 2004. http://hdl.handle.net/10072/365386.

Full text
Abstract
Sound is one of a human being's most important senses. After vision, it is the sense most used to gather information about the environment. Despite this, comparatively little research has been done into the field of sound recognition. The research that has been done mainly centres around the recognition of speech and music. Our auditory environment is made up of many sounds other than speech and music. This sound information can be tapped into for the benefit of specific applications such as security systems. Currently, most researchers are ignoring this sound information. This thesis investigates techniques to recognise environmental non-speech sounds and their direction, with the purpose of using these techniques in an autonomous mobile surveillance robot. It also presents advanced methods to improve the accuracy and efficiency of these techniques. Initially, this report presents an extensive literature survey, looking at the few existing techniques for non-speech environmental sound recognition. This survey also, by necessity, investigates existing techniques used for sound recognition in speech and music. It also examines techniques used for direction detection of sounds. The techniques that have been identified are then comprehensively compared to determine the most appropriate techniques for non-speech sound recognition. A comprehensive comparison is performed using non-speech sounds and several runs are performed to ensure accuracy. These techniques are then ranked based on their effectiveness. The best technique is found to be either Continuous Wavelet Transform feature extraction with Dynamic Time Warping or Mel-Frequency Cepstral Coefficients with Dynamic Time Warping. Both of these techniques achieve a 70% recognition rate. Once the best of the existing classification techniques is identified, the problem of uncountable sounds in the environment can be addressed. Unlike speech recognition, non-speech sound recognition requires recognition from a much wider library of sounds. Due to this near-infinite set of example sounds, the characteristics and complexity of non-speech sound recognition techniques increase. To address this problem, a systematic scheme needs to be developed for non-speech sound classification. Several different approaches are examined. Included is a new design for an environmental sound taxonomy based on an environmental sound alphabet. This taxonomy works over three levels and classifies sounds based on their physical characteristics. Its performance is compared with a technique that generates a structured tree automatically. These structured techniques are compared for different data sets and results are analysed. Comparable results are achieved for these techniques with the same data set as previously used. In addition, the results and further information from these experiments are used to infer some information about the structure of environmental sounds in general. Finally, conclusions are drawn on both sets of techniques and areas of future research stemming from this thesis are explored.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information Technology
Full Text
28

Marvel, Jeremy Alan. "Autonomous Learning for Robotic Assembly Applications". Cleveland, Ohio : Case Western Reserve University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1268187684.

Full text
Abstract
Thesis (Doctor of Philosophy)--Case Western Reserve University, 2010
Department of EECS - Computer Engineering. Title from PDF (viewed on 2010-05-25). Includes abstract. Includes bibliographical references and appendices. Available online via the OhioLINK ETD Center.
29

Chen, Xingping. "Robust nonlinear trailing control for multiple mobile autonomous agents formation". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155591282.

Full text
30

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots". Griffith University. School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070911.152447.

Full text
Abstract
Image analysis and its application to sensory input (computer vision) is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas, making several innovative contributions in each. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive rather than imperative: instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. Many development-support tools are available, but only for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of robotic soccer, in the international RoboCup Four-Legged league.
We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
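The abstract above does not give implementation details; purely as a hedged Python illustration of the kind of illumination tolerance it discusses, the sketch below segments an image by normalized rg chromaticity, which discards much of the brightness variation caused by changing lighting. The target chromaticity, tolerance and test frame are invented for the example and are not taken from the thesis.

    import numpy as np

    def chromaticity(image):
        """Convert an RGB image (H x W x 3, uint8) to normalized rg chromaticity,
        which is largely invariant to uniform changes in illumination intensity."""
        rgb = image.astype(np.float64) + 1e-6          # avoid division by zero
        s = rgb.sum(axis=2, keepdims=True)
        return rgb[..., :2] / s                        # keep only r and g; b = 1 - r - g

    def segment_colour(image, target_rg, tol=0.05):
        """Boolean mask of pixels whose chromaticity lies within `tol` of a target
        chromaticity (e.g. roughly the orange of a RoboCup ball)."""
        rg = chromaticity(image)
        dist = np.linalg.norm(rg - np.asarray(target_rg), axis=2)
        return dist < tol

    # Hypothetical usage: look for a roughly orange region in a random test frame.
    frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
    mask = segment_colour(frame, target_rg=(0.55, 0.30))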
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots". Thesis, Griffith University, 2006. http://hdl.handle.net/10072/367107.

Texto completo
Resumen
Image analysis and its application to sensory input (computer vision) is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas, making several innovative contributions in each. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive rather than imperative: instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. Many development-support tools are available, but only for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of robotic soccer, in the international RoboCup Four-Legged league.
We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Full Text
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Fetzek, Charles A. "Behavior-based power management in autonomous mobile robots". Wright-Patterson AFB : Air Force Institute of Technology, 2008. http://handle.dtic.mil/100.2/ADA487084.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Axelsson, Henrik. "Hybrid control of multiple autonomous mobile robots". Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15439.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Gat, Erann. "Reliable goal-directed reactive control of autonomous mobile robots". Diss., This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-07282008-134502/.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Crous, C. B. "Autonomous robot path planning". Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2519.

Texto completo
Resumen
Thesis (MSc (Mathematical Sciences. Computer Science))--University of Stellenbosch, 2009.
In this thesis we consider the dynamic path planning problem for robotics. The dynamic path planning problem, in short, is the task of determining an optimal path, in terms of minimising a given cost function, from one location to another within a known environment of moving obstacles. Our goal is to investigate a number of well-known path planning algorithms, to determine for which circumstances a particular algorithm is best suited, and to propose changes to existing algorithms to make them perform better in dynamic environments. At this stage no thorough comparison of theoretical and actual running times of path planning algorithms exists. Our main goal is to address this shortcoming by comparing some of the well-known path planning algorithms and our own improvements to these algorithms in a simulation environment. We show that the visibility graph representation of the environment combined with the A* algorithm provides very good results for both path length and computational cost, for a relatively small number of obstacles. As for a grid representation of the environment, we show that the A* algorithm produces good paths in terms of length and the amount of rotation, and requires less computation than dynamic algorithms such as D* and D* Lite.
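Since the abstract compares A*, D* and D* Lite without showing any code, here is a minimal, hedged Python sketch of grid-based A* as a point of reference; the 4-connected grid, unit step costs and Manhattan heuristic are illustrative assumptions rather than the thesis's exact configuration.

    import heapq
    from itertools import count

    def astar_grid(grid, start, goal):
        """A* on a 4-connected occupancy grid (list of lists; 0 = free, 1 = obstacle).
        Returns a list of (row, col) cells from start to goal, or None if unreachable."""
        def h(cell):                                   # Manhattan-distance heuristic
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        tie = count()                                  # tie-breaker so the heap never compares parents
        open_set = [(h(start), 0, next(tie), start, None)]
        parents, g_cost = {}, {start: 0}
        while open_set:
            f, g, _, cell, parent = heapq.heappop(open_set)
            if cell in parents:                        # already expanded with an equal or better cost
                continue
            parents[cell] = parent
            if cell == goal:                           # reconstruct the path by walking back to start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    ng = g + 1                         # unit move cost (an assumption)
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cell))
        return None

    # Hypothetical usage on a small map with one obstacle wall.
    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    print(astar_grid(grid, (0, 0), (2, 0)))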
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Pini, Giovanni. "Towards autonomous task partitioning in swarm robotics: experiments with foraging robots". Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209469.

Texto completo
Resumen
In this thesis, we propose an approach to achieve autonomous task partitioning in swarms of robots. Task partitioning is the process by which tasks are decomposed into sub-tasks and it is often an advantageous way of organizing work in groups of individuals. Therefore, it is interesting to study its application to swarm robotics, in which groups of robots are deployed to collectively carry out a mission. The capability of partitioning tasks autonomously can enhance the flexibility of swarm robotics systems because the robots can adapt the way they decompose and perform their work depending on specific environmental conditions and goals. So far, few studies have been presented on the topic of task partitioning in the context of swarm robotics. Additionally, in all the existing studies, there is no separation between the task partitioning methods and the behavior of the robots and often task partitioning relies on characteristics of the environments in which the robots operate.

This limits the applicability of these methods to the specific contexts for which they have been built. The work presented in this thesis represents the first steps towards a general framework for autonomous task partitioning in swarms of robots. We study task partitioning in foraging, since foraging abstracts practical real-world problems. The approach we propose in this thesis is therefore studied in experiments in which the goal is to achieve autonomous task partitioning in foraging. However, in the proposed approach, the task partitioning process relies upon general, task-independent concepts and we are therefore confident that it is applicable in other contexts. We identify two main capabilities that the robots should have: i) being capable of selecting whether to employ task partitioning and ii) defining the sub-tasks of a given task. We propose and study algorithms that endow a swarm of robots with these capabilities.
Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished

Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Dag, Antymos. "Autonomous Indoor Navigation System for Mobile Robots". Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129419.

Texto completo
Resumen
With an increasing need for greater traffic safety, there is an increasing demand for means by which solutions to the traffic safety problem can be studied. The purpose of this thesis is to investigate the feasibility of using an autonomous indoor navigation system as a component in a demonstration system for studying cooperative vehicular scenarios. Our method involves developing and evaluating such a navigation system. Our navigation system uses a pre-existing localization system based on passive RFID, odometry and a particle filter. The localization system is used to estimate the robot pose, which is used to calculate a trajectory to the goal. A control system with a feedback loop is used to control the robot actuators and to drive the robot to the goal.   The results of our evaluation tests show that the system generally fulfills the performance requirements stated for the tests. There is however some uncertainty about the consistency of its performance. Results did not indicate that this was caused by the choice of localization techniques. The conclusion is that an autonomous navigation system using the aforementioned localization techniques is plausible for use in a demonstration system. However, we suggest that the system is further tested and evaluated before it is used with applications where accuracy is prioritized.
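The navigation system described above combines passive RFID, odometry and a particle filter; as a rough Python sketch only, the following code shows a generic particle-filter cycle with an odometry-driven prediction step and a range-like measurement to a known tag position. The unicycle motion model, noise values, tag layout and the treatment of the RFID reading as a range measurement are all assumptions made for illustration, not the thesis's actual design.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict(particles, v, w, dt, noise=(0.02, 0.02)):
        """Propagate (x, y, theta) particles with a unicycle odometry model plus noise."""
        n = len(particles)
        v_n = v + rng.normal(0, noise[0], n)
        w_n = w + rng.normal(0, noise[1], n)
        particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
        particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
        particles[:, 2] += w_n * dt
        return particles

    def update(particles, measured_range, tag_xy, sigma=0.1):
        """Weight particles by how well their predicted range to a known tag matches
        the measurement, then resample in proportion to the weights."""
        d = np.hypot(particles[:, 0] - tag_xy[0], particles[:, 1] - tag_xy[1])
        w = np.exp(-0.5 * ((d - measured_range) / sigma) ** 2) + 1e-12
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    # Hypothetical usage: 500 particles, one prediction-update cycle.
    particles = rng.uniform([-1, -1, -np.pi], [1, 1, np.pi], size=(500, 3))
    particles = predict(particles, v=0.2, w=0.1, dt=0.1)
    particles = update(particles, measured_range=1.3, tag_xy=(1.0, 1.0))
    pose_estimate = particles.mean(axis=0)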
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Akanyeti, Otar. "Automatic code generation for autonomous mobile robots". Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499781.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Hamilton, Kelvin. "An integrated diagnostic architecture for autonomous robots". Thesis, Heriot-Watt University, 2002. http://hdl.handle.net/10399/100.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Neto, Hugo Vieira. "Visual novelty detection for autonomous inspection robots". Thesis, University of Essex, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428977.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Rajan, Vishnu Arun Kumar Thumatty. "Tether management techniques for autonomous mobile robots". Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517864.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Colombini, Esther Luna. "Module-based learning in autonomous mobile robots". Instituto Tecnológico de Aeronáutica, 2005. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=213.

Texto completo
Resumen
The information available to robots in real tasks is widely distributed in both space and time, requiring the agent to search for relevant information. In this work, a solution that uses qualitative and quantitative knowledge of the task is implemented so that real robotic tasks become tractable for Reinforcement Learning (RL) algorithms. The steps of this procedure are: 1) decompose the complete task into smaller tasks, using abstraction and macro-operators, so that a discrete action space is obtained; 2) apply a state-space representation model in order to achieve discretization in both state and time; 3) use quantitative knowledge to design controllers capable of solving the subtasks; 4) learn the coordination of these behaviours using RL, more specifically the Q-learning algorithm. The proposed method was verified on a set of tasks of increasing complexity using a simulator for the Khepera robot. Two discretization models for the state space were used, one based on states and another based on attributes (observation functions of the environment). The policies learned over these two models were compared to a predefined policy. The results showed that the policy learned over the state-based discretization model reaches better results more quickly, although it cannot be applied to more complex tasks, where the state space under this representation becomes computationally infeasible and a generalization method must be applied. The chosen generalization method implements the CMAC (Cerebellar Model Articulation Controller) structure over the state-based discretization model. The results showed that the compact representation allows the learning algorithm to be applied to this model, although, in this case, the policy learned under the attribute-based discretization model performs better.
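As a generic, hedged illustration of the tabular Q-learning update that the abstract says is used to learn behaviour coordination (the CMAC generalization layer is omitted), a minimal Python sketch follows; the state labels, behaviour names and parameters are placeholders, not the thesis's actual configuration.

    import random
    from collections import defaultdict

    def q_learning_step(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.95):
        """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        """Pick a random behaviour with probability epsilon, otherwise the greedy one."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    # Hypothetical usage: coordinating three low-level behaviours.
    behaviours = ["wall_follow", "go_to_goal", "avoid_obstacle"]
    Q = defaultdict(float)
    state, action = "corridor", "wall_follow"
    q_learning_step(Q, state, action, reward=1.0, next_state="open_space", actions=behaviours)
    next_action = epsilon_greedy(Q, "open_space", behaviours)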
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Vieira, Neto Hugo. "Visual novelty detection for autonomous inspection robots". University of Essex, 2006. http://repositorio.utfpr.edu.br/jspui/handle/1/644.

Texto completo
Resumen
CAPES
Mobile robot applications that involve automated exploration and inspection of environments are often dependent on novelty detection, the ability to differentiate between common and uncommon perceptions. Because novelty can be anything that deviates from the normal context, we argue that in order to implement a novelty filter it is necessary to exploit the robot's sensory data from the ground up, building models of normality rather than abnormality. In this work we use unrestricted colour visual data as perceptual input to on-line incremental learning algorithms. Unlike other sensor modalities, vision can provide a variety of useful information about the environment through massive amounts of data, which often need to be reduced for real-time operation. Here we use mechanisms of visual attention to select candidate image regions to be encoded and fed to higher levels of processing, enabling the localisation of novel features within the input image frame. An extensive series of experiments using visual input, obtained by a real mobile robot interacting with laboratory and medium-scale real-world environments, is used to discuss different visual novelty filter configurations. We compare the performance and functionality of novelty detection mechanisms based on the Grow-When-Required neural network and incremental Principal Component Analysis. Results are assessed using both qualitative and quantitative methods, demonstrating the advantages and disadvantages of each investigated approach.
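The abstract compares a Grow-When-Required network with incremental PCA; as a hedged Python sketch of the latter style of novelty filter only, the code below learns a model of "normal" image patches and flags patches with large reconstruction error as novel. The patch size, number of components and threshold are illustrative choices, and scikit-learn's IncrementalPCA merely stands in for whatever implementation the thesis uses.

    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    ipca = IncrementalPCA(n_components=8)

    def learn_normal(patches):
        """Update the model of 'normality' with a batch of flattened image patches."""
        ipca.partial_fit(patches)

    def novelty_scores(patches):
        """Reconstruction error per patch; large values indicate novel appearance."""
        recon = ipca.inverse_transform(ipca.transform(patches))
        return np.linalg.norm(patches - recon, axis=1)

    # Hypothetical usage with random 16x16 grayscale patches flattened to vectors.
    rng = np.random.default_rng(1)
    normal_patches = rng.normal(size=(200, 256))
    learn_normal(normal_patches)
    test_patches = rng.normal(size=(10, 256))
    flags = novelty_scores(test_patches) > 20.0      # hand-picked threshold for the example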
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Beck, Zoltan. "Collaborative search and rescue by autonomous robots". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/411031/.

Texto completo
Resumen
In recent years, professional first responders have started to use novel technologies at the scene of disasters in order to save more lives. Increasingly, they use robots to search disaster sites. Among the most widely and successfully used robot platforms in the disaster response domain are unmanned aerial vehicles (UAVs). UAVs allow remote inspection and mapping. They are able to provide high-resolution imagery and often need minimal infrastructure to fly. This allows settings where multiple UAVs are airborne, accelerating information gathering from the disaster site. However, current deployments use labour-intensive, individually teleoperated UAVs. Given this, there is a drive toward using multiple robots operating with a certain level of autonomy, in order to decrease the operators' workload. One approach for utilising multiple robots in this way is semi-autonomous operation supervised by a small number of professionals, only requiring human experts for crucial decisions. Current commercial UAV platforms also allow the deployment of a diverse group of robots, combining their individual capabilities to be more efficient. For example, fixed-wing UAVs are capable of flying faster and carrying a larger payload, but when they do so, they should be deployed with higher safety measures (safety pilots are required for non-lightweight aircraft). On the other hand, small rotary-wing UAVs are more agile and can approach and provide imagery of objects on the ground. To this end, this thesis develops a number of new approaches for the collaboration of a heterogeneous group of robots in disaster response. More specifically, the problem of collaborative planning with robots operating in an uncertain, workflow-based setting is investigated by solving the search and rescue (SAR) collaboration problem. Of course, the problem complexity increases when collaborating with different robots. This setting is no different: the actions of different types of robots need to be planned with dependencies between them under uncertainty. To date, research on collaboration between multiple robots has typically focused on known settings, where the possible robot actions are defined as a set of tasks. However, in most real-world settings, there is a significant amount of uncertainty present. For example, information about a disaster site develops gradually during disaster relief; thus, initially there is often very little certainty about the locations of people requiring assistance (e.g. damaged buildings, trapped victims, or supply shortages). Existing solutions that tackle collaboration in the face of uncertain information are typically limited to simple exploration or target search problems. Moreover, the use of generic temporal planners rapidly becomes intractable for such problems unless applied in a domain-specific manner. Finally, domain-specific approaches rarely involve complex action relations, such as task dependencies where the actions of some robots are built on the actions of others. When they do so, decomposition techniques are applied to decrease the problem complexity, or simple heuristics are applied to enhance similar collaboration. Such approaches often lead to low-quality solutions, because vital action dependencies across different roles are not taken into account during the optimisation. Against this background, we offer novel online planning approaches for heterogeneous multi-robot collaboration under uncertainty.
First, we provide a negotiation-based bidirectional collaborative planning approach that exploits the potential in determinisation via hindsight optimisation (HOP) combined with long-term planning. Second, we extend this approach to create an anytime Monte Carlo tree search planner that also utilises HOP combined with long-term planning. In online planning settings, such as SAR, anytime planners are beneficial to ensure that a feasible plan can be provided within the given computational budget. Third, we construct a scenario close to physical deployment that allows us to show how our long-term collaborative planning outperforms current state-of-the-art path-planning approaches by 25%. We conclude that long-term collaborative planning under uncertainty provides an improvement when planning in SAR settings. When combined, the contributions presented in this thesis represent an advancement in the state of the art in the field of online planning under uncertainty. The approaches and methods presented can be applied in collaborative settings when uncertainty plays an important role in defining dependencies between partial planning problems.
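As a schematic, hedged Python illustration of determinisation via hindsight optimisation (not the thesis's actual planner): uncertainty is sampled into a set of deterministic scenarios, each candidate action is evaluated against every scenario with a deterministic value estimate, and the action with the best average outcome is chosen. The scenario sampler and value function below are invented placeholders.

    import random

    def hindsight_choose(candidate_actions, sample_scenario, deterministic_value, n_samples=20):
        """Pick the action with the best average value over sampled deterministic scenarios."""
        scenarios = [sample_scenario() for _ in range(n_samples)]
        def avg_value(action):
            return sum(deterministic_value(action, s) for s in scenarios) / n_samples
        return max(candidate_actions, key=avg_value)

    # Hypothetical usage: two search regions with an uncertain number of victims in each.
    def sample_scenario():
        return {"north": random.randint(0, 3), "south": random.randint(0, 5)}

    def deterministic_value(action, scenario):
        return scenario[action]          # victims found if that region is searched first

    best = hindsight_choose(["north", "south"], sample_scenario, deterministic_value)
    print(best)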
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16206/1/Maxim_Mikhalsky_Thesis.pdf.

Texto completo
Resumen
Autonomy is the most enabling and the least developed robot capability. A mobile robot is autonomous if capable of independently attaining its objectives in an unpredictable environment. This requires interaction with the environment by sensing, assessing, and responding to events. Such interaction has not been achieved. The core problem consists in limited understanding of robot autonomy and its aspects, and is exacerbated by the limited resources available in a small autonomous mobile robot such as energy, information, and space. This thesis describes an efficient biomorphic visual capability that can provide purposeful interaction with the environment for a small autonomous mobile robot. The method used for achieving this capability comprises synthesis of an integral paradigm of a purposeful autonomous mobile robot, formulation of requirements for the visual capability, and development of efficient algorithmic and technological solutions. The paradigm is a product of analysis of fundamental aspects of the problem, and the insights found in inherently autonomous biological organisms. Based on this paradigm, analysis of the biological vision and the available technological basis, and the state-of-the-art in vision algorithms, the requirements were formulated for a biomorphic visual capability that provides the situation awareness capability for a small autonomous mobile robot. The developed visual capability is comprised of a sensory and processing architecture, an integral set of motion vision algorithms, and a method for visual ranging of still objects that is based on them. These vision algorithms provide motion detection, fixation, and tracking functionality with low latency and computational complexity. High temporal resolution of CMOS imagers is exploited for reducing the logical complexity of image analysis, and consequently the computational complexity of the algorithms. The structure of the developed algorithms conforms to the arithmetic and memory resources available in a system on a programmable chip (SoPC), which allows complete confinement of the high-bandwidth datapath within a SoPC device and therefore high-speed operation by design. The algorithms proved to be functional, which validates the developed visual capability. The experiments confirm that high temporal resolution imaging simplifies image motion structure, and ultimately the design of the robot vision system.
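The abstract argues that high temporal resolution simplifies image motion structure; as a hedged Python sketch of that idea only, the code below detects motion by simple differencing of consecutive grayscale frames and returns a fixation target, an approach that becomes more reliable as the inter-frame interval shrinks. The threshold and synthetic frames are placeholders, and this is not the SoPC implementation described in the thesis.

    import numpy as np

    def motion_mask(prev_frame, frame, threshold=15):
        """Boolean mask of pixels whose intensity changed by more than `threshold`
        between two consecutive grayscale frames (uint8 arrays of equal shape)."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > threshold

    def motion_centroid(mask):
        """Rough fixation target: centroid of the moving pixels, or None if nothing moved."""
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        return float(xs.mean()), float(ys.mean())

    # Hypothetical usage with two synthetic frames.
    prev = np.zeros((120, 160), dtype=np.uint8)
    curr = prev.copy()
    curr[40:60, 70:90] = 200                     # a bright object appears
    print(motion_centroid(motion_mask(prev, curr)))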
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Queensland University of Technology, 2006. http://eprints.qut.edu.au/16206/.

Texto completo
Resumen
Autonomy is the most enabling and the least developed robot capability. A mobile robot is autonomous if capable of independently attaining its objectives in an unpredictable environment. This requires interaction with the environment by sensing, assessing, and responding to events. Such interaction has not been achieved. The core problem consists in limited understanding of robot autonomy and its aspects, and is exacerbated by the limited resources available in a small autonomous mobile robot such as energy, information, and space. This thesis describes an efficient biomorphic visual capability that can provide purposeful interaction with the environment for a small autonomous mobile robot. The method used for achieving this capability comprises synthesis of an integral paradigm of a purposeful autonomous mobile robot, formulation of requirements for the visual capability, and development of efficient algorithmic and technological solutions. The paradigm is a product of analysis of fundamental aspects of the problem, and the insights found in inherently autonomous biological organisms. Based on this paradigm, analysis of the biological vision and the available technological basis, and the state-of-the-art in vision algorithms, the requirements were formulated for a biomorphic visual capability that provides the situation awareness capability for a small autonomous mobile robot. The developed visual capability is comprised of a sensory and processing architecture, an integral set of motion vision algorithms, and a method for visual ranging of still objects that is based on them. These vision algorithms provide motion detection, fixation, and tracking functionality with low latency and computational complexity. High temporal resolution of CMOS imagers is exploited for reducing the logical complexity of image analysis, and consequently the computational complexity of the algorithms. The structure of the developed algorithms conforms to the arithmetic and memory resources available in a system on a programmable chip (SoPC), which allows complete confinement of the high-bandwidth datapath within a SoPC device and therefore high-speed operation by design. The algorithms proved to be functional, which validates the developed visual capability. The experiments confirm that high temporal resolution imaging simplifies image motion structure, and ultimately the design of the robot vision system.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Cowlagi, Raghvendra V. "Hierarchical motion planning for autonomous aerial and terrestrial vehicles". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41066.

Texto completo
Resumen
Autonomous mobile robots - both aerial and terrestrial vehicles - have gained immense importance due to the broad spectrum of their potential military and civilian applications. One of the indispensable requirements for the autonomy of a mobile vehicle is the vehicle's capability of planning and executing its motion, that is, finding appropriate control inputs for the vehicle such that the resulting vehicle motion satisfies the requirements of the vehicular task. The motion planning and control problem is inherently complex because it involves two disparate sub-problems: (1) satisfaction of the vehicular task requirements, which requires tools from combinatorics and/or formal methods, and (2) design of the vehicle control laws, which requires tools from dynamical systems and control theory. Accordingly, this problem is usually decomposed and solved over two levels of hierarchy. The higher level, called the geometric path planning level, finds a geometric path that satisfies the vehicular task requirements, e.g., obstacle avoidance. The lower level, called the trajectory planning level, involves sufficient smoothening of this geometric path followed by a suitable time parametrization to obtain a reference trajectory for the vehicle. Although simple and efficient, such hierarchical separation suffers a serious drawback: the geometric path planner has no information of the kinematic and dynamic constraints of the vehicle. Consequently, the geometric planner may produce paths that the trajectory planner cannot transform into a feasible reference trajectory. Two main ideas appear in the literature to remedy this problem: (a) randomized sampling-based planning, which eliminates altogether the geometric planner by planning in the vehicle state space, and (b) geometric planning supported by feedback control laws. The former class of methods suffer from a lack of optimality of the resultant trajectory, while the latter class of methods makes a restrictive assumption concerning the vehicle kinematic model. In this thesis, we propose a hierarchical motion planning framework based on a novel mode of interaction between these two levels of planning. This interaction rests on the solution of a special shortest-path problem on graphs, namely, one using costs defined on multiple edge transitions in the path instead of the usual single edge transition costs. These costs are provided by a local trajectory generation algorithm, which we implement using model predictive control and the concept of effective target sets for simplifying the non-convex constraints involved in the problem. The proposed motion planner ensures "consistency" between the two levels of planning, i.e., a guarantee that the higher level geometric path is always associated with a kinematically and dynamically feasible trajectory. We show that the proposed motion planning approach offers distinct advantages in comparison with the competing approaches of discretization of the state space, of randomized sampling-based motion planning, and of local feedback-based, decoupled hierarchical motion planning. Finally, we propose a multi-resolution implementation of the proposed motion planner, which requires accurate descriptions of the environment and the vehicle only for short-term, local motion planning in the immediate vicinity of the vehicle.
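The central search idea summarised above, costs defined on multiple consecutive edge transitions rather than on single edges, can be sketched in Python by running Dijkstra on a lifted graph whose states are directed edges of the original graph (a two-edge history case); the toy graph and turn-penalty cost function below are invented for illustration and are not the thesis's cost model.

    import heapq

    def shortest_path_pairwise(edges, cost, start, goal):
        """Dijkstra on the lifted graph whose states are directed edges (u, v),
        so the cost of appending edge (v, w) may depend on the previous edge (u, v)."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
        # Initial states: single edges leaving the start, costed with no edge history.
        pq = [(cost(None, (start, v)), (start, v), [start, v]) for v in adj.get(start, [])]
        heapq.heapify(pq)
        best = {}
        while pq:
            c, (u, v), path = heapq.heappop(pq)
            if best.get((u, v), float("inf")) <= c:
                continue                             # a cheaper way to reach this edge-state exists
            best[(u, v)] = c
            if v == goal:
                return c, path
            for w in adj.get(v, []):
                heapq.heappush(pq, (c + cost((u, v), (v, w)), (v, w), path + [w]))
        return None

    # Hypothetical usage: penalise immediately doubling back to the previous node.
    edges = [("A", "B"), ("B", "C"), ("B", "A"), ("A", "C"), ("C", "D"), ("B", "D")]
    def cost(prev_edge, edge):
        turn_penalty = 5 if prev_edge and prev_edge[0] == edge[1] else 0
        return 1 + turn_penalty
    print(shortest_path_pairwise(edges, cost, "A", "D"))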
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Stoytchev, Alexander. "Robot Tool Behavior: A Developmental Approach to Autonomous Tool Use". Diss., Available online, Georgia Institute of Technology, 2007. http://etd.gatech.edu/theses/available/etd-06112007-013056/.

Texto completo
Resumen
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008.
Isbell, Charles, Committee Member ; Lipkin, Harvey, Committee Member ; Balch, Tucker, Committee Member ; Bobick, Aaron, Committee Member ; Arkin, Ronald, Committee Chair.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Hague, Tony. "Motion planning for autonomous guided vehicles". Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358592.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.