Dissertations on the topic "Interaction robot-Robot"

Consult the top 50 dissertations for your research on the topic "Interaction robot-Robot".

Next to every work in the list there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract, provided the relevant parameters are present in its metadata.

Browse dissertations from a wide range of subject areas and assemble your bibliography correctly.

1

Akan, Batu. „Human Robot Interaction Solutions for Intuitive Industrial Robot Programming“. Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.

Abstract:
Over the past few decades the use of industrial robots has increased the efficiency as well as competitiveness of many companies. Despite this fact, in many cases, robot automation investments are considered to be technically challenging. In addition, for most small and medium sized enterprises (SME) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, thus assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. In this thesis we mainly address two issues. The first issue is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate-based programming paradigm, which currently dominates the field, to an object-based programming scheme. The second issue addressed is a general framework for implementing multimodal interfaces. There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline, and includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
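The object-based command idea described above can be illustrated with a small sketch: a spoken verb and a gesture-selected object are fused into one high-level command. All names here (`fuse_command`, the verb table, the scene set) are invented for illustration and are not taken from the thesis framework.

```python
def fuse_command(utterance, pointed_object, scene):
    """Fuse a spoken verb with a gesture-selected (pointed-at) object."""
    verbs = {"pick": "PICK", "place": "PLACE", "move": "MOVE"}
    tokens = utterance.lower().split()
    verb = next((verbs[t] for t in tokens if t in verbs), None)
    if verb is None:
        return None                      # no known verb: not a command
    if pointed_object in scene:          # a valid deictic gesture wins
        return (verb, pointed_object)
    # otherwise fall back to an object named in the utterance
    target = next((o for o in scene if o in tokens), None)
    return (verb, target) if target else None

scene = {"bolt", "plate", "gearbox"}
print(fuse_command("pick this one", "bolt", scene))  # ('PICK', 'bolt')
```

The point of the sketch is the shift of focus: the operator names an object and an action, not coordinates.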
2

Ali, Muhammad. „Contribution to decisional human-robot interaction: towards collaborative robot companions“. PhD thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00719684.

Abstract:
Human-robot interaction is entering an interesting phase in which the relationship between a human and a robot is envisioned as a partnership rather than a simple master-slave relation. For this to become a reality, the robot needs to understand human behavior. It is not enough for it to react appropriately; it must also be socially proactive. To put such behavior into practice, the roboticist must draw on the already rich socio-cognitive literature on humans. In this work, we identify the key elements of such an interaction in the context of a shared task, with particular attention to how humans must collaborate to carry out a joint action successfully. We show how these elements can be applied to a robotic system in order to enrich social human-robot interaction at the decision-making level. In this respect, a contribution to the management of the robot's high-level goals and its proactive behavior is presented, and a collaborative decision model for a joint task with a human is described. Finally, the study of human-robot interaction shows the value of choosing the right moment for a communicative action during joint activities with a human.
3

Ali, Muhammad. „Contributions to decisional human-robot interaction : towards collaborative robot companions“. Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0003/document.

Abstract:
Human-robot interaction is entering an interesting phase in which the relationship with a robot is envisioned more as one of companionship with the human partner than as a mere master-slave relationship. For this to become a reality, the robot needs to understand human behavior and not only react appropriately but also be socially proactive. A companion robot will also need to collaborate with the human in daily life and will require a reasoning mechanism to manage the collaboration, as well as to handle the uncertainty in the human's intention to engage and collaborate. In this work, we identify key elements of such interaction in the context of a collaborative activity, with special focus on how humans successfully collaborate to achieve a joint action. We show the application of these elements in a robotic system to enrich the social decision-making aspect of its human-robot interaction. In this respect, we provide a contribution to managing robot high-level goals and proactive behavior, and a description of a coactivity decision model for collaborative human-robot tasks. Finally, an HRI user study demonstrates the importance of timing verbal communication in a proactive human-robot joint action.
4

Alili, Samir. „Interaction décisionnelle Homme-Robot : planification de tâche pour un robot interactif en environnement humain“. PhD thesis, Université Paul Sabatier - Toulouse III, 2011. http://tel.archives-ouvertes.fr/tel-01068811.

Abstract:
This thesis addresses shared human-robot decision making from the perspective of interactive problem solving in which both the human and the robot take part. The robot and the human pursue common goals and must determine together how to achieve them (their capacities and skills being different). The questions to be addressed concern this sharing of roles, the sharing of authority during task execution (taking the initiative), and the knowledge each must exhibit so that both can play an optimal role in solving the common problem. We developed a task planner called HATP (Human Aware Task Planner), built on hierarchical task planning enriched with social rules. It produces plans that are socially acceptable, that is, plans that make the robot's actions and intentions legible. The planner can also plan for both the robot and the human while guaranteeing optimality for each of them. We also investigated a hybrid approach that combines task planning with geometric planning. This approach gives the robot control not only over the sequence of actions it produces but also over how they are carried out, which allows the human-robot interaction problem to be treated at a finer grain and on several levels.
5

Alili, Samir. „Interaction décisionnelle homme-robot : planification de tâche pour un robot interactif en environnement humain“. PhD thesis, Toulouse 3, 2011. http://thesesups.ups-tlse.fr/2663/.

Abstract:
This thesis addresses the problem of shared decision making between human and robot from the perspective of interactive problem solving involving both partners. The robot and the human share common goals and must determine together how to achieve them (the capacities and competences of each being different). The issues to be addressed concern this division of roles, the sharing of authority during the execution of a task (taking the initiative), and the knowledge to exhibit so that both can play an optimal role in solving common problems. We developed a task planner named HATP (Human Aware Task Planner), based on hierarchical task planning enriched with social rules. It can produce socially acceptable plans, that is, plans that make the robot's actions and intentions legible. The planner also has the ability to plan for both the robot and the human while ensuring optimality for each. We are also interested in a hybrid approach that combines task planning with geometric planning. This approach gives the robot control over the sequence of actions it produces, and also over how to achieve them, allowing the human-robot interaction problem to be treated at a finer grain and on several levels.
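The combination HATP is built on, hierarchical decomposition plus social rules, can be sketched roughly as plan selection under an action cost plus a social penalty. The methods, actions, and costs below are invented toy values, not HATP's actual model.

```python
# Candidate decompositions of one abstract task (illustrative only).
METHODS = {
    "serve_drink": [
        ["robot_fetch_cup", "robot_pour", "robot_hand_over"],
        ["ask_human_fetch_cup", "robot_pour", "robot_hand_over"],
    ],
}
ACTION_COST = {"robot_fetch_cup": 3, "robot_pour": 1,
               "robot_hand_over": 1, "ask_human_fetch_cup": 1}

def social_penalty(plan):
    # Example social rule: avoid delegating effort to the human partner.
    return sum(5 for a in plan if a.startswith("ask_human"))

def best_plan(task):
    """Pick the decomposition minimizing action cost + social penalty."""
    return min(METHODS[task],
               key=lambda p: sum(ACTION_COST[a] for a in p) + social_penalty(p))

print(best_plan("serve_drink"))
```

With these numbers, asking the human would be cheaper in raw action cost (3 vs 5) but loses once the social rule is charged, so the robot fetches the cup itself.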
6

Kruse, Thibault. „Planning for human robot interaction“. Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.

Abstract:
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria: it must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in cases of dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not merely to consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, it provides local replanning of transport actions based on navigational costs, to produce opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
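A directional cost function of the kind mentioned for navigation planning can be sketched as a proximity cost that grows when the robot stands in front of a person rather than behind them, so a planner prefers paths that pass behind. The exact form and the `scale` factor are illustrative assumptions, not the thesis's formulation.

```python
import math

def directional_cost(robot_xy, person_xy, person_heading, scale=2.0):
    """Social cost of a robot position relative to a person's pose."""
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return float("inf")
    # Cosine of the angle between the person's heading and the robot direction:
    ahead = (math.cos(person_heading) * dx + math.sin(person_heading) * dy) / dist
    # Base proximity cost, amplified when the robot is in front (ahead > 0).
    return (1.0 / dist) * (1.0 + scale * max(0.0, ahead))

front = directional_cost((1.0, 0.0), (0.0, 0.0), 0.0)
behind = directional_cost((-1.0, 0.0), (0.0, 0.0), 0.0)
print(front > behind)  # True
```

Summed over a candidate path, such a term biases the planner away from cutting across the space a person is walking into.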
7

Bodiroža, Saša. „Gestures in human-robot interaction“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.

Abstract:
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, or in general in human-machine interaction, as a way for a robot or a machine to infer a meaning. In order for people to use gestures intuitively and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
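One-shot gesture classification with dynamic time warping, as described above, amounts to nearest-template matching under the DTW distance: each class is represented by a single recorded example. The 1-D trajectories below are a stand-in for real multi-dimensional gesture tracks; this is a generic DTW sketch, not the thesis's optimized algorithm.

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(query, templates):
    """One-shot classification: nearest single template under DTW."""
    return min(templates, key=lambda label: dtw(query, templates[label]))

templates = {"wave": [0, 1, 0, 1, 0], "point": [0, 1, 2, 3, 4]}
print(classify([0, 1, 0, 1, 0, 1], templates))  # wave
```

Because DTW aligns sequences elastically in time, one template per class can absorb much of the speed variation between performances, which is what makes the one-shot setting workable.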
8

Ahmed, Muhammad Rehan. „Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction“. Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-13986.

Abstract:
Inspiration from biological systems suggests that robots should demonstrate the same level of capability embedded in biological systems when performing safe and successful interaction with humans. The major challenge in physical human-robot interaction tasks in anthropic environments is the safe sharing of the robot workspace, such that the robot will not cause harm or injury to the human under any operating condition. Embedding human-like adaptable compliance characteristics into robot manipulators can provide safe physical human-robot interaction in constrained motion tasks. In robotics, this property can be achieved by using active, passive, and semi-active compliant actuation devices. Traditional methods of active and passive compliance lead to complex control systems and complex mechanical design. In this thesis we present a compliant robot manipulator system with a semi-active compliant device based on a magnetorheological-fluid actuation mechanism. Human-like adaptable compliance is achieved by controlling the properties of the magnetorheological fluid inside the joint actuator. This method offers high operational accuracy, intrinsic safety, and high absorption of impacts. Safety is assured by mechanism design rather than by the conventional approach based on advanced control. Control schemes for implementing adaptable compliance are implemented in parallel with the robot motion control, which yields a much simpler interaction control strategy compared to other methods. Here we address two main issues: human-robot collision safety and robot motion performance. We present existing human-robot collision safety standards and evaluate the proposed actuation mechanism on the basis of static and dynamic collision tests. Static collision safety analysis is based on Yamada's safety criterion, and the adaptable compliance control scheme keeps the robot in the safe region of operation. For the dynamic collision safety analysis, Yamada's impact force criterion and the head injury criterion are employed. Experimental results validate the effectiveness of our solution. In addition, the results with the head injury criterion showed the need to investigate human biomechanics in more detail in order to acquire adequate knowledge for estimating the injury severity index for robots interacting with humans. We analyzed the robot motion performance in several physical human-robot interaction tasks. Three interaction scenarios are studied to simulate human-robot physical contact in direct and inadvertent contact situations. Respective control disciplines for the joint actuators are designed and implemented with a much simplified adaptable compliance control scheme. The series of experimental tests in direct and inadvertent contact situations validate our solution of implementing human-like adaptable compliance during robot motion and prove safe interaction with humans in anthropic domains.
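The idea of adaptable compliance can be illustrated, in a much simplified form, as an impedance law whose gains soften when an external torque suggests contact. The gains and threshold below are invented values, and the real system modulates a magnetorheological fluid mechanically rather than switching software gains; this sketch only shows the control-level intuition.

```python
def commanded_torque(q, dq, q_des, k, b):
    """Impedance-style joint torque: tau = k*(q_des - q) - b*dq."""
    return k * (q_des - q) - b * dq

def select_gains(tau_ext, stiff=(50.0, 5.0), soft=(5.0, 0.5), threshold=2.0):
    """Return (k, b): soft gains when external torque suggests a collision."""
    return soft if abs(tau_ext) > threshold else stiff

k, b = select_gains(tau_ext=0.5)                 # free motion: stiff joint
tau_free = commanded_torque(q=0.0, dq=0.0, q_des=0.1, k=k, b=b)
k, b = select_gains(tau_ext=4.0)                 # contact detected: go soft
tau_contact = commanded_torque(q=0.0, dq=0.0, q_des=0.1, k=k, b=b)
print(tau_free, tau_contact)  # 5.0 0.5
```

The same position error produces a tenth of the torque once the joint goes compliant, which is the mechanism by which impact forces on the human are limited.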
9

Toris, Russell C. „Bringing Human-Robot Interaction Studies Online via the Robot Management System“. Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.

Abstract:
Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to naturally and effectively interact with robots. The importance of conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies often become expensive, time consuming, and limited to constrained demographics. This work presents the Robot Management System, a novel framework for bringing robotic experiments to the web. A detailed description of the open source system, an outline of new security measures, and a use case study of the RMS as a means of conducting user studies is presented. Using a series of navigation and manipulation tasks with a PR2 robot, three user study conditions are compared: users that are co-present with the robot, users that are recruited to the university lab but control the robot from a different room, and remote web-based users. The findings show little statistical differences between usability patterns across these groups, further supporting the use of web-based crowdsourcing techniques for certain types of HRI evaluations.
10

Nitz Pettersson, Hannes, and Samuel Vikström. „VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Abstract:
The demand for robots to work in environments together with humans is growing. This calls for new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot in human interactions. The AI methods used are a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion capture dataset and on four different prediction horizons: 1/2, 1/4, 1/8, and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone for a human-robot interactive system.
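The prediction setup described above rests on windowed trajectories: a few past positions in, a position some fraction of a second ahead out. Below is a sketch of that windowing together with a constant-velocity baseline of the kind a trained LSTM or ANN would have to beat; the window length, horizon, and 1-D track are illustrative assumptions, not the thesis's data.

```python
def make_windows(track, n_past, horizon):
    """Pairs of (last n_past positions, position `horizon` steps later)."""
    samples = []
    for t in range(n_past, len(track) - horizon + 1):
        samples.append((track[t - n_past:t], track[t - 1 + horizon]))
    return samples

def constant_velocity_predict(past, horizon):
    """Extrapolate using the last observed step as a velocity estimate."""
    v = past[-1] - past[-2]
    return past[-1] + v * horizon

track = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]      # uniform 1-D motion
windows = make_windows(track, n_past=3, horizon=2)
past, target = windows[0]                        # past=[0.0, 0.1, 0.2]
pred = constant_velocity_predict(past, horizon=2)
print(round(pred, 3), target)
```

On uniform motion the baseline is exact; a learned predictor earns its keep only on the accelerating, curved motion found in real motion-capture data.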
11

Clodic, Aurelie. „Supervision pour un robot interactif: action et interaction pour un robot autonome en environnement humain“. PhD thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00196608.

Abstract:
Our work concerns the supervision of an autonomous robot in a human environment and, more specifically, taking human-robot interaction into account at the decisional level. In this setting, the human and the robot form a system in which they share a common space and exchange information through different modalities. Interaction can occur either upon an explicit request from the human, or because the robot has judged it useful and has taken the initiative. In both cases the robot must act to satisfy a goal while explicitly taking into account the presence and preferences of its human partner. To this end, we designed and developed a supervision system named Shary that manages both individual tasks (involving only the robot) and joint tasks (involving the robot and another agent). The system consists of a task-execution mechanism that handles, on the one hand, the communication required within a task and, on the other, the refinement of the task into subtasks. Each task is defined by a communication plan and a realization plan, each with associated monitors. We developed this system on the robot Rackham for a museum-guide task, and also used it for a "bring something to someone" task on the robot Jido to show the system's genericity.
12

Clodic, Aurélie. „Supervision pour un robot interactif : action et interaction pour un robot autonome en environnement humain“. Toulouse 3, 2007. http://www.theses.fr/2007TOU30248.

Abstract:
Human-robot collaborative task achievement requires specific task supervision and execution. In order to close the loop with their human partners, robots must maintain an interaction stream to communicate their own intentions and beliefs and to monitor the activity of their human partner. In this work we introduce SHARY, a supervisor dedicated to collaborative task achievement in the human-robot interaction context. The system deals on one part with task refinement and on the other part with the communication needed in the human-robot interaction context. To this end, each task is defined at a communication level and at an execution level. This system was developed on the robot Rackham for a tour-guide demonstration and was then used on the robot Jido for a fetch-and-carry task to demonstrate the system's genericity.
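The pairing of a communication level with an execution level per task can be sketched as a task that only advances when the partner has acknowledged the corresponding communicative act, with a monitor suspending execution otherwise. The structure and names below are illustrative, not Shary's actual implementation.

```python
def run_task(steps, acknowledged):
    """steps: (communicative act, execution act) pairs;
    acknowledged: communication acts the human partner has confirmed."""
    done = []
    for comm, act in steps:
        if comm not in acknowledged:     # monitor fails: suspend the task
            return done, "suspended_at:" + comm
        done.append(act)                 # communication ok: execute the step
    return done, "achieved"

steps = [("announce_goal", "goto_exhibit"),
         ("check_following", "present_exhibit")]
print(run_task(steps, {"announce_goal", "check_following"}))
print(run_task(steps, {"announce_goal"}))
```

The second call shows the supervision idea: when the visitor stops following, the tour task suspends at that communicative step rather than blindly executing the rest.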
APA, Harvard, Vancouver, ISO, and other citation styles
13

Puehn, Christian G. „Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427889195.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
14

Topp, Elin Anna. „Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping“. Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
15

Sunardi, Mathias I. „Expressive Motion Synthesis for Robot Actors in Robot Theatre“. PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/720.

The full text of the source
Annotation:
Personal and entertainment robots have lately become more and more common. In this thesis, the application of entertainment robots in the context of a Robot Theatre is studied. Specifically, the thesis focuses on the synthesis of expressive movements or animations for the robot performers (Robot Actors). A novel paradigm that emerged from computer animation is to represent motion data as a set of signals. Thus, preprogrammed motion data can be quickly modified using common signal processing techniques such as multiresolution filtering and spectral analysis. However, manual adjustment of the filtering and spectral method parameters, and good artistic skills, are still required to obtain the desired expressions in the resulting animation. Music contains timing, timbre and rhythm information which humans can translate into affect and express through movement dynamics, such as in dancing. Music data is therefore assumed to contain affective information which can be expressed in the movements of a robot. In this thesis, music data is used as an input signal to generate motion data (Dance) and to modify a sequence of pre-programmed motion data (Scenario) for a custom-made Lynxmotion robot and a KHR-1 robot, respectively. The music data in MIDI format is parsed for timing and melodic information, which are then mapped to joint angle values. Surveys were done to validate the usefulness and contribution of music signals in adding expressiveness to the movements of a robot for the Robot Theatre application.
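The MIDI-to-joint-angle mapping described above can be illustrated with a minimal sketch; the linear mapping, angle range, and event format are assumptions for illustration, not details taken from the thesis:

```python
def pitch_to_angle(midi_pitch, angle_min=0.0, angle_max=180.0):
    """Map a MIDI pitch (0-127) linearly onto a joint angle in degrees."""
    return angle_min + (midi_pitch / 127.0) * (angle_max - angle_min)

def note_events_to_keyframes(events):
    """Turn (time_s, pitch) MIDI note-on events into (time_s, angle) keyframes."""
    return [(t, pitch_to_angle(p)) for t, p in events]
```

A real pipeline would read the note events from a parsed MIDI file and interpolate between keyframes at the servo update rate.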
APA, Harvard, Vancouver, ISO, and other citation styles
16

Huang, Chien-Ming. „Joint attention in human-robot interaction“. Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

The full text of the source
Annotation:
Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn a lot of attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responder has changed their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability of a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as infants age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see whether the caregiver is attending to it. We conducted two experiments to investigate joint attention in human-robot interaction.
The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
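The three-part computational model could be decomposed roughly as follows; the function names and the sampling scheme are illustrative assumptions, not the thesis's implementation:

```python
def respond_to_joint_attention(partner_gaze_target):
    """Responding: follow the partner's gaze and adopt their focus of attention."""
    return partner_gaze_target

def initiate_joint_attention(target):
    """Initiating: direct the partner's attention to a target of interest."""
    return {"action": "point", "target": target}

def ensure_joint_attention(partner_attention_samples):
    """Ensuring: alternate gaze between partner and target, checking whether the
    partner attends; return the number of checks needed, or None if never reached."""
    for i, attending in enumerate(partner_attention_samples):
        if attending:
            return i + 1
    return None
```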
APA, Harvard, Vancouver, ISO, and other citation styles
17

Bremner, Paul. „Conversational gestures in human-robot interaction“. Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.

The full text of the source
Annotation:
Humanoid service robotics is a rapidly developing field of research. One desired purpose of such service robots is for them to be able to interact and cooperate with people. In order to do so successfully they need to communicate effectively. One way of achieving this is for humanoid robots to communicate in a human-like way, resulting in easier, more familiar and ultimately more successful human-robot interaction. An integral part of human communication is co-verbal gesture; thus, this thesis investigates a means of producing such gestures and whether they engender the desired effects. In order for gestures to be produced using BERTI (Bristol and Elumotion Robotic Torso I), the robot designed and built for this work, a means of coordinating the joints to produce the required hand motions was necessary. A relatively simple method for doing so is proposed, which produces motion sharing characteristics with proposed mathematical models of human arm movement, i.e., smooth and direct motion. It was then investigated whether, as hypothesised, gestures produced using this method were recognisable and positively perceived by users. A series of user studies showed that the gestures were indeed as recognisable as their human counterparts, and positively perceived. In order to enable users to form more confident opinions of the gestures, to investigate whether improvements in human-likeness would affect user perceptions, and to enable investigation into the effects of robotic gestures on listener behaviour, methods for producing gesture sequences were developed. Sufficient procedural information for gesture production was not present in the anthropological literature, so empirical evidence was sought from monologue performances.
This resulted in a novel set of rules for the production of beat gestures (a key type of co-verbal gesture), as well as some other important procedural methods; these were used to produce a two-minute monologue with accompanying gestures. A user study carried out using this monologue reinforced the previous finding that positively perceived gestures were produced. It also showed that gesture sequences using beat gestures generated with the rules were not significantly preferable to those containing only naively selected pre-scripted beat gestures. This demonstrated that minor improvements in human-likeness offered no significant benefit in user perception. Gestures have been shown to have positive effects on listener engagement and memory (of the accompanied speech) in anthropological studies. In this thesis the hypothesis that similar effects would be observed when BERTI performed co-verbal gestures was investigated. It was found that there was a highly significant improvement in user engagement, as well as a significant improvement in certainty of recalled data. Thus, some of the expected effects of co-verbal gesture were observed.
APA, Harvard, Vancouver, ISO, and other citation styles
18

Fiore, Michelangelo. „Decision Making in Human-Robot Interaction“. Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.

The full text of the source
Annotation:
There is growing interest today in robots able to carry out collaborative activities in a natural and efficient way. We have developed an architecture and a system addressing the decisional aspects of this problem, and have applied this architecture to three different problems: the robot observer, the robot coworker, and the robot teacher. In this thesis we discuss the challenges and issues of human-robot cooperation, then describe the architecture we developed, and finally detail its implementation and the algorithms specific to each scenario. In the robot observer scenario, the robot maintains an up-to-date world state by means of geometric reasoning over perception data, producing a symbolic description of the state of the world and of the agents present. We also show, based on a reasoning system combining Markov Decision Processes (MDPs) and Bayesian networks, how the robot can infer the intentions and future actions of its human partners from observing their motions relative to objects in the environment. We identify two types of proactive behavior: correcting the human's beliefs by providing the relevant information that will allow him to achieve his goal, and physically helping the person to carry out the task once the robot has identified it. In the robot coworker case, the robot must accomplish a task in cooperation with a human partner. We introduce a planner named Human-Aware Task Planner and detail how our system manages the shared plan through a component called the Plan Management component. With this system, the robot can collaborate with humans in three different modalities: robot leader, human leader, or equal partners.
We discuss the functions that allow the robot to follow the actions of its human partner and to check whether they are compatible with the shared plan, and we show how the robot can produce safe behaviors that accomplish the task while explicitly taking into account the human's presence, actions, and preferences. The approach is based on hierarchical Markov decision processes with mixed observability, and makes it possible to estimate the human's engagement and react accordingly at different levels of abstraction. Finally, we discuss a prospective approach based on a probabilistic multi-agent planner using MDPs, and its relevance to improving the shared-plan management component. In the robot teacher scenario, we detail the decision processes that allow the robot to adapt the shared plan to the knowledge state and wishes of its human partner. Depending on the case, the robot gives more or less detail about the plan and adapts its behavior to the human's knowledge; a user study was also conducted to validate the relevance of this approach. Finally, we present the implementation of an autonomous guide robot and detail the decision processes we integrated into it so that it can guide travelers in an airport hall while adapting as well as possible to the context and wishes of the people being guided. In this context we illustrate adaptive and proactive behaviors. The system was integrated on the Spencer robot, which was deployed in the main terminal of Amsterdam's Schiphol airport. The robot operated robustly and satisfactorily, and here too a user study made it possible to measure performance and validate the system.
There has been increasing interest in recent years in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for Human-Robot Interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After quickly giving an overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems. In the observer problem the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal. In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand whether they are still following the shared plan.
After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions by continuously estimating the robot's partner's activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables such as the human's commitment to the task and to react accordingly, splitting the decision process into different levels. We present an example of a Collaborative Planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component. In the robot teacher problem we explain how we can adapt the system's plan explanation and monitoring to users' knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going more in-depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity. Finally, we present a case study of a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help. This system was integrated with other components to deploy a robot in Amsterdam's Schiphol Airport, guiding groups of passengers to their flight gates. We performed user studies both in a laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
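The intention-inference step, fusing motion cues into a belief over human goals, can be reduced to a single Bayesian update. A minimal sketch with invented goal names and likelihood values, not taken from the thesis:

```python
def update_intention_beliefs(prior, likelihoods):
    """One Bayesian update of the belief over human intentions given an
    observed motion cue: posterior = prior * likelihood, renormalized.
    prior: dict goal -> probability; likelihoods: dict goal -> P(obs | goal)."""
    unnorm = {goal: prior[goal] * likelihoods[goal] for goal in prior}
    total = sum(unnorm.values())
    return {goal: p / total for goal, p in unnorm.items()}
```

Repeating this update as the human moves concentrates probability mass on the goal most consistent with the observed trajectory.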
APA, Harvard, Vancouver, ISO, and other citation styles
19

Alanenpää, Madelene. „Gaze detection in human-robot interaction“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

The full text of the source
Annotation:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above from the first-person video stream and discriminate whether the gaze of the person fell on one of the objects of importance and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained program was used to extract the relevant objects for each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
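The gaze-object association step, checking per frame whether the gaze point falls inside a detected bounding box and accumulating dwell time, can be sketched as follows; the data layout and frame rate are assumptions for illustration, not the thesis's actual formats:

```python
def gaze_dwell_times(frames, fps=30.0):
    """Accumulate per-object gaze durations in seconds.
    frames: list of (gaze_xy, detections), where detections maps an object
    label to its (x1, y1, x2, y2) bounding box for that frame."""
    def inside(pt, box):
        x, y = pt
        x1, y1, x2, y2 = box
        return x1 <= x <= x2 and y1 <= y <= y2

    dwell = {}
    for gaze, detections in frames:
        for label, box in detections.items():
            if inside(gaze, box):
                dwell[label] = dwell.get(label, 0.0) + 1.0 / fps
                break  # attribute each frame to at most one object
    return dwell
```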
APA, Harvard, Vancouver, ISO, and other citation styles
20

Almeida, Luís Miguel Martins. „Human-robot interaction for object transfer“. Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.

The full text of the source
Annotation:
Master's degree in Mechanical Engineering
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction, an object transfer task. Firstly, this work presents a review of current research within the field of Human-Robot Interaction, where two approaches are distinguished but simultaneously required: a pre-contact approximation and an interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) the robot control to perform the movements inherent in the transfer assignment, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, some experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements and extensions are required to improve the solution's performance and range.
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This dissertation aims to develop a solution that enables a simple case of physical human-robot interaction, an object transfer task. Initially, this work presents a review of current research in the field of human-robot interaction, where two approaches are distinguishable but simultaneously necessary: a pre-contact approach and a post-contact interaction. Following this line of thought, to achieve the proposed objectives, this dissertation seeks to answer three major problems: (1) controlling the robot so that it performs the movements inherent in the transfer task, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force sensors are explored with the aim of preparing the robot for the transfer and of controlling the actions of the robotic gripper, respectively. The software architecture development is supported by the Robot Operating System (ROS) framework. Finally, some experimental tests are carried out to validate the proposed solutions and to evaluate the system's performance. A possible object transfer is achieved, even if some refinements, improvements and extensions are needed to improve the performance and scope of the solution.
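As a purely hypothetical sketch of how force sensing might gate the gripper release during a handover (the threshold, sample count, and function name are invented; the thesis's actual ROS-based control logic is not reproduced here):

```python
def should_release(grip_force_history, pull_threshold=2.0, n_consistent=3):
    """Release the object once the measured pull force (in newtons) exceeds a
    threshold for several consecutive samples, debouncing sensor noise."""
    recent = grip_force_history[-n_consistent:]
    return len(recent) == n_consistent and all(f > pull_threshold for f in recent)
```

Requiring several consecutive over-threshold samples avoids dropping the object on a single noisy force reading.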
APA, Harvard, Vancouver, ISO, and other citation styles
21

Khambhaita, Harmish. „Human-aware space sharing and navigation for an interactive robot“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30399.

The full text of the source
Annotation:
Robot motion-planning methods have developed at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer, and faster to react to unpredictable situations. As a result, we are witnessing more and more service robots introduced into our daily lives, particularly in public places such as museums, shopping malls and airports. While a mobile service robot moves in a human environment, it is important to take into account the effect of its behavior on the people it passes or interacts with. We do not see them as mere machines but as social agents, and we expect them to behave in a human-like way, following societal norms as rules. This has created new challenges and opened new research directions for designing robot control algorithms that deliver acceptable, legible, and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for robot trajectory planning and navigation with built-in social constraints to keep robot motions careful, human-aware, and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot trajectory as a series of poses and the time difference between those poses) which can be deformed (both in space and time) by the optimization process to respect the given constraints.
Moreover, the robot also predicts plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that occurs when humans and robots cross each other's paths. We carried out a set of experiments with human-robot interactive situations that occur in everyday life, such as crossing a corridor, passing through a door, and crossing paths in large open spaces. The proposed cooperative planning method compares favorably against other state-of-the-art navigation-planning schemes. We augmented the robot's navigation behavior with synchronized and responsive movements of its head, making the robot look where it is going and occasionally divert its gaze toward nearby people to show that it will avoid any possible collision with them, as planned by the planner. At any given moment, the robot weighs multiple criteria according to the social context and decides where it should direct its gaze. Through an online user study, we showed that this gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we integrated our navigation scheme with a broader supervision system that can jointly generate standard robot behaviors such as approaching a person and adapting the robot's speed to the group of people the robot guides in airport or museum scenarios.
The methods of robotic motion planning have grown at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer and faster to react to unpredictable situations. As a result we are witnessing more and more service robots introduced into our everyday lives, especially in public places such as museums, shopping malls and airports. While a mobile service robot moves in a human environment, its demeanor leaves a lasting impression on the people around it. We do not see such robots as mere machines but as social agents, and expect them to behave humanly by following societal norms and rules. This has created new challenges and opened new research avenues for designing robot control algorithms that deliver human-acceptable, legible and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for trajectory planning and navigation with in-built social constraints for keeping robot motions safe, human-aware and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot path as a series of poses and the time difference between those poses) which can be deformed (both in space and time) by the optimization process to respect the given constraints. Moreover, we also predict plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that happens when humans and the robot cross each other's paths. We carried out a set of experiments with canonical human-robot interactive situations that happen in our everyday lives, such as crossing a hallway, passing through a door and intersecting paths in wide open spaces.
The proposed cooperative planning method compares favorably against other state-of-the-art human-aware navigation planning schemes. We have augmented the robot's navigation behavior with synchronized and responsive movements of the robot head, making the robot look where it is going and occasionally divert its gaze towards nearby people to acknowledge that the robot will avoid any possible collision with them. At any given moment the robot weighs multiple criteria according to the social context and decides where it should turn its gaze. Through an online user study we have shown that such a gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we have integrated our navigation scheme with a broader supervision system which can jointly generate normative robot behaviors such as approaching a person and adapting the robot's speed to the group of people whom the robot guides in airports or museums.
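A toy version of one elastic-band deformation step, contraction toward neighbouring poses plus repulsion from nearby obstacles, can illustrate the idea; the gain values are invented, and the sketch is spatial only (the thesis's optimizer also deforms the band in time and handles social constraints, which this sketch omits):

```python
import math

def elastic_band_step(path, obstacles, contraction=0.5, repulsion=0.2, radius=1.0):
    """One deformation step over a 2D path: pull each interior waypoint toward
    the midpoint of its neighbours, then push it away from obstacles within
    the influence radius. Endpoints stay fixed."""
    new_path = [path[0]]
    for i in range(1, len(path) - 1):
        x, y = path[i]
        # internal contraction toward the midpoint of the two neighbours
        mx = (path[i - 1][0] + path[i + 1][0]) / 2.0
        my = (path[i - 1][1] + path[i + 1][1]) / 2.0
        x += contraction * (mx - x)
        y += contraction * (my - y)
        # external repulsion from each obstacle inside the influence radius
        for ox, oy in obstacles:
            d = math.hypot(x - ox, y - oy)
            if 1e-9 < d < radius:
                x += repulsion * (x - ox) / d
                y += repulsion * (y - oy) / d
        new_path.append((x, y))
    new_path.append(path[-1])
    return new_path
```

Iterating this step smooths the band while keeping it clear of obstacles; a full planner would add time-differences between poses and social-constraint terms to the same update.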
APA, Harvard, Vancouver, ISO, and other citation styles
22

Collins, E. C. „Towards robot-assisted therapy : identifying mechanisms of effect in human-biomimetic robot interaction“. Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/16680/.

The full text of the source
Annotation:
This thesis provides a framework for understanding human-robot relationships based on human-other bonds. It focuses on human-animal interactions and the positive effects of Animal-Assisted Therapy (AAT), proposing that Robot-Assisted Therapy (RAT), with biomimetic robots, could benefit from a better understanding of the mechanisms of effect driving positive AAT outcomes. In sum, interactions with biomimetic robots could provide benefits mechanistically comparable to those provided by AAT animals, independent of individual differences in personality or culture. Evidence is provided showing that interaction with a PARO therapeutic robot led to a positive change in users' well-being, measured via Felt Security (FS). Intimate interactions with PARO, such as stroking the unit, produced greater increases in user FS, independent of individual differences in caregiving and attachment styles. The Felt Security Scale (FSS) was translated into Japanese creating the JFSS. This was used in a cross-cultural study (Japan/UK) which demonstrated that the biomimetic robot MIRO does not have to display predictable behaviour in order to have a positive impact on a user's FS. These results were found in both the UK and Japan despite the different culturally-driven expectations of robot-acceptance in the two countries. Although an interaction with both PARO and MIRO increased user FS, these scores were significantly higher when interacting with PARO.
APA, Harvard, Vancouver, ISO, and other citation styles
23

Kaupp, Tobias. „Probabilistic Human-Robot Information Fusion“. Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.

The full text of the source
Annotation:
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. 
Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
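The query-on-demand behaviour described above, in which the operator is asked for input only when the expected benefit of an observation outweighs the cost of obtaining it, can be sketched as a discrete Bayesian fusion loop. This is an illustrative reconstruction, not the thesis's implementation: the helper names (`fuse`, `entropy`, `should_query_human`), the class counts, and all likelihood values are hypothetical, and entropy is used here as a simple stand-in for the expected benefit of a query.

```python
import math

def normalise(b):
    s = sum(b)
    return [x / s for x in b]

def fuse(belief, likelihood):
    # Bayesian update: posterior is proportional to prior times likelihood.
    return normalise([p * l for p, l in zip(belief, likelihood)])

def entropy(belief):
    return -sum(p * math.log(p) for p in belief if p > 0)

def should_query_human(belief, query_cost):
    # Ask the operator only when the uncertainty in the robots' belief
    # (measured here as entropy) outweighs the cost of obtaining input.
    return entropy(belief) > query_cost

belief = normalise([1.0, 1.0, 1.0])        # uniform prior over 3 classes
belief = fuse(belief, [0.45, 0.45, 0.10])  # ambiguous robot observation
if should_query_human(belief, query_cost=0.5):
    # The operator's answer enters through the same machinery, translated
    # into a likelihood over the same classes by a "human sensor model".
    belief = fuse(belief, [0.90, 0.05, 0.05])
```

Because human and robot observations pass through the same update, the two sources are genuinely peers in the fused representation.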
APA, Harvard, Vancouver, ISO and other citation styles
25

Jou, Yung-Tsan. „Human-Robot Interactive Control“. Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1082060744.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
26

Burke, Michael Glen. „Fast upper body pose estimation for human-robot interaction“. Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/256305.

The full text of the source
Annotation:
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. 
This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
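The mean-reverting behaviour of an Ornstein-Uhlenbeck component, which pulls the latent pose back toward a commonly observed pose, can be illustrated with a discrete Euler-Maruyama step. This is a single-component sketch only (the thesis uses a mixture of such processes trained in a reduced-dimensional subspace), and all parameter values are hypothetical.

```python
import math
import random

def ou_step(x, mean, theta=0.5, sigma=0.1, dt=0.1):
    # One Euler-Maruyama step of an Ornstein-Uhlenbeck process: the state
    # drifts toward `mean` (a commonly observed pose in the reduced
    # subspace) at rate theta, perturbed by Gaussian noise.
    return [xi + theta * (mi - xi) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            for xi, mi in zip(x, mean)]

random.seed(0)
pose = [2.0, -2.0]                  # start far from the preferred pose
for _ in range(200):
    pose = ou_step(pose, [0.0, 0.0])
# After many steps the walk hovers near the mean pose rather than
# diffusing away, which is what regularises the tracker's estimates
# when only a subset of joints is detected.
```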
APA, Harvard, Vancouver, ISO and other citation styles
27

Fonooni, Benjamin. „Cognitive Interactive Robot Learning“. Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-97422.

The full text of the source
Annotation:
Building general purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce them under any circumstances, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable ones are the ability to direct attention to important aspects of demonstrations, and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions, and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproduction of high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. 
The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is methods to resolve ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer intentions correctly. The provided solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual servoing guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real world scenarios that require clear understanding of intentions in the demonstrations. Finally, the developed learning methods are compared, and conditions where each of them has better applicability are discussed.
APA, Harvard, Vancouver, ISO and other citation styles
28

McBean, John M. (John Michael) 1979. „Design and control of a voice coil actuated robot arm for human-robot interaction“. Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17951.

The full text of the source
Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004.
Includes bibliographical references (leaf 68).
The growing field of human-robot interaction (HRI) demands robots that move fluidly, gracefully, compliantly and safely. This thesis describes recent work in the design and evaluation of long-travel voice coil actuators (VCAs) for use in robots intended for interacting with people. The basic advantages and shortcomings of electromagnetic actuators are discussed and evaluated in the context of human-robot interaction, and are compared to alternative actuation technologies. Voice coil actuators have been chosen for their controllability, ease of implementation, geometry, compliance, biomimetic actuation characteristics, safety, quietness, and high power density. Several VCAs were designed, constructed, and tested, and a 4 Degree of Freedom (DOF) robotic arm was built as a test platform for the actuators themselves, and the control systems used to drive them. Several control systems were developed and implemented that, when used with the actuators, enable smooth, fast, life-like motion.
by John M. McBean.
S.M.
APA, Harvard, Vancouver, ISO and other citation styles
29

Kuo, I.-Han. „Designing Human-Robot Interaction for service applications“. Thesis, University of Auckland, 2012. http://hdl.handle.net/2292/19438.

The full text of the source
Annotation:
As service robots are intended to serve at close range and cater to the needs of human users, Human-Robot Interaction (HRI) has been identified as one of the most difficult and critical challenges in research for the success of service robotics. In particular, HRI requires highly complex software integration to enable a robot to communicate in a manner that is natural and intuitive to the human user. An initial service robot prototype was developed by integrating several existing research projects at the University of Auckland and deployed in a user study. The result showed the need for more HRI abilities to interactively engage the user and perform task-specific interactions. To meet these requirements and address the associated software integration issues, I proposed a design methodology which guides HRI designers from design to implementation. In the methodology, Unified Modelling Language (UML) and an extension, UMLi, were used for modelling a robot's interactive behaviour and communicating the interaction designs within a multidisciplinary group. Notably, new design patterns for HRI were proposed to facilitate communication of necessary cues that a robot needs to perceive, or express, during an interaction. The methodology also emphasises an iterative process to discover and design around limitations of existing software technologies. In addition, a component-based development approach was adapted to further help HRI designers in handling the complexity in software integration by modularising the robot's functionalities. As a case study, I applied this methodology to implement a second prototype, Charlie. In a user study with older people (65+) in an aged care facility in New Zealand, the robot was able to detect and recognise a human user in 59 percent of the interactions that happened. Over the two-week period of the study, the robot ran robustly for six hours daily and provided assistance in measuring a range of vital signs. 
The proposed interaction patterns were also validated for future reuse. The results indicate the validity of the methodology in developing robust and interactive service applications in real world environments.
APA, Harvard, Vancouver, ISO and other citation styles
30

Ponsler, Brett. „Recognizing Engagement Behaviors in Human-Robot Interaction“. Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.

The full text of the source
Annotation:
Based on analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction which includes the concept of connection events, consisting of: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
APA, Harvard, Vancouver, ISO and other citation styles
31

Holroyd, Aaron. „Generating Engagement Behaviors in Human-Robot Interaction“. Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.

The full text of the source
Annotation:
Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement recognition module. This thesis also describes a Behavior Markup Language (BML) realizer that has been developed for use in robotic applications. Instead of the existing fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot independent, open-source and uses the Robot Operating System (ROS).
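The event-driven idea behind the realizer, letting each behaviour wait at a synchronisation point until its partners arrive instead of relying on fixed timings, can be sketched with a minimal Petri net. This is a toy reconstruction under assumed names (the `PetriNet` class and the place/transition labels are invented here), not the module's actual code.

```python
class PetriNet:
    def __init__(self):
        self.tokens = {}       # place name -> token count
        self.transitions = {}  # transition name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def put(self, place, n=1):
        self.tokens[place] = self.tokens.get(place, 0) + n

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.tokens.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        # A transition fires only when every input place holds a token,
        # i.e. every behaviour has reached its sync point.
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.tokens[p] -= 1
        for p in outputs:
            self.put(p)
        return True

# Synchronise a pointing gesture with an utterance: the combined behaviour
# proceeds only after both reach their sync events, whenever they arrive.
net = PetriNet()
net.add_transition("sync", ["gesture_ready", "speech_ready"], ["go"])
net.put("gesture_ready")      # gesture event arrives first
assert not net.fire("sync")   # speech not ready yet -> transition blocked
net.put("speech_ready")       # speech event arrives (timing unpredictable)
assert net.fire("sync")
```

The same pattern absorbs the unpredictable variability of robot motor systems: a late behaviour simply delays the token, not the correctness of the synchronisation.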
APA, Harvard, Vancouver, ISO and other citation styles
32

Marín, Urías Luis Felipe. „Reasoning about space for human-robot interaction“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.

The full text of the source
Annotation:
Human Robot Interaction is a research area that has grown exponentially in recent years. This fact brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: - Motion planning for human-robot interaction: the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks. - Face-to-face human-robot interaction: the robot uses perspective taking of the human as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
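In its simplest geometric form, visual perspective taking reduces to asking whether a target falls inside the other agent's field of view. The 2D sketch below is a deliberate simplification (no occlusion or range handling, and the function and parameter names are hypothetical) of that basic test.

```python
import math

def can_see(observer_xy, heading_deg, target_xy, fov_deg=120.0):
    # "Perspective taking" reduced to its most basic geometric test:
    # is the target inside the observer's horizontal field of view?
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(bearing) <= fov_deg / 2.0
```

A full system would additionally check occlusion (can a ray from the observer reach the target?) and distance, but the angular test already lets a robot reason about what the human can and cannot see.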
APA, Harvard, Vancouver, ISO and other citation styles
33

Valibeik, Salman. „Human robot interaction in a crowded environment“. Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5677.

The full text of the source
Annotation:
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision based human robot interaction is a major component of HRI, with which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting the navigation commands. To this end, it is necessary to associate the gesture to the correct person and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse level understanding about a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate if people present are engaged with each other or their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. 
For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb or, if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, it can analyse further to understand if any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. For improving system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
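The multi-cue Bayesian combination can be illustrated with a naive-Bayes style fusion in which each visual cue contributes a likelihood ratio toward the hypothesis that a given person is the commanding one. This is a schematic sketch only: the function name and all likelihood values are invented here, and the thesis's framework additionally adapts the network through contextual feedback.

```python
def posterior_commanding(prior, cue_likelihoods):
    # Combine independent visual cues in a naive-Bayes fashion:
    # P(commanding | cues) is proportional to
    # P(commanding) * product of P(cue | commanding) over all cues.
    num = prior
    den = 1.0 - prior
    for p_cue_given_cmd, p_cue_given_not in cue_likelihoods:
        num *= p_cue_given_cmd
        den *= p_cue_given_not
    return num / (num + den)

# Person A is waving and facing the robot; person B is talking to
# someone else. Each cue is a pair (P(cue|commanding), P(cue|not)).
cues_a = [(0.8, 0.2), (0.7, 0.4)]
cues_b = [(0.3, 0.6), (0.2, 0.7)]
p_a = posterior_commanding(0.5, cues_a)
p_b = posterior_commanding(0.5, cues_b)
```

With these made-up numbers the fused posterior clearly favours person A as the commanding person, which is the kind of reasoning needed to attach a gesture to the correct individual in a crowd.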
APA, Harvard, Vancouver, ISO and other citation styles
34

Bussy, Antoine. „Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde“. Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20090/document.

The full text of the source
Annotation:
Robots are very close to arriving in our homes. But before doing so, they must master physical interaction with humans in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global motion primitives model that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies. Finally, we expose several potential extensions to our work: self-stabilization of a humanoid through physical interaction, generalization of the motion primitives model to other collaborative tasks, and the addition of vision to haptic joint actions.
APA, Harvard, Vancouver, ISO and other citation styles
35

CAPELLI, BEATRICE. „Controllo di Sistemi Multi-Robot e Interazione Uomo-Multi-Robot“. Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1270798.

The full text of the source
Annotation:
Multi-robot systems are rapidly improving and thanks to technological innovations they will become part of our everyday life. This near future will need new control strategies for these systems to enable them to efficiently complete their tasks, and to interact with human operators. In this thesis we proposed new control strategies to take multi-robot systems one step forward towards their applications in real world and their common usage. First of all, we introduced a novel strategy for connectivity maintenance that can efficiently mediate between the task of the system and the need of communication among the agents of the system. In fact, most of the time a multi-robot system needs communication among its agents in order to work properly, hence a proper controller should consider this aspect. The proposed methodology is flexible, both in terms of type of tasks that can be achieved and of communication network, which can change during the task, always ensuring connectivity. One of the most useful features of multi-robot systems is their intrinsic distributed nature, which enables them with the ability of simultaneously monitor different areas, or measure different quantities of interest. This ability is usually used in coverage tasks, where the robots are deployed in a large area that could not be monitored, or measured, efficiently by a single robot or by human operators. Coverage is a well-known and well studied problem for multi-robot systems, but it is usually studied under mild assumptions. Hence, we proposed a distributed coverage control law that considers the limited sensing range of robots and does not rely on a communication network. Another challenging aspect of deploying multi-robot systems in real world is to provide them stable control laws. In this aspect, we introduced a novel methodology that enables a multi-robot systems to achieve its task in a stable manner, minimally changing the primary task. 
This method allows us to implement, in a safe manner, control laws that may lead to instability, and eventually to deal with delays and with interaction with unknown environments. Finally, the topic of human-multi-robot interaction is addressed. In fact, in order to deploy a multitude of robots alongside operators, we need to investigate how robots can exchange information with humans. We focus on the ability of a multi-robot system to communicate its intention to a human operator. This ability is defined as legibility. We study whether these systems can implicitly communicate their objective, both in terms of spatial goals and of coordination objectives. Furthermore, we characterize how different characteristics of the control laws can affect legibility.
36

Paléologue, Victor. „Teaching Robots Behaviors Using Spoken Language in Rich and Open Scenarios“. Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS458.

Social robots such as Pepper are already present "in the wild". Their behaviours are adapted to each use case by experts. Allowing the general public to teach them new behaviours could lead to better adaptation at a lower cost. In this thesis we study a cognitive system and robotic behaviours that allow home users of Pepper to compose new behaviours from existing ones, through spoken language. Homes are open worlds that cannot be predetermined. Pepper must therefore, in addition to learning new behaviours, be able to discover its environment and make itself useful or entertaining: this is a rich scenario. The teaching of behaviours that we demonstrate thus takes place under these unique conditions: through spoken language alone, in rich and open scenarios, and on a standard Pepper robot. Thanks to automatic speech transcription and natural language processing, our system recognizes teachings of behaviours that we had not predetermined. The new behaviours can invoke entities learned in other contexts, accepting them as parameters. Through experiments of increasing complexity, we show that conflicts between behaviours arise in rich scenarios, and we propose to resolve them using task planning and priority rules. Our results rely on qualitative and quantitative methods and highlight the limitations of our solution, as well as the new applications it makes possible.
Social robots like Pepper are already found "in the wild". Their behaviors must be adapted for each use case by experts. Enabling the general public to teach new behaviors to robots may lead to better adaptation at a lesser cost. In this thesis, we study a cognitive system and a set of robotic behaviors allowing home users of Pepper robots to teach new behaviors as a composition of existing behaviors, using solely the spoken language. Homes are open worlds and are unpredictable. In open scenarios, a home social robot should learn about its environment. The purpose of such a robot is not restricted to learning new behaviors or learning about the environment: it should provide entertainment or utility, and therefore support rich scenarios. We demonstrate the teaching of behaviors in these unique conditions: the teaching is achieved through spoken language on Pepper robots deployed in homes, with no extra device and using the robot's standard system, in a rich and open scenario. Using automatic speech transcription and natural language processing, our system recognizes unpredicted teachings of new behaviors, as well as explicit requests to perform them. The new behaviors may invoke existing behaviors parametrized with objects learned in other contexts, and may themselves be defined as parametric. Through experiments of growing complexity, we show conflicts between behaviors in rich scenarios, and propose a solution based on symbolic task planning and prioritization rules to resolve them. The results rely on qualitative and quantitative analysis and highlight the limitations of our solution, but also the new applications it enables.
37

Walters, Michael L. „The design space for robot appearance and behaviour for social robot companions“. Thesis, University of Hertfordshire, 2008. http://hdl.handle.net/2299/1806.

To facilitate necessary task-based interactions and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control the distance of people and objects in their vicinity. An understanding of human-robot proxemics and the associated non-verbal social behaviour is crucial for humans to accept robots as domestic companions or servants. Therefore, this thesis addressed the following hypothesis: attributes of robot appearance, behaviour, task context and situation will affect the distances that people will find comfortable between themselves and a robot. Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies of comfortable approach distances, with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space, and there were systematic differences in participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robot's overall appearance and voice style. To investigate these issues further it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, and trial data collection and analytical methods. An exploratory VHRI trial then investigated human perceptions of and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when the behaviour is not consistent with the robot's appearance.
A live HRI experiment finally confirmed and extended these previous findings: multiple factors significantly affected participants' preferences for robot-to-human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a short mechanoid robot, and it was suggested that this may be due to participants' internal or demographic factors. Participants' preferences for robot height and appearance were both found to have significant effects on their preferences for comfortable robot-to-human approach distances in the live trials, irrespective of the robot type they actually encountered. The thesis confirms for mechanoid and humanoid robots results that have previously been found in the domain of human-computer interaction (cf. Reeves & Nass (1996)): people seem to automatically treat interactive artefacts socially. An original empirical human-robot proxemic framework is proposed in which the experimental findings of the study can be unified in the wider context of human-robot proxemics. This is seen as a necessary first step towards the desired end goal of creating and implementing a working robot proxemic system which allows the robot to: a) exhibit socially acceptable spatial behaviour when interacting with humans, and b) interpret and gain additional valuable insight into a range of HRI situations from the relative proxemic behaviour of humans in the immediate area. Future work concludes the thesis.
38

Beer, Jenay M. „Understanding older adults' perceptions of usefulness of an assistive home robot“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50404.

Developing robots that are useful to older adults involves more than simply creating robots that complete household tasks. To ensure that older adults perceive a robot as useful, careful consideration of the users' capabilities, the robot's autonomy, and the task is needed (Venkatesh & Davis, 2000). The purpose of this study was to investigate the construct of perceived usefulness within the context of robot assistance. Mobile older adults (N = 12) and older adults with mobility loss (N = 12) participated in an autonomy-selection think-aloud task and a persona-based interview. Findings suggest that older adults with mobility loss preferred an autonomy level where they command/control the robot themselves. Mobile older adults' preferences were split between commanding/controlling the robot themselves and letting the robot command/control itself. Reasons for their preferences were related to decision making and were task-specific. Additionally, findings from the persona-based interview support the Technology Acceptance Model (TAM) constructs, as well as adaptability, reliability, and trust, as positively correlated with perceptions of usefulness. However, despite the positive correlation, barriers and facilitators of acceptance identified in the interviews suggest that perceived-usefulness judgments are complex, and some questionnaire constructs were interpreted differently between participants. Thus, care should be taken when applying TAM constructs to other domains, such as robot assistance to promote older adults' independence.
39

Montreuil, Vincent. „Interaction décisionnelle homme-robot : la planification de tâches au service de la sociabilité du robot“. Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00401050.

This thesis addresses the problem of the assistant robot, and more particularly the decisional aspects related to it. An assistant robot has to interact with humans, which means it must integrate into its high-level decisional process the social constraints inherent in behaviour that is acceptable to its human partner(s). This thesis proposes an approach for describing, in a generic way, various social rules that are introduced into the robot's planning process in order to evaluate the social quality of candidate plans and retain only the most appropriate one(s). It also describes the implementation of this approach in the form of a task planner called HATP (Human Aware Task Planner). Finally, it validates the approach through a simulation scenario and a deployment on a real robot.
40

Jin, Emelie, und Ella Johnston. „Question generation for language café interaction between robot and human : NLP applied in a robot“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259555.

Conversational robots can be used in several contexts, one of them being language cafés. This setting demands that the robot in question can converse on many different subjects and move between them smoothly. For this to be possible, one needs to generate many questions on a range of subjects and find a way of moving from one subject to another. This work aims to do so by generating questions from a template framework and navigating between them using clustering, a scalable solution adapted to language café settings. The general value of language cafés and their role in the language learning process is also discussed.
Conversational robots can be used in many different settings, one of them being language cafés. This setting requires the robot to be able to hold a conversation about many different subjects and to change subject smoothly. To do this, one needs to generate many questions covering many different subjects and find a way to switch from one subject to another. This work aims to do so by using a template framework to generate questions and clustering to navigate between them, a scalable solution adapted to language café settings. The general value of language cafés and their role in a language learning process is also discussed.
41

Devin, Sandra. „Decisional issues during human-robot joint action“. Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19921/1/DEVIN_Sandra.pdf.

In the future, robots will become our companions and co-workers. They will gradually appear in our environment, to help elderly or disabled people or to perform repetitive or unsafe tasks. However, we are still far from a truly autonomous robot that would be able to act with humans in a natural, efficient and secure manner. To endow robots with the capacity to act naturally with humans, it is important to first study how humans act together. Consequently, this manuscript starts with a state of the art on joint action in psychology and philosophy before presenting the application of the principles gained from this study to human-robot joint action. We then describe the supervision module for human-robot interaction developed during the thesis. Part of the work presented in this manuscript concerns the management of what we call a shared plan. Here, a shared plan is a partially ordered set of actions to be performed by humans and/or the robot for the purpose of achieving a given goal. First, we present how the robot estimates the beliefs of its human partners concerning the shared plan (called mental states) and how it takes these mental states into account during shared-plan execution. This allows it to communicate in a clever way about potentially divergent beliefs between the robot's and the humans' knowledge. Second, we present the abstraction of shared plans and the postponing of some decisions. Indeed, in previous works, the robot took all decisions at planning time (who should perform which action, which object to use, and so on), which could be perceived as unnatural by the human during execution, as it imposes one solution in preference to any other. This work endows the robot with the capacity to identify which decisions can be postponed to execution time and to take the right decision according to the human's behavior, in order to obtain fluent and natural robot behavior.
The complete shared-plan management system has been evaluated in simulation and with real robots in the context of a user study. Thereafter, we present our work concerning the non-verbal communication needed for human-robot joint action. This work focuses on how to manage the robot's head, which allows it to transmit information concerning the robot's activity and what it understands of the human's actions, as well as coordination signals. Finally, we present how to mix planning and learning in order to make the robot more efficient in its decision process. The idea, inspired by neuroscience studies, is to limit the use of planning (which is adapted to the human-aware context but costly) by letting the learning module make the choices when the robot is in a "known" situation. The first results obtained demonstrate the potential interest of the proposed solution.
42

Ameri, Ekhtiarabadi Afshin. „Unified Incremental Multimodal Interface for Human-Robot Interaction“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.

Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and therefore reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way humans communicate with each other, leading to easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method employed on an industrial robot. We show that operating the system is made easier by using this interface.
43

Benkaouar, johal Wafa. „Companion Robots Behaving with Style : Towards Plasticity in Social Human-Robot Interaction“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM082/document.

Nowadays, companion robots offer real capabilities and functionalities. Their acceptability in our homes is, however, still an object of study, since the motivations for and value of companionship between a robot and a child have not yet been established. Classically, social robots displayed generic behaviours that did not take inter-individual differences into account. More and more work in Human-Robot Interaction addresses the personalisation of the companion. Personalisation and control of the companion would allow the user to better understand its behaviours. Offering a palette of expressions for a companion playing a social role would allow users to customize their companion according to their preferences. In this work, we propose a plasticity framework for human-robot interaction. We use a Scenario-Based Design method to elicit the social roles expected of companion robots. Then, drawing on the literature of several disciplines, we propose to represent these variations in a companion robot's behaviour by behavioural styles. Behavioural styles are defined according to the social role through non-verbal expressivity parameters. These parameters (static, dynamic and decorators) make it possible to transform so-called neutral movements into styled movements. We conducted a video-based study, showing two robots with styled movements, in order to evaluate the expressivity of two parenting styles by two types of robot. The results show that participants were able to differentiate the styles in terms of dominance and authority, in agreement with the psychological theory on these styles. We found that the style preferred by parents was not correlated with their own style as parents.
Consequently, behavioural styles appear to be relevant tools for the social personalisation of a companion robot by parents. A second experiment, in an apartment, involving 16 children in child-robot interactions, showed that parents and children rather expect a robot to be versatile and able to play several roles at home. This study also showed that behavioural styles influence children's bodily attitudes during interaction with the robot. Dimensions classically used in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect 2 sensor. In this thesis we also propose the modularisation of a previously proposed cognitive and affective architecture, resulting in the Cognitive and Affective Interaction-Oriented (CAIO) architecture for social human-robot interaction. This architecture has been implemented in ROS, allowing its use by social robots. We also propose implementations of the Stimulus Evaluation Checks (SECs) of [Scherer, 2009] for two robotic platforms, allowing the dynamic expression of emotion. We believe that behavioural styles and the CAIO architecture can prove useful for improving the acceptability and sociability of companion robots.
Companion robots are becoming technologically and functionally more and more capable. However, these robots are not yet accepted in home environments, as the worth of having such a robot and of its companionship has not been established. Classically, social robots displayed generic social behaviours and did not take inter-individual differences into account. More and more work in Human-Robot Interaction goes towards personalisation of the companion. Personalisation and control of the companion could lead to a better understanding of the robot's behaviour. Proposing several ways of expression for companion robots playing a role would allow users to customize their companion to their social preferences. In this work, we propose a plasticity framework for Human-Robot Interaction. We used a Scenario-Based Design method to elicit social roles for companion robots. Then, based on the literature in several disciplines, we propose to depict variations in the behaviour of the companion robot with behavioural styles. Behavioural styles are defined according to the social role with non-verbal expressive parameters. The expressive parameters (static, dynamic and decorators) allow neutral motions to be transformed into styled motions. We conducted a perceptual study through a video-based survey showing two robots displaying styles, allowing us to evaluate the expressibility of two parenting behavioural styles by two kinds of robot. We found that participants were indeed able to discriminate between the styles in terms of dominance and authoritativeness, which is in line with the psychological theory on these styles. Most importantly, we found that the styles preferred by parents for their children were not correlated with their own parental practice.
Consequently, behavioural styles are relevant cues for the social personalisation of the companion robot by parents. A second experimental study in a natural environment, involving child-robot interaction with 16 children, showed that parents and children expected a versatile robot able to play several social roles. This study also showed that behavioural styles had an influence on the children's bodily attitudes during the interaction. Common dimensions studied in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect 2 sensor. In this thesis, we also propose a modularisation of a previously proposed affective and cognitive architecture, resulting in the new Cognitive, Affective Interaction-Oriented (CAIO) architecture. This architecture has been implemented in the ROS framework, allowing it to be used on social robots. We also propose instantiations of the Stimulus Evaluation Checks of [Scherer, 2009] for two robotic platforms, allowing the dynamic expression of emotions. Both the behavioural-style framework and the CAIO architecture can be useful for socialising companion robots and improving their acceptability.
44

Najar, Anis. „Shaping robot behaviour with unlabeled human instructions“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066152.

Most current interactive learning systems rely on predefined protocols that can be constraining for the user. This thesis addresses the problem of interpreting instructions, in order to relax the constraint of predetermining their meanings. We propose a system that allows a human to guide a robot's learning through unlabeled instructions. Our approach consists in grounding the meaning of instruction signals in the task learning process and simultaneously using them to guide learning. This approach offers the human more freedom in choosing the signals to use, and reduces the engineering effort by removing the need to encode the meaning of each instruction signal. We implement our system as a modular architecture, called TICS, which makes it possible to combine different sources of information: a reward function, evaluative feedback, and unlabeled instructions. This offers greater flexibility in learning, by allowing the user to choose between different learning modes. We propose several methods for interpreting instructions, and a new method for combining evaluative feedback with a predefined reward function. We evaluate our system through a series of experiments, performed both in simulation and with real robots. The experimental results demonstrate the effectiveness of our system in accelerating the learning process and in reducing the number of interactions with the user.
Most current interactive learning systems rely on predefined protocols that constrain the interaction with the user. Relaxing the constraints of interaction protocols can therefore improve the usability of these systems. This thesis tackles the question of interpreting human instructions, in order to relax the constraints of predetermining their meanings. We propose a framework that enables a human teacher to shape a robot's behaviour by interactively providing it with unlabeled instructions. Our approach consists in grounding the meaning of instruction signals in the task learning process, and using them simultaneously to guide that process. This approach has a two-fold advantage. First, it provides more freedom to the teacher in choosing his preferred signals. Second, it reduces the required engineering effort, by removing the necessity to encode the meaning of each instruction signal. We implement our framework as a modular architecture, named TICS, that offers the possibility to combine different information sources: a predefined reward function, evaluative feedback and unlabeled instructions. This allows for more flexibility in the teaching process, by enabling the teacher to switch between different learning modes. In particular, we propose several methods for interpreting instructions, and a new method for combining evaluative feedback with a predefined reward function. We evaluate our framework through a series of experiments, performed both in simulation and with real robots. The experimental results demonstrate the effectiveness of our framework in accelerating the task learning process and in reducing the number of required interactions with the teacher.
45

Lirussi, Igor. „Human-Robot interaction with low computational-power humanoids“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.

This article investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project was carried out during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users with natural-language communication, being able to capture and understand the person's needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes the understanding of their needs possible. The whole experience, to be natural, is independent of the GUI, which is used only as an auxiliary instrument. Furthermore, the humanoid can communicate through gestures, touch, and visual perception and feedback. This creates a totally new type of interaction where the robot is not just a machine to use, but a figure to interact and talk with: a social robot.
46

Miners, William Ben. „Toward Understanding Human Expression in Human-Robot Interaction“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.

Intelligent devices are quickly becoming necessities that support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of time and effort required to communicate our intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinder communication efficiency. These constraints restrict the usefulness of intelligent devices in supporting our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks at which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
APA, Harvard, Vancouver, ISO and other citation styles
47

Morvan, Jérémy. „Understanding and communicating intentions in human-robot interaction“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166445.

Full text of the source
Annotation:
This thesis is about the collaboration and interaction between a robot and a human agent. The goal is to use the robot as a coworker, by implementing the premises of an interaction system that makes the interaction as natural as possible. This requires the robot to have a vision system that allows it to understand the intentions of the human. This thesis work is intended to be part of a larger project aimed at extending the competences of the programmable industrial robot Baxter, made by Rethink Robotics. Due to the limited vision abilities of this robot, a Kinect camera is added on top of its head. This thesis covers human gesture recognition from the Kinect data and the robot's reactions to these gestures through visual feedback and actions.
APA, Harvard, Vancouver, ISO and other citation styles
48

Busch, Baptiste. „Optimization techniques for an ergonomic human-robot interaction“. Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.

Full text of the source
Annotation:
Human-Robot Interaction (HRI) is a growing field in the robotics community. By its very nature it brings together researchers from various domains, including psychology, sociology and, of course, robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in shared environments, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction.
In this thesis, we propose methods to include ergonomics and human factors in motion and decision planning algorithms, to automate the process of generating an ergonomic interaction. The solutions we propose use cost functions that encapsulate human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction.
First, we propose a method to increase the legibility of the robot's motions, so as to achieve a better understanding of its intentions. Our approach does not require modeling the concept of legible motion but penalizes trajectories that lead to late predictions or mispredictions of the robot's intentions during the live execution of a shared task. In several user studies we achieved substantial gains in prediction time and reduced interpretation errors.
Second, we tackle the problem of choosing actions and planning motions that maximize physical ergonomics on the human side. Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The user studies conducted show that our method leads to safer working postures and a better perceived interaction.
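The second contribution, choosing robot actions by simulating the resulting human posture against an ergonomic cost, can be sketched as follows. This is a toy illustration, not the thesis's pipeline: the joint names, neutral angles and penalty values are invented stand-ins for the well-accepted posture evaluation (e.g. a RULA-style score) the abstract refers to.

```python
# Toy sketch: the joint set, neutral angles and penalties are hypothetical,
# standing in for a RULA-style ergonomic posture evaluation.

def ergonomic_cost(posture):
    """Penalize deviation of each joint angle (degrees) from a neutral
    working posture, with a surcharge for an arm raised above shoulder."""
    neutral = {"shoulder": 20.0, "elbow": 90.0, "trunk": 0.0}
    cost = sum(abs(posture[j] - neutral[j]) for j in neutral)
    if posture["shoulder"] > 90.0:  # arm above shoulder height: risky posture
        cost += 50.0
    return cost

def choose_action(candidates):
    """Simulate each candidate (robot action, predicted human posture)
    pair and keep the one with the lowest ergonomic cost."""
    return min(candidates, key=lambda c: ergonomic_cost(c["posture"]))
```

The design point is that ergonomics enters as just another term in the planner's cost function, so a handover placed low in front of the human wins over one that forces the arm above the shoulder.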
APA, Harvard, Vancouver, ISO and other citation styles
49

Palathingal, Xavier P. „A framework for long-term human-robot interaction /“. abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1446798.

Full text of the source
Annotation:
Thesis (M.S.)--University of Nevada, Reno, 2007.
"May, 2007." Includes bibliographical references (leaves 44-46). Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2007]. 1 microfilm reel ; 35 mm.
APA, Harvard, Vancouver, ISO and other citation styles
50

Kapellmann-Zafra, Gabriel. „Human-swarm robot interaction with different awareness constraints“. Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19396/.

Full text of the source
Annotation:
Swarm robots are not yet ready to work in real-world environments, in spaces shared with humans. The real world is unpredictable, complex and dynamic, and swarm systems are still unable to adapt to unexpected situations. However, if humans were able to share their experience and knowledge with these systems, swarm robots could be one step closer to working outside the research lab. To achieve this, research must challenge human interaction under more realistic real-world constraints. This thesis presents a series of studies that explore how human operators with limited situational and/or task awareness interact with swarms of robots. It seeks to inform the development of interaction methodologies and interfaces so that they are better adapted to real-world environments. The first study explores how an operator with a bird's-eye perspective can guide a swarm of robots transporting a large object through an environment with obstacles. To better emulate some restricted real-world environments, in the second study the operator is denied access to the bird's-eye perspective. This restriction limits the operator's situational awareness while collaborating with the swarm. Finally, in the third study, limited task awareness is included as an additional restriction: the operator has to deal not only with limited situational awareness but also with limited information regarding the objective. Results show that awareness limitations can have significant negative effects on the operator's performance, yet these effects can be overcome with proper training methods. Across all studies, a series of experiments is conducted in which operators interact with swarms of either real or simulated robots. In both cases, the development of the interaction interfaces suggests that careful design can support the operator in overcoming awareness problems.
APA, Harvard, Vancouver, ISO and other citation styles