
Theses on the topic "Human-robot interaction"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Browse the top 50 dissertations (graduate and doctoral theses) for research on the topic "Human-robot interaction".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Browse theses from many scientific fields and compile a correct bibliography.

1

Kruse, Thibault. "Planning for human robot interaction". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.

Full text
Abstract:
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria. It must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in dynamic human-robot interactions, where the human's future actions are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not merely to consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, this thesis provides the approach of local replanning of transport actions based on navigational costs, to provide opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
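To make the directional-cost idea concrete, here is a minimal Python sketch of what such a term might look like; the weights, the exponential proximity model, and the function name are illustrative assumptions, not Kruse's actual formulation.

```python
import numpy as np

def directional_cost(robot_pos, robot_heading, human_pos, human_heading,
                     w_prox=1.0, w_dir=2.0):
    """Illustrative per-waypoint cost, not Kruse's exact formulation.

    A plain proximity penalty is augmented with a directional term that grows
    when the robot heads straight at a person who is facing it: the head-on
    conflict situations that directional costs are meant to resolve.
    All vectors are 2-D numpy arrays; headings are unit vectors.
    """
    offset = human_pos - robot_pos
    dist = np.linalg.norm(offset) + 1e-9
    proximity = np.exp(-dist)                                         # closeness penalty
    approach = max(0.0, float(np.dot(robot_heading, offset / dist)))  # 1 = head-on
    facing = max(0.0, float(np.dot(human_heading, -offset / dist)))   # human faces robot
    return w_prox * proximity * (1.0 + w_dir * approach * facing)
```

Summing such a term over candidate trajectories would make head-on approaches expensive while leaving passes behind a person cheap, which is the kind of disambiguation the abstract describes.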
APA, Harvard, Vancouver, ISO, and other styles
2

Bodiroža, Saša. "Gestures in human-robot interaction". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.

Full text
Abstract:
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer a meaning. In order for people to use gestures intuitively and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary shows which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, that is, the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a small number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
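The core of such a recognizer is easy to sketch: dynamic time warping gives a distance between trajectories, and one-shot learning then reduces to nearest-neighbor matching against a single stored template per class. The Python sketch below assumes gestures are arrays of joint or hand positions over time; it illustrates the general technique, not the thesis's implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories (T x D arrays)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """One-shot nearest neighbor: one stored template per gesture class."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW warps time, a template recorded at one speed still matches the same gesture performed faster or slower, which is what makes the single-example regime workable.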
APA, Harvard, Vancouver, ISO, and other styles
3

Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.

Full text
Abstract:
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinder communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
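As a rough illustration of uncertainty-aware fusion across modalities (a much simpler stand-in for the thesis's knowledge-based, closed-loop architecture), conflicting per-modality posteriors can be combined log-linearly so that disagreement flattens the fused distribution instead of being discarded; the weights and names below are assumptions.

```python
import numpy as np

def fuse(posteriors, reliabilities):
    """Log-linear fusion of per-modality class posteriors (illustrative).

    posteriors: list of 1-D arrays, each a distribution over the same intent
    classes; reliabilities: per-modality weights in [0, 1]. When modalities
    conflict, the fused distribution flattens, so low confidence is explicit
    rather than hidden.
    """
    log_p = sum(w * np.log(p + 1e-12) for p, w in zip(posteriors, reliabilities))
    fused = np.exp(log_p - log_p.max())
    return fused / fused.sum()

# Gesture and facial-expression modules disagree over three intents:
print(fuse([np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])], [1.0, 1.0]))
```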
APA, Harvard, Vancouver, ISO, and other styles
4

Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming". Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.

Full text
Abstract:
Over the past few decades the use of industrial robots has increased the efficiency as well as competitiveness of many companies. Despite this fact, in many cases, robot automation investments are considered to be technically challenging. In addition, for most small and medium sized enterprises (SME) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, thus assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. In this thesis we mainly address two issues. The first issue is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate based programming paradigm, which currently dominates the field, to an object based programming scheme. The second issue addressed is a general framework for implementing multimodal interfaces. There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline and includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
robot colleague project
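As a toy illustration of the shift from coordinate-based to object-based programming (not Akan's multimodal grammar language, which is far richer), a high-level utterance can be mapped to an object-centric action with a few lines of parsing; the grammar and names here are invented for the example.

```python
import re

# Toy grammar for object-based commands, e.g. "pick the red cube".
COMMAND = re.compile(r"(?P<verb>pick|place|move)\s+the\s+(?P<attr>\w+)\s+(?P<obj>\w+)")

def parse(utterance):
    """Map a spoken command to an object-centric action tuple."""
    m = COMMAND.match(utterance.lower().strip())
    if not m:
        raise ValueError(f"unrecognized command: {utterance!r}")
    return m["verb"], {"type": m["obj"], "attribute": m["attr"]}

print(parse("Pick the red cube"))   # ('pick', {'type': 'cube', 'attribute': 'red'})
```

The point of the design is that the operator names objects and goals; resolving them to coordinates is left to the robot's perception and planning layers.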
APA, Harvard, Vancouver, ISO, and other styles
5

Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping". Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Chien-Ming. "Joint attention in human-robot interaction". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Full text
Abstract:
Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn considerable attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responder has shifted their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as infants age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see whether the caregiver is attending to it. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating a better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
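The ensuring component lends itself to a compact sketch: initiate by gazing at the referent, then glance back at the partner to verify the attention shift. The robot interface below (look_at, human_gaze_target, human_face) is hypothetical; the loop only illustrates the model's third component, not Huang's implementation.

```python
import time

def ensure_joint_attention(robot, target, timeout=5.0, glance_period=1.0):
    """Initiate joint attention on `target`, then verify the partner follows.

    `robot` is a hypothetical interface: look_at() drives gaze and
    human_gaze_target() estimates where the partner is looking.
    """
    robot.look_at(target)                    # initiate: gaze at the referent
    deadline = time.time() + timeout
    while time.time() < deadline:
        robot.look_at(robot.human_face)      # ensuring: glance back at the partner
        if robot.human_gaze_target() == target:
            return True                      # partner attends: joint attention reached
        robot.look_at(target)                # re-initiate by gazing at the referent
        time.sleep(glance_period)
    return False                             # caller may escalate (point, speak)
```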
APA, Harvard, Vancouver, ISO, and other styles
7

Bremner, Paul. "Conversational gestures in human-robot interaction". Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.

Full text
Abstract:
Humanoid service robotics is a rapidly developing field of research. One desired purpose of such service robots is for them to be able to interact and cooperate with people. In order for them to be able to do so successfully they need to be able to communicate effectively. One way of achieving this is for humanoid robots to communicate in a human-like way resulting in easier, more familiar and ultimately more successful human-robot interaction. An integral part of human communications is co-verbal gesture; thus, investigation into a means of their production and whether they engender the desired effects is undertaken in this thesis. In order for gestures to be able to be produced using BERTI (Bristol and Elumotion Robotic Torso I), the robot designed and built for this work, a means of coordinating the joints to produce the required hand motions was necessary. A relatively simple method for doing so is proposed which produces motion that shares characteristics with proposed mathematical models for human arm movements, i.e., smooth and direct motion. It was then investigated whether, as hypothesised, gestures produced using this method were recognisable and positively perceived by users. A series of user studies showed that the gestures were indeed as recognisable as their human counterparts, and positively perceived. In order to enable users to form more confident opinions of the gestures, investigate whether improvements in human-likeness would affect user perceptions, and enable investigation into the affects of robotic gestures on listener behaviour, methods for producing gesture sequences were developed. Sufficient procedural information for gesture production was not present in the anthropological literature, so empirical evidence was sought from monologue performances. This resulted in a novel set of rules for production of beat gestures (a key type of co-verbal gesture), as well as some other important procedural methods; these were used to produce a two minute monologue with accompanying gestures. A user study carried out using this monologue reinforced the previous finding that positively perceived gestures were produced. It also showed that gesture sequences using beat gestures generated using the rules, were not significantly preferable to those containing only naively selected pre-scripted beat gestures. This demonstrated that minor improvements in human-likeness offered no significant benefit in user perception. Gestures have been shown to have positive effects on listener engagement and memory (of the accompanied speech) in anthropological studies. In this thesis the hypothesis that similar effects would be observed when BERTI performed co-verbal gestures was investigated. It was found that there was a highly significant improvement in user engagement, as well as a significant improvement in certainty of data recalled. Thus, some of the expected effects of co-verbal gesture were observed.
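The thesis's joint-coordination method is its own, but a standard mathematical model of smooth, direct human arm motion, of the kind the abstract alludes to, is the minimum-jerk profile of Flash and Hogan, sketched here as a point of reference.

```python
import numpy as np

def minimum_jerk(x0, x1, duration, n=100):
    """Minimum-jerk profile between positions x0 and x1 over `duration` seconds.

    x(t) = x0 + (x1 - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / duration,
    yields the smooth, direct motion with a bell-shaped velocity profile that
    is characteristic of human point-to-point arm movements.
    """
    t = np.linspace(0.0, duration, n)
    s = t / duration
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return t, x0 + (x1 - x0) * shape
```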
APA, Harvard, Vancouver, ISO, and other styles
8

Fiore, Michelangelo. "Decision Making in Human-Robot Interaction". Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.

Full text
Abstract:
In recent years there has been increasing interest in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for Human-Robot Interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving a quick overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems. In the observer problem the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal. In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand if they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the robot's partner activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables, such as the human's commitment to the task, and to react accordingly, splitting the decision process into different levels. We present an example of a collaborative planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component. In the robot teacher problem we explain how we can adapt the plan explanation and monitoring of the system to the user's knowledge of the task to perform. Using this idea, the robot will explain in less detail tasks that the user has already performed several times, going more in-depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity. Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing the speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help.
This system was integrated with other components to deploy a robot in the Schiphol Airport of Amsterdam, to guide groups of passengers to their flight gates. We performed user studies both in a laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
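The intention-inference layer can be caricatured as Bayesian filtering over a set of candidate goals, with likelihoods derived from how the human's motion relates to objects; the thesis's actual machinery (Bayesian networks feeding Markov Decision Processes) is richer than this minimal Python sketch, whose names are invented.

```python
def update_intentions(prior, likelihoods):
    """One Bayesian filtering step over candidate human intentions.

    prior: dict intention -> probability; likelihoods: dict intention ->
    P(observation | intention), e.g. derived from how directly the human
    is moving toward the object each intention refers to.
    """
    posterior = {k: prior[k] * likelihoods.get(k, 1e-6) for k in prior}
    z = sum(posterior.values())
    return {k: v / z for k, v in posterior.items()}

belief = {"take_mug": 0.5, "take_book": 0.5}
belief = update_intentions(belief, {"take_mug": 0.8, "take_book": 0.1})
print(belief)   # belief shifts toward "take_mug"
```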
APA, Harvard, Vancouver, ISO, and other styles
9

Alanenpää, Madelene. "Gaze detection in human-robot interaction". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Full text
Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above from the first-person video stream and discriminate whether the gaze of the person fell on one of the objects of importance and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained program was used to extract the relevant objects for each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long.
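The parsing step described here is straightforward to sketch: for each video frame, check whether the gaze point falls inside any detected bounding box and accumulate dwell time. The data format and function name below are illustrative assumptions, not the thesis's code.

```python
def gaze_dwell_times(frames, fps=30.0):
    """Accumulate per-object gaze dwell time (seconds) over a video.

    frames: iterable of (gaze_xy, detections) pairs, where detections is a
    list of (label, (x1, y1, x2, y2)) boxes from an object detector such as
    YOLOv4.
    """
    dwell = {}
    for (gx, gy), detections in frames:
        for label, (x1, y1, x2, y2) in detections:
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                dwell[label] = dwell.get(label, 0.0) + 1.0 / fps
                break                     # count at most one object per frame
    return dwell
```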
APA, Harvard, Vancouver, ISO, and other styles
10

Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer". Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.

Full text
Abstract:
Master's in Mechanical Engineering
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction, an object transfer task. Firstly, this work presents a review of the current research within the field of Human-Robot Interaction, where two approaches are distinguished, but simultaneously required: a pre-contact approximation and an interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) the robot control to perform the inherent movements of the transfer assignment, (2) the human-robot pre-interaction and (3) the interaction by contact. The capabilities of a 3D sensor and force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, some experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements and extensions are required to improve the solution's performance and range.
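The contact phase of such a handover reduces, at its simplest, to monitoring the interaction force and releasing once the receiver pulls; the interfaces and threshold below are placeholders, not the thesis's ROS implementation.

```python
import time

def handover_release(gripper, force_sensor, pull_threshold=2.0, rate_hz=100.0):
    """Release the object once the receiver pulls on it (illustrative logic).

    `gripper` and `force_sensor` are placeholder hardware interfaces; in the
    thesis these roles are played by a force/tactile-sensed gripper under ROS.
    """
    while True:
        force = force_sensor.read()        # external force magnitude, newtons
        if force > pull_threshold:         # the human tugs: let go
            gripper.open()
            return
        time.sleep(1.0 / rate_hz)
```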
APA, Harvard, Vancouver, ISO, and other styles
11

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion". Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
Abstract:
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
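The query decision can be framed decision-theoretically: ask the operator only when the expected benefit of the observation exceeds its cost. The sketch below assumes a hypothetically perfect human answer and a scalar query cost, a much cruder rule than the thesis's formulation.

```python
import numpy as np

def should_query_human(belief, query_cost, value_of_correct=1.0):
    """Ask the operator only when the expected benefit exceeds the cost.

    belief: array of class probabilities held by the robot. Assuming a
    (hypothetically perfect) human answer would lift the success probability
    from max(belief) to 1, the expected benefit is the current error risk.
    """
    expected_benefit = (1.0 - belief.max()) * value_of_correct
    return expected_benefit > query_cost

print(should_query_human(np.array([0.55, 0.45]), query_cost=0.2))   # True
print(should_query_human(np.array([0.95, 0.05]), query_cost=0.2))   # False
```

Under this rule the system's autonomy adjusts itself at run time: confident beliefs suppress queries, uncertain ones trigger them.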
APA, Harvard, Vancouver, ISO, and other styles
12

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion". University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
Abstract:
PhD
APA, Harvard, Vancouver, ISO, and other styles
13

Ali, Muhammad. "Contribution to decisional human-robot interaction: towards collaborative robot companions". Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00719684.

Full text
Abstract:
Human-robot interaction is entering an interesting phase in which the relationship between a human and a robot is envisioned as a partnership rather than a simple master-slave relationship. For this to become a reality, the robot needs to understand human behavior. It is not enough for it to react appropriately; it must also be socially proactive. To put such behavior into practice, the roboticist must draw on the already rich literature of human socio-cognitive science. In this work, we identify the key elements of such interaction in the context of a joint task, with particular emphasis on how humans must collaborate to successfully achieve a joint action. We show the application of these elements to a robotic system in order to enrich social human-robot interaction for decision making. In this respect, a contribution to the management of the robot's high-level goals and proactive behavior is presented, together with the description of a collaborative decision model for a human-robot collaborative task. Finally, a study of human-robot interaction shows the importance of appropriately timing a communication action during joint activities with a human.
APA, Harvard, Vancouver, ISO, and other styles
14

Ali, Muhammad. "Contributions to decisional human-robot interaction : towards collaborative robot companions". Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0003/document.

Full text
Abstract:
Human Robot Interaction is entering an interesting phase where the relationship with a robot is envisioned more as one of companionship with the human partner than a mere master-slave relationship. For this to become a reality, the robot needs to understand human behavior and not only react appropriately but also be socially proactive. A companion robot will also need to collaborate with the human in his daily life and will require a reasoning mechanism to manage the collaboration, as well as to handle the uncertainty in the human's intention to engage and collaborate. In this work, we identify key elements of such interaction in the context of a collaborative activity, with special focus on how humans successfully collaborate to achieve a joint action. We show the application of these elements in a robotic system to enrich the social decision-making aspect of its human-robot interaction. In this respect, we provide a contribution to managing the robot's high-level goals and proactive behavior, and a description of a coactivity decision model for collaborative human-robot tasks. Finally, an HRI user study demonstrates the importance of timing verbal communication during proactive human-robot joint action.
APA, Harvard, Vancouver, ISO, and other styles
15

Mazhar, Osama. "Vision-based human gestures recognition for human-robot interaction". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS044.

Full text
Abstract:
In light of the factories of the future, to ensure productive, safe and effective interaction between robot and human coworkers, it is imperative that the robot extract the essential information from the coworker. To address this, deep learning solutions are explored and a reliable human gesture detection framework is developed in this work. Our framework is able to robustly detect static hand gestures as well as upper-body dynamic gestures. For static hand gesture detection, openpose is integrated with Kinect V2 to obtain a pseudo-3D human skeleton. With the help of 10 volunteers, we recorded an image dataset, opensign, that contains Kinect V2 RGB and depth images of 10 alpha-numeric static hand gestures taken from the American Sign Language. The "Inception V3" neural network is adapted and trained to detect static hand gestures in real time. Subsequently, we extend our gesture detection framework to recognize upper-body dynamic gestures. A spatial-attention-based dynamic gesture detection strategy is proposed that employs a multi-modal "Convolutional Neural Network - Long Short-Term Memory" deep network to extract spatio-temporal dependencies in pure RGB video sequences. The exploited convolutional neural network blocks are pre-trained on our static hand gestures dataset opensign, which allows efficient extraction of hand features. Our spatial attention module focuses on large-scale movements of the upper limbs as well as on hand images for subtle hand/finger movements, to efficiently distinguish gesture classes. This module additionally exploits the 2D upper-body pose to estimate the user's distance from the sensor for scale normalization, and to determine the parameters of the hand bounding boxes without the need for a depth sensor. The information typically extracted from a depth camera in similar strategies is learned from the opensign dataset; thus the proposed gesture recognition strategy can be implemented on any system with a monocular camera. Afterwards, we briefly explore 3D human pose estimation strategies for monocular cameras. To estimate 3D human pose, a hybrid strategy is proposed which combines the merits of discriminative 2D pose estimators with those of model-based generative approaches. Our method optimizes an objective function that minimizes the discrepancy between the position- and scale-normalized 2D pose obtained from openpose and a virtual 2D projection of a kinematic human model. For real-time human-robot interaction, an asynchronous distributed system is developed to integrate our static hand gesture detector module with the open-source physical human-robot interaction library OpenPHRI. We validate the performance of the proposed framework through a teach-by-demonstration experiment with a robotic manipulator.
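The scale-normalization trick (using limb lengths from the 2D pose instead of depth) can be sketched compactly; the keypoint layout, proportions, and function name below are illustrative assumptions rather than the thesis's code.

```python
import numpy as np

def hand_box_from_pose(wrist, elbow, box_scale=0.8):
    """Estimate a square hand bounding box from 2D pose keypoints.

    Forearm length (wrist to elbow) serves as the scale reference, so no
    depth sensor is needed. Keypoints are 2-D pixel coordinates, e.g. from
    openpose; the proportions here are invented for the example.
    """
    wrist = np.asarray(wrist, float)
    elbow = np.asarray(elbow, float)
    forearm = wrist - elbow
    side = box_scale * (np.linalg.norm(forearm) + 1e-9)
    center = wrist + 0.3 * forearm       # the hand extends a bit past the wrist
    half = side / 2.0
    return (center[0] - half, center[1] - half, center[0] + half, center[1] + half)
```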
APA, Harvard, Vancouver, ISO, and other styles
16

Jou, Yung-Tsan. "Human-Robot Interactive Control". Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1082060744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kuo, I.-Han. "Designing Human-Robot Interaction for service applications". Thesis, University of Auckland, 2012. http://hdl.handle.net/2292/19438.

Full text
Abstract:
As service robots are intended to serve at close range and cater to the needs of human users, Human-Robot Interaction (HRI) has been identified as one of the most difficult and critical challenges in research for the success of service robotics. In particular, HRI requires highly complex software integration to enable a robot to communicate in a manner that is natural and intuitive to the human user. An initial service robot prototype was developed by integrating several existing research projects at the University of Auckland and deployed in a user study. The result showed the need for more HRI abilities to interactively engage the user and perform task-specific interactions. To meet these requirements and deal with the relevant issues in software integration, I proposed a design methodology which guides HRI designers from design to implementation. In the methodology, Unified Modelling Language (UML) and an extension, UMLi, were used for modelling a robot's interactive behaviour and communicating the interaction designs within a multidisciplinary group. Notably, new design patterns for HRI were proposed to facilitate communication of the necessary cues that a robot needs to perceive or express during an interaction. The methodology also emphasises an iterative process to discover and design around limitations of existing software technologies. In addition, a component-based development approach was adopted to further help HRI designers in handling the complexity of software integration by modularising the robot's functionalities. As a case study, I applied this methodology to implement a second prototype, Charlie. In a user study with older people (65+) in an aged care facility in New Zealand, the robot was able to detect and recognise a human user in 59 percent of the interactions that occurred. Over the two-week period of the study, the robot operated robustly for six hours daily and provided assistance in the measurement of a range of vital signs. The proposed interaction patterns were also validated for future reuse. The results indicate the validity of the methodology in developing robust and interactive service applications in real-world environments.
APA, Harvard, Vancouver, ISO, and other styles
18

Ponsler, Brett. "Recognizing Engagement Behaviors in Human-Robot Interaction". Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.

Full text
Abstract:
Based on analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction which includes the concept of connection events, consisting of: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
APA, Harvard, Vancouver, ISO, and other styles
19

Holroyd, Aaron. "Generating Engagement Behaviors in Human-Robot Interaction". Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.

Full text
Abstract:
Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement recognition module. This thesis also describes a Behavior Markup Language (BML) realizer that has been developed for use in robotic applications. Instead of the existing fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot independent, open-source and uses the Robot Operating System (ROS).
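Event-driven synchronization of the kind the realizer performs can be illustrated with a toy Petri net: behaviors deposit tokens when they finish, and a transition fires only once every awaited event has arrived. This sketch is an illustration of the idea, not the thesis's ROS/BML implementation.

```python
class PetriNet:
    """Tiny Petri net: a transition fires when all its input places hold tokens."""

    def __init__(self):
        self.marking = {}                  # place -> token count
        self.transitions = []              # (inputs, outputs, action)

    def add_transition(self, inputs, outputs, action=None):
        self.transitions.append((set(inputs), set(outputs), action))

    def put(self, place):
        """Deposit a token (e.g. a motor event reporting completion)."""
        self.marking[place] = self.marking.get(place, 0) + 1
        self._fire()

    def _fire(self):
        fired = True
        while fired:
            fired = False
            for inputs, outputs, action in self.transitions:
                if all(self.marking.get(p, 0) > 0 for p in inputs):
                    for p in inputs:
                        self.marking[p] -= 1
                    for p in outputs:
                        self.marking[p] = self.marking.get(p, 0) + 1
                    if action:
                        action()
                    fired = True

net = PetriNet()
net.add_transition({"gaze_done", "point_done"}, {"speech_start"},
                   lambda: print("begin speaking"))
net.put("gaze_done")
net.put("point_done")   # prints "begin speaking" once both events have arrived
```

Because firing depends on actual completion events rather than fixed timings, variability in the robot's motor execution cannot desynchronize the behaviors, which is the point the abstract makes against fixed-timing realizers.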
Gli stili APA, Harvard, Vancouver, ISO e altri
20

Marín, Urías Luis Felipe. "Reasoning about space for human-robot interaction". Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.

Testo completo
Abstract (sommario):
Human-Robot Interaction is a research area that has grown exponentially in recent years. This brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: - Motion planning for human-robot interaction: the robot uses "egocentric perspective taking" to evaluate several configurations in which it can perform different interaction tasks. - Face-to-face human-robot interaction: the robot uses perspective taking of the human as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
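As an illustration of the geometric core of visual perspective taking, the sketch below checks whether an object falls inside a person's field-of-view cone. It is a minimal Python stand-in, assuming a simple angular threshold and ignoring occlusion, which a full spatial reasoner such as the one described here would also have to handle; the function name and default field of view are hypothetical.

    import numpy as np

    def visible_from(viewpoint, gaze_dir, obj_pos, fov_deg=60.0):
        # True if obj_pos lies within half the field-of-view angle of the gaze direction
        to_obj = np.asarray(obj_pos, float) - np.asarray(viewpoint, float)
        to_obj /= np.linalg.norm(to_obj)
        gaze = np.asarray(gaze_dir, float)
        gaze /= np.linalg.norm(gaze)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze, to_obj), -1.0, 1.0)))
        return angle <= fov_deg / 2.0

    # e.g. a standing person at the origin looking along +x
    print(visible_from([0.0, 0.0, 1.7], [1.0, 0.0, 0.0], [2.0, 0.5, 1.2]))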
Gli stili APA, Harvard, Vancouver, ISO e altri
21

Valibeik, Salman. "Human robot interaction in a crowded environment". Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5677.

Testo completo
Abstract (sommario):
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb; or, if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, the robot can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. For improving system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
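A toy version of this kind of cue fusion can be written as a naive Bayes update over binary visual cues. The cue set and likelihood values below are invented for illustration only; the thesis combines richer cues in a full Bayesian network with contextual feedback.

    # Hypothetical per-cue likelihoods: P(cue | engaged), P(cue | not engaged)
    LIKELIHOODS = {
        "face_frontal": (0.8, 0.3),
        "hand_gesture": (0.6, 0.1),
        "moving_towards_robot": (0.5, 0.2),
    }

    def engagement_posterior(observed, prior=0.5):
        # Naive-Bayes fusion of binary cues into P(engaged | cues)
        p_e, p_ne = prior, 1.0 - prior
        for cue, present in observed.items():
            l_e, l_ne = LIKELIHOODS[cue]
            p_e *= l_e if present else 1.0 - l_e
            p_ne *= l_ne if present else 1.0 - l_ne
        return p_e / (p_e + p_ne)

    print(engagement_posterior({"face_frontal": True,
                                "hand_gesture": True,
                                "moving_towards_robot": False}))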
Gli stili APA, Harvard, Vancouver, ISO e altri
22

Ahmed, Muhammad Rehan. "Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction". Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-13986.

Testo completo
Abstract (sommario):
Inspiration from biological systems suggests that robots should demonstrate the same level of capabilities that are embedded in biological systems when performing safe and successful interaction with humans. The major challenge in physical human robot interaction tasks in anthropic environments is the safe sharing of the robot workspace, such that the robot will not cause harm or injury to the human under any operating condition. Embedding human-like adaptable compliance characteristics into robot manipulators can provide safe physical human robot interaction in constrained motion tasks. In robotics, this property can be achieved by using active, passive and semi-active compliant actuation devices. Traditional methods of active and passive compliance lead to complex control systems and complex mechanical design. In this thesis we present a compliant robot manipulator system with a semi-active compliant device based on a magnetorheological-fluid actuation mechanism. Human-like adaptable compliance is achieved by controlling the properties of the magnetorheological fluid inside the joint actuator. This method offers high operational accuracy, intrinsic safety and high absorption of impacts. Safety is assured by the mechanism design rather than by the conventional approach based on advanced control. Control schemes for implementing adaptable compliance are implemented in parallel with the robot motion control, which yields a much simpler interaction control strategy compared to other methods. Here we address two main issues: human robot collision safety and robot motion performance. We present existing human robot collision safety standards and evaluate the proposed actuation mechanism on the basis of static and dynamic collision tests. The static collision safety analysis is based on Yamada's safety criterion, and the adaptable compliance control scheme keeps the robot in the safe region of operation. For the dynamic collision safety analysis, Yamada's impact force criterion and the head injury criterion are employed. Experimental results validate the effectiveness of our solution. In addition, the results with the head injury criterion showed the need to investigate human biomechanics in more detail in order to acquire adequate knowledge for estimating the injury severity index for robots interacting with humans. We analysed the robot motion performance in several physical human robot interaction tasks. Three interaction scenarios are studied to simulate human robot physical contact in direct and inadvertent contact situations. Respective control disciplines for the joint actuators are designed and implemented with a much simplified adaptable compliance control scheme. The series of experimental tests in direct and inadvertent contact situations validates our solution of implementing human-like adaptable compliance during robot motion and proves safe interaction with humans in anthropic domains.
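For readers unfamiliar with the dynamic criteria mentioned, the Head Injury Criterion (HIC) is conventionally computed as the maximum over all time windows [t1, t2] of (t2 - t1) times the mean resultant head acceleration over the window raised to the power 2.5, with the window length capped (15 ms for the common HIC15 variant). A brute-force Python sketch of that textbook definition, not the thesis's implementation, follows.

    import numpy as np

    def head_injury_criterion(accel_g, dt, max_window=0.015):
        # accel_g: resultant (non-negative) head acceleration in g, sampled every dt seconds
        n = len(accel_g)
        cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral of a(t)
        w_max = int(round(max_window / dt))
        best = 0.0
        for i in range(n):
            for j in range(i + 1, min(i + w_max, n) + 1):
                t = (j - i) * dt
                avg = (cum[j] - cum[i]) / t          # mean acceleration over the window
                best = max(best, t * avg ** 2.5)
        return best

    # toy 30 ms acceleration pulse sampled at 1 kHz
    pulse = np.concatenate([np.zeros(50), 80 * np.hanning(30), np.zeros(50)])
    print(head_injury_criterion(pulse, dt=0.001))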
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Toris, Russell C. "Bringing Human-Robot Interaction Studies Online via the Robot Management System". Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.

Testo completo
Abstract (sommario):
"Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to naturally and effectively interact with robots. The importance of conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies often become expensive, time consuming, and limited to constrained demographics. This work presents the Robot Management System, a novel framework for bringing robotic experiments to the web. A detailed description of the open source system, an outline of new security measures, and a use case study of the RMS as a means of conducting user studies is presented. Using a series of navigation and manipulation tasks with a PR2 robot, three user study conditions are compared: users that are co-present with the robot, users that are recruited to the university lab but control the robot from a different room, and remote web-based users. The findings show little statistical differences between usability patterns across these groups, further supporting the use of web-based crowdsourcing techniques for certain types of HRI evaluations."
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Nitz, Pettersson Hannes, e Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Testo completo
Abstract (sommario):
The demand for robots to work in environments together with humans is growing. This places new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot with human interactions. The AI methods used are a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion-capture dataset and on four different prediction horizons: 1/2, 1/4, 1/8 and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone for a human-robot interactive system.
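A minimal sketch of this kind of LSTM predictor, written with the Keras API; the layer sizes, sampling rate, horizon and toy training data are assumptions for illustration, not the configuration used in the thesis.

    import numpy as np
    import tensorflow as tf

    HIST, HORIZON, DIMS = 10, 25, 3   # 10 past frames -> position 25 frames (1/4 s at 100 Hz) ahead

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(HIST, DIMS)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(DIMS),  # predicted (x, y, z) at t + HORIZON
    ])
    model.compile(optimizer="adam", loss="mse")

    # random data standing in for the motion-capture sequences
    X = np.random.rand(1000, HIST, DIMS).astype("float32")
    y = np.random.rand(1000, DIMS).astype("float32")
    model.fit(X, y, epochs=2, verbose=0)
    print(model.predict(X[:1], verbose=0))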
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Ameri, Ekhtiarabadi Afshin. "Unified Incremental Multimodal Interface for Human-Robot Interaction". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.

Testo completo
Abstract (sommario):
Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and therefore reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way humans communicate with each other, leading to an easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method employed on an industrial robot. We show that operating the system is made easier by using this interface.
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Vogt, David. "Learning Continuous Human-Robot Interactions from Human-Human Demonstrations". Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-233262.

Testo completo
Abstract (sommario):
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two interacting partners are recorded via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognise the movements of the human interaction partner and to generate adapted robot movements. The performance of the approach is evaluated in three complex applications, each requiring continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed and safe collaboration with robots.
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Peltason, Julia [Verfasser]. "Modeling Human-Robot-Interaction based on generic Interaction Patterns / Julia Peltason". Bielefeld : Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1052650937/34.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Najar, Anis. "Shaping robot behaviour with unlabeled human instructions". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066152.

Testo completo
Abstract (sommario):
Most current interactive learning systems rely on predefined protocols that constrain the interaction with the user. Relaxing the constraints of interaction protocols can therefore improve the usability of these systems. This thesis tackles the question of interpreting human instructions, in order to relax the constraint of predetermining their meanings. We propose a framework that enables a human teacher to shape a robot's behaviour by interactively providing it with unlabeled instructions. Our approach consists in grounding the meaning of instruction signals in the task-learning process, and using them simultaneously for guiding the latter. This approach has a two-fold advantage. First, it provides more freedom to the teacher in choosing his preferred signals. Second, it reduces the required engineering effort, by removing the necessity to encode the meaning of each instruction signal. We implement our framework as a modular architecture, named TICS, that offers the possibility to combine different information sources: a predefined reward function, evaluative feedback and unlabeled instructions. This allows for more flexibility in the teaching process, by enabling the teacher to switch between different learning modes. In particular, we propose several methods for interpreting instructions, and a new method for combining evaluative feedback with a predefined reward function. We evaluate our framework through a series of experiments, performed both in simulation and with real robots. The experimental results demonstrate the effectiveness of our framework in accelerating the task-learning process and in reducing the number of required interactions with the teacher.
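One simple way to picture the combination of evaluative feedback with a predefined reward is additive shaping inside a tabular Q-learning update, as sketched below. The weighting scheme and constants are assumptions for illustration; they are not the combination method actually used in the TICS architecture.

    from collections import defaultdict

    Q = defaultdict(float)
    ALPHA, GAMMA, BETA = 0.1, 0.95, 0.5   # learning rate, discount, feedback weight

    def update(state, action, next_state, reward, feedback, actions):
        # feedback is the human's evaluative signal in {-1, 0, +1}
        shaped = reward + BETA * feedback
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (shaped + GAMMA * best_next - Q[(state, action)])

    update("s0", "right", "s1", reward=0.0, feedback=+1, actions=["left", "right"])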
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.

Testo completo
Abstract (sommario):
This thesis investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project was carried out during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users with natural language communication, capturing and understanding the person's needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes it possible to understand their needs. The whole experience, to be natural, is independent of the GUI, which is used just as an auxiliary instrument. Furthermore, the humanoid can communicate through gestures, touch, and visual perception and feedback. This creates a totally new type of interaction where the robot is not just a machine to use, but a figure to interact and talk with: a social robot.
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Morvan, Jérémy. "Understanding and communicating intentions in human-robot interaction". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166445.

Testo completo
Abstract (sommario):
This thesis is about the collaboration and interaction between a robot and a human agent. The goal is to use the robot as a co-worker, by implementing the premises of an interaction system that would make the interaction as natural as possible. This requires the robot to have a vision system that allows it to understand the intentions of the human. This thesis work is intended to be part of a larger project aimed at extending the competences of the programmable industrial robot Baxter, made by Rethink Robotics. Due to the limited vision abilities of this robot, a Kinect camera is added on top of its head. This thesis covers human gesture recognition from the Kinect data and robot reactions to these gestures through visual feedback and actions.
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Busch, Baptiste. "Optimization techniques for an ergonomic human-robot interaction". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.

Testo completo
Abstract (sommario):
Human-Robot Interaction (HRI) is a growing field in the robotics community. By its very nature it brings together researchers from various domains, including psychology, sociology and of course robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in shared environments, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction. In this thesis, we propose methods to include ergonomics and human factors in the motion and decision planning algorithms, to automate this process of generating an ergonomic interaction. The solutions we propose make use of cost functions that encapsulate human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction. First, we propose a method to increase the legibility of robot motions, to achieve a better understanding of the robot's intentions. Our approach does not require modeling the concept of legible motions but penalizes the trajectories that lead to late predictions or mispredictions of the robot's intentions during a live execution of a shared task. In several user studies we achieve substantial gains in terms of prediction time and reduced interpretation errors. Second, we tackle the problem of choosing actions and planning motions that maximize physical ergonomics on the human side. Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The conducted user studies show that our method leads to safer working postures and a better perceived interaction.
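To make the legibility idea concrete, the sketch below scores a trajectory by the probability a simple observer model assigns to the robot's true goal along the way, so trajectories that keep the observer guessing score lower. The detour-based observer model is a common assumption in the legibility literature and stands in for, rather than reproduces, the cost functions used in the thesis.

    import numpy as np

    def goal_posterior(position, start, goals, beta=5.0):
        # The observer assumes roughly efficient motion: a goal is likely if
        # passing through the current position adds little detour to a straight path.
        position, start = np.asarray(position, float), np.asarray(start, float)
        detour = np.array([np.linalg.norm(position - start)
                           + np.linalg.norm(np.asarray(g, float) - position)
                           - np.linalg.norm(np.asarray(g, float) - start)
                           for g in goals])
        logits = -beta * detour
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def legibility_score(trajectory, start, goals, true_goal):
        # Mean probability assigned to the true goal along the whole path
        return float(np.mean([goal_posterior(q, start, goals)[true_goal]
                              for q in trajectory]))

    goals = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
    path = [[0.2, -0.05], [0.5, -0.1], [0.8, -0.05]]   # exaggerates away from the decoy goal
    print(legibility_score(path, start=[0.0, 0.0], goals=goals, true_goal=0))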
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Palathingal, Xavier P. "A framework for long-term human-robot interaction /". abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1446798.

Testo completo
Abstract (sommario):
Thesis (M.S.)--University of Nevada, Reno, 2007.
"May, 2007." Includes bibliographical references (leaves 44-46). Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2007]. 1 microfilm reel ; 35 mm.
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Kapellmann-Zafra, Gabriel. "Human-swarm robot interaction with different awareness constraints". Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19396/.

Testo completo
Abstract (sommario):
Swarm robots are not yet ready to work in real-world environments in spaces shared with humans. The real world is unpredictable, complex and dynamic, and swarm systems are still unable to adapt to unexpected situations. However, if humans were able to share their experience and knowledge with these systems, swarm robots could be one step closer to working outside the research labs. To achieve this, research must challenge human interaction with more realistic real-world constraints. This thesis presents a series of studies that explore how human operators with limited situational and/or task awareness interact with swarms of robots. It seeks to inform the development of interaction methodologies and interfaces so that they are better adapted to real-world environments. The first study explores how an operator with a bird's-eye perspective can guide a swarm of robots when transporting a large object through an environment with obstacles. To better emulate some restricted real-world environments, in the second study the operator is denied access to the bird's-eye perspective. This restriction limits the operator's situational awareness while they are collaborating with the swarm. Finally, limited task awareness was included as an additional restriction. In this third study, the operator not only has to deal with limited situational awareness but also with limited information regarding the objective. Results show that awareness limitations can have significant negative effects on the operator's performance, yet these effects can be overcome with proper training methods. Across all studies a series of experiments is conducted in which operators interact with swarms of either real or simulated robots. In both cases, the development of the interaction interfaces suggests that careful design can support the operator in the process of overcoming awareness problems.
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Dondrup, Christian. "Human-robot spatial interaction using probabilistic qualitative representations". Thesis, University of Lincoln, 2016. http://eprints.lincoln.ac.uk/28665/.

Testo completo
Abstract (sommario):
Current human-aware navigation approaches use a predominantly metric representation of the interaction, which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing human-populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions online. This work extends this representation to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs the system becomes invariant to change in the environment, which is essential for any form of long-term deployment of a robot, but most importantly also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot, and vice versa, is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification and evaluation of their appropriateness for the task of human-aware navigation. In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described, which serves as an enabling technology to create incrementally updated QTC state chains in real time using the robot's sensors. Using this framework, the abstraction and generalisability of the QTC-based framework is tested on data from a different study for the classification of automatically generated state chains, which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from it. To overcome this issue, so-called Velocity Costmaps are introduced, which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action-selection process based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the model used. Thereby, the generated behaviour is not only sociable but also legible and ensures a high experienced comfort, as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation, but it is facilitated by the qualitative model and allows exploitation of expert knowledge for model generation.
Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction.
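As a flavour of the representation, the basic QTC relation between two agents can be computed from consecutive positions: each agent either moves towards the other (-1), away from it (+1), or holds its distance (0). The minimal Python sketch below covers only this basic variant; the thesis designs extended QTC variants specifically for HRSI.

    import numpy as np

    def qtc_b_state(h_prev, h_now, r_prev, r_now, eps=1e-3):
        # Returns (human relation, robot relation), each in {-1, 0, +1}
        def relation(prev, now, other):
            d_prev = np.linalg.norm(np.asarray(prev, float) - np.asarray(other, float))
            d_now = np.linalg.norm(np.asarray(now, float) - np.asarray(other, float))
            if d_now < d_prev - eps:
                return -1   # moving towards the other agent
            if d_now > d_prev + eps:
                return +1   # moving away from the other agent
            return 0        # quasi-stationary with respect to the other agent
        return (relation(h_prev, h_now, r_prev), relation(r_prev, r_now, h_prev))

    # human approaches a stationary robot -> (-1, 0)
    print(qtc_b_state([0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.0, 0.0]))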
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Mangera, Ra'eesah. "Gesture recognition with application to human-robot interaction". Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/13732.

Testo completo
Abstract (sommario):
Gestures are a natural form of communication, often transcending language barriers. Recently, much research has been focused on achieving natural human-machine interaction using gestures. This dissertation presents the design of a gestural interface that can be used to control a robot. The system consists of two modes: far-mode and near-mode. In far-mode interaction, upper-body gestures are used to control the motion of a robot. Near-mode interaction uses static hand poses to control a graphical user interface. For upper-body gesture recognition, features are extracted from skeletal data. The extracted features consist of joint angles and relative joint positions and are extracted for each frame of the gesture sequence. A novel key-frame selection algorithm is used to align the gesture sequences temporally. A neural network and hidden Markov model are then used to classify the gestures. The framework was tested on three different datasets, the CMU Military dataset of 3 users, 15 gestures and 10 repetitions per gesture, the VisApp2013 dataset with 28 users, 8 gestures and 1 repetition/gesture and a recorded dataset of 15 users, 10 gestures and 3 repetitions per gesture. The system is shown to achieve a recognition rate of 100% across the three different datasets, using the key-frame selection and a neural network for gesture identification. Static hand-gesture recognition is achieved by first retrieving the 24-DOF hand model. The hand is segmented from the image using both depth and colour information. A novel calibration method is then used to automatically obtain the anthropometric measurements of the user’s hand. The k-curvature algorithm, depth-based and parallel border-based methods are used to detect fingertips in the image. An average detection accuracy of 88% is achieved. A neural network and k-means classifier are then used to classify the static hand gestures. The framework was tested on a dataset of 15 users, 12 gestures and 3 repetitions per gesture. A correct classification rate of 75% is achieved using the neural network. It is shown that the proposed system is robust to changes in skin colour and user hand size.
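The per-frame features described (joint angles plus relative joint positions) can be sketched as follows; the skeleton joint names are assumptions about the data format, not the dissertation's actual schema.

    import numpy as np

    def joint_angle(a, b, c):
        # Angle at joint b formed by segments b->a and b->c, in degrees
        v1 = np.asarray(a, float) - np.asarray(b, float)
        v2 = np.asarray(c, float) - np.asarray(b, float)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def frame_features(skel):
        # skel: dict mapping joint names to 3-D positions for one frame
        feats = [
            joint_angle(skel["r_shoulder"], skel["r_elbow"], skel["r_hand"]),
            joint_angle(skel["l_shoulder"], skel["l_elbow"], skel["l_hand"]),
        ]
        for joint in ("r_hand", "l_hand"):
            feats.extend(np.asarray(skel[joint], float) - np.asarray(skel["torso"], float))
        return np.asarray(feats)   # one fixed-length feature vector per frame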
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Chauhan, Aneesh. "Grounding human vocabulary in robot perception through interaction". Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/12841.

Testo completo
Abstract (sommario):
Doctorate in Informatics Engineering
This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant to the assigned tasks. The research focuses on the problem of grounding human vocabulary in the robotic agent's sensorimotor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product created and acquired through social interactions, which emphasizes the role of society as a source of linguistic input. Taking these aspects into account, an experimental scenario is set up where a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. The agent grounds the names of these objects in visual perception. Word learning is an open-ended problem. Therefore, the learning architecture of the agent has to be able to acquire words and categories in an open-ended manner. In this work, four learning architectures were designed that can be used by robotic agents for long-term and open-ended word and category acquisition. The learning methods used in these architectures are designed to scale up incrementally to larger sets of words and categories. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity, which, in turn, depends on its sensors and perceptual capabilities. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. The results indicate that all approaches were able to incrementally acquire new words and categories. Although some of the approaches could not scale up to larger vocabularies, one approach was shown to learn up to 293 categories, with the potential for learning many more.
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Neranon, Paramin. "Human-robot interaction using a behavioural control strategy". Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/2780.

Testo completo
Abstract (sommario):
A topical and important aspect of robotics research is the area of human-robot interaction (HRI), which addresses the issue of cooperation between a human and a robot to allow tasks to be shared in a safe and reliable manner. This thesis focuses on the design and development of an appropriate set of behaviour strategies for human-robot interactive control, by first understanding how an equivalent human-human interaction (HHI) can be used to establish a framework for a robotic behaviour-based approach. To achieve this goal, two preliminary HHI experimental investigations were initiated in this study. The first was designed to evaluate the human dynamic response using a one degree-of-freedom (DOF) HHI rectilinear test, in which the handler passes a compliant object to the receiver along a constrained horizontal path. The human dynamic response while executing the HHI rectilinear task was investigated using a Box-Behnken design of experiments [Box and Hunter, 1957] and was based on the McRuer crossover model [McRuer et al. 1995]. To mimic a real-world human-human object handover task, where the handler is able to pass an object to the receiver in a 3D workspace, a second, more substantive one-DOF HHI baton handover task was developed. The HHI object handover tests were designed to understand the dynamic behavioural characteristics of the human participants, in which the handler was required to dexterously pass an object to the receiver in a timely and natural manner. The profiles of interactive forces between the handler and receiver were measured as a function of time, and how they are modulated while performing the tasks was evaluated. Three key parameters were used to identify the physical characteristics of the human participants: peak interactive force (fmax), transfer time (Ttrf), and work done (W). These variables were subsequently used to design and develop an appropriate set of force and velocity control strategies for a six-DOF Stäubli robot manipulator arm (TX60) working in a human-robot interactive environment. The optimal design of the software and hardware controller implementation for the robot system was successfully established in keeping with a behaviour-based approach. External force control based on proportional plus integral (PI) and fuzzy logic control (FLC) algorithms was adopted to control the robot end-effector velocity and interactive force in real time. The results of interactive experiments with human-to-robot and robot-to-human handover tasks allowed a comparison of the PI and FLC control strategies. It can be concluded that the quantitative performance of the robot velocity and force control can be considered acceptable for human-robot interaction. Both strategies provided effective performance during the robot-human object handover tasks, where the robot was able to successfully pass the object from/to the human in a safe, reliable and timely manner. However, after careful analysis of the human-robot handover test results, the FLC scheme was shown to be superior to PI control, actively compensating for the dynamics of the non-linear system and demonstrating better overall performance and stability. The FLC also shows superior performance in terms of improved sensitivity to small error changes compared to PI control, which is an advantage in establishing effective robot force control.
The survey responses from the participants were in agreement with the parallel test outcomes, demonstrating significant satisfaction with the overall performance of the human-robot interactive system, as measured by an average rating of 4.06 on a five-point scale. In brief, this research has laid the foundations for long-term research, particularly in the development of an interactive real-time robot force control system which enables the robot manipulator arm to cooperate with a human to facilitate the dexterous transfer of objects in a safe and speedy manner.
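The outer-loop external force control described can be pictured as a PI law that maps the force error to an end-effector velocity command. The gains, saturation and sample usage below are illustrative placeholders, not the values tuned for the TX60 in the thesis.

    class PIForceToVelocity:
        def __init__(self, kp=0.002, ki=0.0005, v_max=0.1):
            self.kp, self.ki, self.v_max = kp, ki, v_max
            self.integral = 0.0

        def step(self, f_ref, f_meas, dt):
            # Drive the measured interaction force (N) towards the reference
            error = f_ref - f_meas
            self.integral += error * dt
            v = self.kp * error + self.ki * self.integral
            return max(-self.v_max, min(self.v_max, v))   # saturated velocity, m/s

    ctrl = PIForceToVelocity()
    print(ctrl.step(f_ref=5.0, f_meas=3.2, dt=0.01))

A fuzzy logic controller would replace the linear law in step() with rule-based mappings from the error and its rate of change to velocity, which is where the abstract reports better handling of the system's non-linear dynamics.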
Gli stili APA, Harvard, Vancouver, ISO e altri
39

GARCIA, C. A. C. "Human-Robot Interaction Strategies for Walker-Assisted Locomotion". Universidade Federal do Espírito Santo, 2015. http://repositorio.ufes.br/handle/10/9725.

Testo completo
Abstract (sommario):
Neurological and age-related diseases affect human mobility at different levels, causing partial or total loss of such faculty. There is a significant need to improve safe and efficient ambulation of patients with gait impairments. In this context, walkers present important benefits for human mobility, improving balance and reducing the load on the lower limbs. Most importantly, walkers induce the use of patients' residual mobility capacities in different environments. In the field of robotic technologies for gait assistance, a new category of walkers has emerged, integrating robotic technology, electronics and mechanics. Such devices are known as robotic walkers, intelligent walkers or smart walkers. One of the specific and important aspects common to the field of assistive technologies and rehabilitation robotics is the intrinsic interaction between the human and the robot. In this thesis, the concept of Human-Robot Interaction (HRI) for human locomotion assistance is explored. This interaction is composed of two interdependent components. On the one hand, the key role of a robot in physical HRI (pHRI) is the generation of supplementary forces to empower human locomotion, which involves a net flux of power between both actors. On the other hand, one of the crucial roles of cognitive HRI (cHRI) is to make the human aware of the possibilities of the robot while allowing him to maintain control of the robot at all times. This doctoral thesis presents a new multimodal human-robot interface for testing and validating control strategies applied to a robotic walker for assisting human mobility and gait rehabilitation. This interface extracts navigation intentions from a novel sensor-fusion method that combines: (i) a Laser Range Finder (LRF) sensor to estimate the kinematics of the user's legs, (ii) wearable Inertial Measurement Unit (IMU) sensors to capture the human and robot orientations, and (iii) force sensors to measure the physical interaction between the human's upper limbs and the robotic walker. Two closed control loops were developed to naturally adapt the walker position and to perform body-weight-support strategies. First, a force interaction controller generates velocity outputs for the walker based on the upper-limb physical interaction. Second, an inverse kinematic controller keeps the walker at a desired position relative to the human, improving such interaction. The proposed control strategies are suitable for natural human-robot interaction, as shown during the experimental validation. Moreover, methods for sensor fusion to estimate the control inputs were presented and validated. In the experimental studies, the parameter estimation was precise and unbiased, and showed repeatability when speed changes and continuous turns were performed.
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Rahimi, Nohooji Hamed. "Adaptive Neural Control for Safe Human-Robot Interaction". Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/68285.

Testo completo
Abstract (sommario):
This thesis studies safe human-robot interaction utilizing the neural adaptive control design. First, novel tangent and secant barrier Lyapunov functions are constructed to provide stable position and velocity constrained controls, respectively. Then, neural backpropagation and the concept of the inverse differential Riccati equation are utilized to achieve the impedance adaption control for assistive human-robot interaction, and the optimal robot-environment interaction control, respectively. Finally, adaptive neural assist-as-needed control is developed for assistive robotic rehabilitation.
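For reference, one commonly used tangent-type barrier Lyapunov function (a plausible form of the construction named above; the thesis's exact definition may differ) is

    V(z) = \frac{k_b^{2}}{\pi} \tan\!\left( \frac{\pi z^{2}}{2 k_b^{2}} \right), \qquad |z| < k_b,

which behaves like z^2/2 near the origin (since tan(u) is approximately u for small u) but grows without bound as |z| approaches k_b. Keeping V bounded along closed-loop trajectories therefore keeps the constrained error z, e.g. a position or velocity tracking error, strictly inside (-k_b, k_b).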
Gli stili APA, Harvard, Vancouver, ISO e altri
41

PASQUALI, DARIO. "Social Engineering Defense Solutions Through Human-Robot Interaction". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092333.

Testo completo
Abstract (sommario):
Social Engineering is the science of using social interaction to influence others into taking computer-related actions in the attacker's interest. It is used to steal credentials, money, or people's identities. After being left unchecked for a long time, social engineering is raising increasing concerns. Despite its social nature, state-of-the-art defense systems mainly focus on engineering factors: they detect technical features specific to the medium employed in the attack (e.g., phishing emails), or they train end users to detect them. However, the crucial aspects of social engineering are humans, their vulnerabilities, and how attackers leverage them to gain victims' compliance. Recent solutions involved victims' explicit perception and judgment in technical defenses (the Humans-as-a-Security-Sensor paradigm). However, humans also communicate implicitly: gaze, heart rate, sweating, body posture, and voice prosody are physiological and behavioral cues that implicitly disclose humans' cognitive and emotional state. In the literature, expert social engineers reported continuously monitoring such cues in their victims to adapt their strategy (e.g., in face-to-face attacks); they also stressed the importance of controlling their own cues to avoid revealing malicious intentions. This thesis studies how to leverage such behavioral and physiological cues to defend against social engineering. Moreover, it investigates humanoid social robots, more precisely the iCub and Furhat robotic platforms, as novel agents in the cybersecurity field. Humans' trust in robots and their role are still debated: attackers could hijack and control robots to perform face-to-face attacks from a safe distance. However, this thesis speculates that robots could be helpers, everyday companions able to warn users against social engineering attacks better than traditional notification vectors can. Finally, this thesis explores leveraging game-based, entertaining human-robot interactions to collect more realistic, less biased data. For this purpose, I performed four studies concerning different aspects of social engineering. First, I studied how the trust between attackers and victims evolves and can be exploited. In a Treasure Hunt game, players had to decide whether to trust the hints of iCub. The robot showed four mechanical failures designed to undermine its perceived reliability in the game, and could provide transparent motivations for them. The study showed that players' trust in iCub decreased only if they perceived all the faults or the robot explained them; i.e., they perceived the risk of relying on a faulty robot. Then, I researched novel physiology-based methods to unmask malicious social engineers. In a Magic Trick card game, led autonomously by the iCub robot, players lied or told the truth about gaming-card descriptions. iCub leveraged an end-to-end deception-detection architecture to identify lies based on players' pupil dilation alone. The architecture enables iCub to learn customized deception patterns, improving the classification over prolonged interactions. In the third study, I focused on victims' behavioral and physiological reactions during social engineering attacks, and on how to evaluate their awareness. Participants played an interactive storytelling game designed to challenge them with social engineering attacks from virtual agents and the humanoid robot iCub.
Post hoc, I trained three Random Forest classifiers to detect whether participants perceived the risk and uncertainty of social engineering attacks and to predict their decisions. Finally, I explored how social humanoid robots should intervene to prevent victims' compliance with social engineering. In a refined version of the interactive storytelling game, the Furhat robot contrasted players' decisions with different strategies, changing their minds. Preliminary results suggest the robot effectively affected participants' decisions, motivating further studies toward closing the social engineering defense loop in human-robot interaction. Summing up, this thesis provides evidence that humans' implicit cues and social robots can help against social engineering; it offers practical defensive solutions and architectures supporting further research in the field, and discusses them with concrete applications in mind.
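A bare-bones stand-in for the third study's classifiers, using scikit-learn on synthetic data; the three pupil features named in the comments are assumed choices, and the labels are toy values, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))        # e.g. mean dilation, peak dilation, dilation slope per trial
    y = rng.integers(0, 2, size=200)     # 1 = risk perceived, 0 = not (toy labels)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data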
Gli stili APA, Harvard, Vancouver, ISO e altri
42

ROSELLI, CECILIA. "Vicarious Sense of Agency in Human-Robot Interaction". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070568.

Testo completo
Abstract (sommario):
Sense of Agency (SoA) is the feeling of having control over one's actions and their outcomes. In humans' daily life, SoA shapes whether, and how, people feel responsible for their actions, which has profound implications for the organization of human societies. Thus, SoA has received considerable attention in psychology and cognitive neuroscience, which have tried to identify the cognitive mechanisms underlying the emergence of the individual experience of agency. However, humans are inherently social animals, deeply immersed in social contexts with others. Thus, investigations of SoA cannot be limited to understanding the individual experience of agency, as SoA also affects the way people experience others' actions: this is how SoA becomes "vicarious". Humans can experience vicarious SoA over another human's actions and outcomes; however, the mechanisms underlying the emergence of vicarious SoA are still under debate. In this context, focusing on artificial agents may help shed light on the vicarious SoA phenomenon. Specifically, robots are an emerging category of artificial agents designed to assist humans in a variety of tasks, from elderly care to rescue missions. The present Ph.D. thesis aimed at investigating whether, and under which conditions, robots elicit vicarious SoA in humans in the context of Human-Robot Interaction (HRI). Moreover, we aimed at assessing whether vicarious SoA may serve as an implicit measure of intentionality attribution towards robots. The link between vicarious SoA and intentionality attribution was based on the idea that, in some contexts, humans can perceive robots as intentional agents, and this may "boost" the "vicarious" control that they experience over the robot's actions and outcomes, as happens with other humans. In three studies, we employed the Intentional Binding (IB) paradigm as a reliable implicit measure of SoA. Participants performed an IB task with different types of robots varying in their degree of anthropomorphic features and human-like shape (i.e., the Cozmo robot and the iCub robot). Specifically, our goal was to assess whether the emergence of vicarious SoA in humans is modulated by (1) the possibility of representing the robot's actions using one's own motor schemes, (2) the attribution of intentionality towards robots, and (3) the human-like shape of the robot. Our results suggest that the interplay of these three factors modulates the emergence of vicarious SoA in HRI. In conclusion, the findings collected in the present thesis contribute to research on the vicarious SoA phenomenon in HRI, providing useful hints for designing robots well tailored to humans' attitudes and needs.
Gli stili APA, Harvard, Vancouver, ISO e altri
43

LANZA, Francesco. "Human-Robot Teaming Interaction: a Cognitive Architecture Solution". Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/479089.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Beer, Jenay M. "Understanding older adults' perceptions of usefulness of an assistive home robot". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50404.

Testo completo
Abstract (sommario):
Developing robots that are useful to older adults is more than simply creating robots that complete household tasks. To ensure that older adults perceive a robot to be useful, careful consideration of the users' capabilities, robot autonomy, and task is needed (Venkatesh & Davis, 2000). The purpose of this study was to investigate the construct of perceived usefulness within the context of robot assistance. Mobile older adults (N = 12) and older adults with mobility loss (N = 12) participated in an autonomy-selection think-aloud task and a persona-based interview. Findings suggest that older adults with mobility loss preferred an autonomy level where they command/control the robot themselves. Mobile older adults' preferences were split between commanding/controlling the robot themselves and letting the robot command/control itself. The reasons for their preferences were related to decision making and were task specific. Additionally, findings from the persona-based interview study support Technology Acceptance Model (TAM) constructs, as well as adaptability, reliability, and trust, as positively correlated with perceptions of usefulness. However, despite the positive correlation, barriers and facilitators of acceptance identified in the interview suggest that perceived-usefulness judgments are complex, and some questionnaire constructs were interpreted differently between participants. Thus, care should be taken when applying TAM constructs to other domains, such as robot assistance to promote older adults' independence.
APA, Harvard, Vancouver, ISO and other styles
45

Devin, Sandra. "Decisional issues during human-robot joint action". PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19921/1/DEVIN_Sandra.pdf.

Full text
Abstract (summary):
In the future, robots will become our companions and co-workers. They will gradually appear in our environment, to help elderly or disabled people or to perform repetitive or unsafe tasks. However, we are still far from a truly autonomous robot that can act in a natural, efficient and safe manner with humans. To endow robots with the capacity to act naturally with humans, it is important to first study how humans act together. Consequently, this manuscript starts with a state of the art on joint action in psychology and philosophy before presenting how the principles gained from this study were applied to human-robot joint action. We then describe the supervision module for human-robot interaction developed during the thesis. Part of the work presented in this manuscript concerns the management of what we call a shared plan: a partially ordered set of actions to be performed by humans and/or the robot in order to achieve a given goal. First, we present how the robot estimates its human partners' beliefs concerning the shared plan (called mental states) and how it takes these mental states into account during shared plan execution. This allows it to communicate in a pertinent way about potential divergences between the robot's and the humans' knowledge. Second, we present the abstraction of shared plans and the postponing of some decisions. In previous work, the robot took all decisions at planning time (who should perform which action, which object to use…), which could be perceived as unnatural by the human during execution, as it imposes one solution in preference to any other. This work endows the robot with the capacity to identify which decisions can be postponed to execution time and to take the right decision according to the human's behavior, in order to obtain fluent and natural robot behavior. The complete shared-plan management system has been evaluated in simulation and with real robots in the context of a user study. Thereafter, we present our work on the non-verbal communication needed for human-robot joint action, focused on how to manage the robot's head, which transmits information about the robot's activity and what it understands of the human's actions, as well as coordination signals. Finally, we present how to mix planning and learning in order to make the robot more efficient in its decision process. The idea, inspired by neuroscience studies, is to limit the use of planning (which is adapted to the human-aware context but costly) by letting the learning module make the choices when the robot is in a "known" situation. The first results obtained demonstrate the potential interest of the proposed solution.
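To make the notion of a shared plan with postponed decisions concrete, here is a minimal Python sketch of a partially ordered action set with late agent binding. All names (Action, SharedPlan, assign_at_execution) are hypothetical illustrations, not the thesis's actual supervision module.

```python
# A shared plan as a partially ordered set of actions; the acting agent is
# left undecided (None) until execution time, mirroring postponed decisions.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    agent: str | None = None                      # None = decision postponed
    after: set[str] = field(default_factory=set)  # partial-order constraints

@dataclass
class SharedPlan:
    actions: dict[str, Action]
    done: set[str] = field(default_factory=set)

    def ready_actions(self):
        """Actions whose predecessors have all been completed."""
        return [a for a in self.actions.values()
                if a.name not in self.done and a.after <= self.done]

    def assign_at_execution(self, name: str, agent: str):
        """Late binding: choose the actor only once the action is ready,
        e.g. according to the observed human behavior."""
        self.actions[name].agent = agent

plan = SharedPlan(actions={
    "pick_cup": Action("pick_cup"),
    "place_cup": Action("place_cup", after={"pick_cup"}),
})
plan.assign_at_execution("pick_cup", "human")  # decided at run time
```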
APA, Harvard, Vancouver, ISO and other styles
46

Burke, Michael Glen. "Fast upper body pose estimation for human-robot interaction". Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/256305.

Full text
Abstract (summary):
This work describes an upper-body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced-dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating biomechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper-body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple-user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time-consuming, and the codebook of gestures may not be particularly intuitive. This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
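To illustrate the mean-reverting behaviour at the core of such a model, here is a minimal Python sketch of one Ornstein-Uhlenbeck step on a single latent pose coordinate. The parameter values are assumptions; the thesis uses a mixture of such processes with Rao-Blackwellised inference, which this toy update does not implement.

```python
# One Euler-Maruyama step of dx = theta*(mean - x)*dt + sigma*dW:
# the state is pulled back toward a commonly observed pose while being
# perturbed by Gaussian noise.
import random

def ou_step(x, mean, theta=0.5, sigma=0.1, dt=1 / 30):
    return x + theta * (mean - x) * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)

pose = 0.8           # 1-D latent pose coordinate
mean_pose = 0.0      # pose the random walk reverts to
for _ in range(30):  # one second of tracking at 30 fps
    pose = ou_step(pose, mean_pose)
```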
APA, Harvard, Vancouver, ISO and other styles
47

Puehn, Christian G. "Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction". Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427889195.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Lasota, Przemyslaw A. (Przemyslaw Andrzej). "Robust human motion prediction for safe and efficient human-robot interaction". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122497.

Full text
Abstract (summary):
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is the use of human motion prediction. By predicting where a person might reach or walk in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains. It is also possible that no single predictor is suitable for a given scenario, and that multiple predictors are needed. Due to these drawbacks, it is difficult to deploy prediction on real robotic systems without expert knowledge in the field of human motion prediction. Another key limitation of current approaches lies in deficiencies in partial trajectory alignment. Aligning partially executed motions to a representative trajectory is a key enabling technology for many goal-based prediction methods. Current approaches to partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories: because they rely on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.
In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction. First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. Using three distinct human motion datasets, I show that the MPS yields lower prediction error in a variety of HRI scenarios and allows for accurate prediction over a range of time horizons. Second, to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA), a Bayesian estimation framework that combines optimization, supervised learning, and unsupervised learning components trained and synthesized from a given set of example trajectories. Through an evaluation on three human motion datasets, I show that BEST-PTA reduces alignment error compared to state-of-the-art baselines, and that this improved alignment reduces human motion prediction error. Lastly, to assess the utility of the developed methods for improving safety and efficiency in HRI, I introduce an integrated framework combining prediction with robot planning in time, and describe its implementation and evaluation on a real physical system. Through this demonstration, I show that the developed approach leads to automatically derived adaptive robot behavior and, in a simulated evaluation, to improvements in quantitative metrics of safety and efficiency.
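To illustrate the kind of Euclidean-distance partial alignment the thesis identifies as a weak baseline, here is a minimal open-ended dynamic-time-warping sketch in Python. The function name dtw_partial is hypothetical, and BEST-PTA itself is not reproduced; the sketch only shows why a purely Euclidean cost struggles with overlapping regions and stops.

```python
# Align an executed prefix against a representative trajectory, allowing the
# alignment to end anywhere along the reference (open-ended DTW).
import math

def dtw_partial(prefix, reference):
    """Cost of aligning `prefix` to some prefix of `reference`."""
    n, m = len(prefix), len(reference)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(prefix[i - 1], reference[j - 1])  # Euclidean
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    end = min(range(1, m + 1), key=lambda j: D[n][j])  # best open endpoint
    return D[n][end], end

# A stopped motion repeats the same point, inflating the Euclidean cost even
# though the person has simply paused:
cost, end_idx = dtw_partial([(0, 0), (0, 0), (0, 0)],
                            [(0, 0), (0.1, 0), (0.2, 0)])
```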
"Funded by the NASA Space Technology Research Fellowship Program and the National Science Foundation"--Page 6
by Przemyslaw A. Lasota.
Ph. D. in Autonomous Systems
Ph.D.inAutonomousSystems Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO and other styles
49

DE, PACE FRANCESCO. "Natural and multimodal interfaces for human-machine and human-robot interaction". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2918004.

Full text
APA, Harvard, Vancouver, ISO and other styles
50

Ashcraft, C. Chace. "Moderating Influence as a Design Principle for Human-Swarm Interaction". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7406.

Full text
Abstract (summary):
Robot swarms have recently become of interest in both industry and academia for their potential to perform difficult or dangerous tasks efficiently. As real robot swarms become more of a possibility, many desire swarms that can be controlled or directed by a human, which raises questions about how that should be done. Part of the challenge of human-swarm interaction is the difficulty of understanding swarm state and of driving the swarm to produce emergent behaviors. Human input can inhibit desirable swarm behaviors if that input is poor and has sufficient influence over the swarm agents, degrading overall performance. Thus, with too little influence, human input is useless, but with too much, it can be destructive. We suggest that there is some middle level, or interval, of human influence that allows the swarm to take advantage of useful human input while minimizing the effect of destructive input. Further, we propose that human-swarm interaction schemes can be designed to maintain an appropriate level of human influence over the swarm and thereby maintain or improve swarm performance in the presence of both useful and destructive human input. We test this theory by implementing software that dynamically moderates influence and evaluating it with a simulated honey bee colony performing nest site selection, using both simulated human input and actual human input via a user study. The results suggest that moderating influence in this way is important for maintaining high performance in the presence of both useful and destructive human input. However, while our software seems to moderate influence successfully with simulated human input, it fails to do so with actual human input.
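To illustrate the idea of keeping human influence within an interval, here is a minimal Python sketch of a moderated opinion update for one swarm agent. The weighting scheme, bounds, and names (moderated_update) are assumptions for illustration, not the study's actual software.

```python
# Each agent blends its own preference, its neighbours' consensus, and the
# human's input; the human's weight is clamped to [w_min, w_max] so input
# can help but cannot dominate the swarm.

def moderated_update(agent_pref, neighbour_prefs, human_pref, human_weight,
                     w_min=0.05, w_max=0.3):
    w = max(w_min, min(w_max, human_weight))   # moderate the influence
    social = sum(neighbour_prefs) / len(neighbour_prefs)
    return (1 - w) * (0.5 * agent_pref + 0.5 * social) + w * human_pref

# One agent's preference for a candidate nest site, nudged by human input:
new_pref = moderated_update(agent_pref=0.4,
                            neighbour_prefs=[0.6, 0.5, 0.7],
                            human_pref=0.1,    # possibly destructive input
                            human_weight=0.9)  # clamped down to 0.3
```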
Gli stili APA, Harvard, Vancouver, ISO e altri
