Academic literature on the topic "Human-robot interaction"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Human-robot interaction".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Human-robot interaction"

1

Takamatsu, Jun. "Human-Robot Interaction". Journal of the Robotics Society of Japan 37, no. 4 (2019): 293–96. http://dx.doi.org/10.7210/jrsj.37.293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Yunyi, Biao Zhang, Miao Li, Brady King, and Ali Meghdari. "Human-Robot Interaction". Journal of Robotics 2018 (October 1, 2018): 1–2. http://dx.doi.org/10.1155/2018/3879547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Murphy, Robin, Tatsuya Nomura, Aude Billard, and Jennifer Burke. "Human–Robot Interaction". IEEE Robotics & Automation Magazine 17, no. 2 (June 2010): 85–89. http://dx.doi.org/10.1109/mra.2010.936953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sethumadhavan, Arathi. "Human-Robot Interaction". Ergonomics in Design: The Quarterly of Human Factors Applications 20, no. 3 (July 2012): 27–28. http://dx.doi.org/10.1177/1064804612449796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sheridan, Thomas B. "Human–Robot Interaction". Human Factors: The Journal of the Human Factors and Ergonomics Society 58, no. 4 (April 20, 2016): 525–32. http://dx.doi.org/10.1177/0018720816644364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jones, Keith S., and Elizabeth A. Schmidlin. "Human-Robot Interaction". Reviews of Human Factors and Ergonomics 7, no. 1 (August 25, 2011): 100–148. http://dx.doi.org/10.1177/1557234x11410388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thomaz, Andrea, Guy Hoffman, and Maya Cakmak. "Computational Human-Robot Interaction". Foundations and Trends in Robotics 4, no. 2–3 (2016): 104–223. http://dx.doi.org/10.1561/2300000049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Karniel, Amir, Angelika Peer, Opher Donchin, Ferdinando A. Mussa-Ivaldi, and Gerald E. Loeb. "Haptic Human-Robot Interaction". IEEE Transactions on Haptics 5, no. 3 (2012): 193–95. http://dx.doi.org/10.1109/toh.2012.47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pook, Polly K., and Dana H. Ballard. "Deictic human/robot interaction". Robotics and Autonomous Systems 18, no. 1–2 (July 1996): 259–69. http://dx.doi.org/10.1016/0921-8890(95)00080-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Young, James E., JaYoung Sung, Amy Voida, Ehud Sharlin, Takeo Igarashi, Henrik I. Christensen, and Rebecca E. Grinter. "Evaluating Human-Robot Interaction". International Journal of Social Robotics 3, no. 1 (October 1, 2010): 53–67. http://dx.doi.org/10.1007/s12369-010-0081-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Human-robot interaction"

1

Kruse, Thibault. "Planning for human robot interaction". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.

Full text
Abstract
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established field of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria. It must be safe, comfortable to humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in cases of dynamic human-robot interaction, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not to merely consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, this thesis provides the approach of local replanning of transport actions based on navigational costs, to provide opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
APA, Harvard, Vancouver, ISO, and other styles
2

Bodiroža, Saša. "Gestures in human-robot interaction". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.

Full text
Abstract
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be effectively used in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer a meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which considers classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
APA, Harvard, Vancouver, ISO, and other styles
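The abstract above names one-shot gesture recognition with dynamic time warping (DTW). As an illustrative sketch only (the feature sequences below are invented 1-D examples; a real system like the one described would use multi-dimensional hand or joint trajectories), the core idea of classifying a gesture against a single stored template per class looks like this:

```python
# Sketch of one-shot gesture classification via dynamic time warping (DTW).
# Templates and observations here are hypothetical 1-D feature sequences.

def dtw_distance(a, b):
    """Classic DTW cost between two 1-D sequences (no window constraint)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sequence, templates):
    """Nearest-template classification: one training sample per class."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

# One template per gesture class -- this is the "one-shot" part.
templates = {
    "wave":  [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0],
    "point": [0.0, 0.5, 1.0, 1.0, 1.0],
}
observed = [0.1, 0.9, 0.1, -0.8, 0.1, 1.1, 0.0]   # noisy, time-warped "wave"
print(classify(observed, templates))               # → wave
```

DTW's elastic alignment is what lets a single template absorb timing variation across performers, which is why it pairs naturally with one-shot learning.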
3

Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.

Full text
Abstract
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinder communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
APA, Harvard, Vancouver, ISO, and other styles
4

Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming". Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.

Full text
Abstract
Over the past few decades the use of industrial robots has increased the efficiency as well as competitiveness of many companies. Despite this fact, in many cases, robot automation investments are considered to be technically challenging. In addition, for most small and medium sized enterprises (SME) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, thus assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague.

In this thesis we mainly address two issues. The first issue is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate-based programming paradigm, which currently dominates the field, to an object-based programming scheme.

The second issue addressed is a general framework for implementing multimodal interfaces. There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline, and includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
APA, Harvard, Vancouver, ISO, and other styles
5

Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping". Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Chien-Ming. "Joint attention in human-robot interaction". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Full text
Abstract
Joint attention, a crucial component in interaction and an important milestone in human development, has drawn a lot of attention from the robotics community recently. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responders have changed their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior, that is, looking back and forth between the caregiver and the referred object to see if the caregiver is paying attention to the referential object. We conducted two experiments to investigate joint attention in human-robot interaction.

The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots responding to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating a better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, results from questionnaires, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
APA, Harvard, Vancouver, ISO, and other styles
7

Bremner, Paul. "Conversational gestures in human-robot interaction". Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.

Full text
Abstract
Humanoid service robotics is a rapidly developing field of research. One desired purpose of such service robots is for them to be able to interact and cooperate with people. In order for them to do so successfully they need to be able to communicate effectively. One way of achieving this is for humanoid robots to communicate in a human-like way, resulting in easier, more familiar and ultimately more successful human-robot interaction. An integral part of human communication is co-verbal gesture; thus, investigation into a means of their production and whether they engender the desired effects is undertaken in this thesis. In order for gestures to be produced using BERTI (Bristol and Elumotion Robotic Torso I), the robot designed and built for this work, a means of coordinating the joints to produce the required hand motions was necessary. A relatively simple method for doing so is proposed which produces motion that shares characteristics with proposed mathematical models of human arm movements, i.e., smooth and direct motion. It was then investigated whether, as hypothesised, gestures produced using this method were recognisable and positively perceived by users. A series of user studies showed that the gestures were indeed as recognisable as their human counterparts, and positively perceived. In order to enable users to form more confident opinions of the gestures, investigate whether improvements in human-likeness would affect user perceptions, and enable investigation into the effects of robotic gestures on listener behaviour, methods for producing gesture sequences were developed. Sufficient procedural information for gesture production was not present in the anthropological literature, so empirical evidence was sought from monologue performances. This resulted in a novel set of rules for the production of beat gestures (a key type of co-verbal gesture), as well as some other important procedural methods; these were used to produce a two-minute monologue with accompanying gestures. A user study carried out using this monologue reinforced the previous finding that positively perceived gestures were produced. It also showed that gesture sequences using beat gestures generated with the rules were not significantly preferable to those containing only naively selected pre-scripted beat gestures. This demonstrated that minor improvements in human-likeness offered no significant benefit in user perception. Gestures have been shown to have positive effects on listener engagement and memory (of the accompanied speech) in anthropological studies. In this thesis the hypothesis that similar effects would be observed when BERTI performed co-verbal gestures was investigated. It was found that there was a highly significant improvement in user engagement, as well as a significant improvement in certainty of data recalled. Thus, some of the expected effects of co-verbal gesture were observed.
APA, Harvard, Vancouver, ISO, and other styles
8

Fiore, Michelangelo. "Decision Making in Human-Robot Interaction". Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.

Full text
Abstract
There has been increasing interest, in recent years, in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for Human-Robot Interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After quickly giving an overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems.

In the observer problem the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal.

In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand if they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the robot's partner activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables, such as the human's commitment to the task, and to react accordingly, splitting the decision process into different levels. We present an example of a Collaborative Planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component.

In the robot teacher problem we explain how we can adapt the plan explanation and monitoring of the system to the users' knowledge of the task to perform. Using this idea, the robot will explain in less detail tasks that the user has already performed several times, going more in depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity.

Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help. This system was integrated with other components to deploy a robot in the Schiphol Airport of Amsterdam, to guide groups of passengers to their flight gates. We performed user studies both in a laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

Alanenpää, Madelene. "Gaze detection in human-robot interaction". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Texto completo
Resumen
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person-view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above in the first-person video stream and to discriminate whether the person's gaze fell on one of the objects of importance, and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens, and the Furhat robot. The trained program was used to extract the relevant objects in each frame of the eye-tracking video, and a parser was used to discriminate whether the participant's gaze fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
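The parsing step described in the abstract — deciding which detected object the gaze point falls on, and accumulating per-object fixation time — can be sketched as below. The function names, the (x1, y1, x2, y2) box format, and the frame rate are assumptions for illustration, not the thesis's actual implementation.

```python
def gaze_target(gaze_xy, detections):
    """Return the label of the first detected bounding box containing
    the gaze point, or None if the gaze falls on no detected object.

    detections: list of (label, (x1, y1, x2, y2)) from an object detector.
    """
    x, y = gaze_xy
    for label, (x1, y1, x2, y2) in detections:
        if x1 <= x <= x2 and y1 <= y <= y2:
            return label
    return None


def fixation_durations(frames, fps=25.0):
    """Accumulate per-object gaze time in seconds over a frame sequence.

    frames: list of (gaze_xy, detections) pairs, one per video frame.
    """
    totals = {}
    for gaze_xy, detections in frames:
        label = gaze_target(gaze_xy, detections)
        if label is not None:
            totals[label] = totals.get(label, 0.0) + 1.0 / fps
    return totals
```

In the thesis, the detections would come from the trained YOLOv4 model and the gaze points from the eye-tracking glasses' annotations; here both are plain data for clarity.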
Los estilos APA, Harvard, Vancouver, ISO, etc.
10

Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer". Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.

Texto completo
Resumen
Master's degree in Mechanical Engineering (Mestrado em Engenharia Mecânica)
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution for a simple case of physical human-robot interaction: an object transfer task. Firstly, this work presents a review of current research within the field of human-robot interaction, where two approaches are distinguished but simultaneously required: a pre-contact approximation and an interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses three major problems: (1) the robot control needed to perform the movements inherent to the transfer assignment, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper's actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements, and extensions are required to improve the solution's performance and range.
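The contact phase described above — opening the gripper once the force/tactile sensing indicates the human has taken hold of the object — can be sketched as a simple debounced threshold test. The threshold value, window size, and function name are illustrative assumptions; the thesis's actual ROS-based controller is not reproduced here.

```python
def should_release(force_history, threshold=2.0, window=5):
    """Decide whether to open the gripper during a handover.

    Release only when the sensed pull force (in newtons, from a
    force/tactile sensor) stays above `threshold` for `window`
    consecutive samples, so that a single noisy reading does not
    drop the object prematurely.
    """
    recent = force_history[-window:]
    return len(recent) == window and all(f > threshold for f in recent)
```

In a ROS system this predicate would sit in the sensor callback loop, with the release action published to the gripper controller once it returns true.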
Los estilos APA, Harvard, Vancouver, ISO, etc.

Libros sobre el tema "Human-robot interaction"

1

Jost, Céline, Brigitte Le Pévédic, Tony Belpaeme, Cindy Bethel, Dimitrios Chrysostomou, Nigel Crook, Marine Grandgeorge y Nicole Mirnig, eds. Human-Robot Interaction. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42307-0.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
2

Rahimi, Mansour y Waldemar Karwowski, eds. Human-robot interaction. London: Taylor & Francis, 1992.

Buscar texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
3

Prassler, Erwin, Gisbert Lawitzky, Andreas Stopp, Gerhard Grunwald, Martin Hägele, Rüdiger Dillmann y Ioannis Iossifidis, eds. Advances in Human-Robot Interaction. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b97960.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
4

Goodrich, Michael A. Human-robot interaction: A survey. Hanover: Now Publishers, 2007.

Buscar texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
5

Xing, Bo y Tshilidzi Marwala. Smart Maintenance for Human–Robot Interaction. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-67480-3.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
6

Ayanoğlu, Hande y Emília Duarte, eds. Emotional Design in Human-Robot Interaction. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-96722-6.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
7

Dautenhahn, Kerstin y Joe Saunders, eds. New Frontiers in Human–Robot Interaction. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/ais.2.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
8

Wang, Xiangyu, ed. Mixed Reality and Human-Robot Interaction. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-0582-1.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

Wang, Xiangyu. Mixed Reality and Human-Robot Interaction. Dordrecht: Springer Science+Business Media B.V., 2011.

Buscar texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
10

Shiomi, Masahiro y Hidenobu Sumioka. Social Touch in Human–Robot Interaction. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003384274.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.

Capítulos de libros sobre el tema "Human-robot interaction"

1

Sidobre, Daniel, Xavier Broquère, Jim Mainprice, Ernesto Burattini, Alberto Finzi, Silvia Rossi y Mariacarla Staffa. "Human–Robot Interaction". En Springer Tracts in Advanced Robotics, 123–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29041-1_3.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
2

Billard, Aude y Daniel Grollman. "Human-Robot Interaction". En Encyclopedia of the Sciences of Learning, 1474–76. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_760.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
3

Ohnishi, Kouhei. "Human–Robot Interaction". En Mechatronics and Robotics, 255–64. Boca Raton: CRC Press, 2020. http://dx.doi.org/10.1201/9780429347474-12.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
4

Ayanoğlu, Hande y João S. Sequeira. "Human-Robot Interaction". En Human–Computer Interaction Series, 39–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-96722-6_3.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
5

Feil-Seifer, David y Maja J. Matarić. "Human-Robot Interaction". En Encyclopedia of Complexity and Systems Science, 1–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-27737-5_274-5.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
6

Edwards, Autumn. "Human–Robot Interaction". En The Sage Handbook of Human–Machine Communication, 167–77. London: SAGE Publications Ltd, 2023. http://dx.doi.org/10.4135/9781529782783.n21.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
7

Esterwood, Connor, Qiaoning Zhang, X. Jessie Yang y Lionel P. Robert. "Human–Robot Interaction". En Human-Computer Interaction in Intelligent Environments, 305–32. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003490685-10.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
8

Pirlet, André. "The Role of Standardization in Technical Regulations". En Human–Robot Interaction, 1–8. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-1.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

Takács, Árpád, Imre J. Rudas y Tamás Haidegger. "The Other End of Human–Robot Interaction". En Human–Robot Interaction, 137–70. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-10.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
10

Lőrincz, Márton. "Passive Bilateral Teleoperation with Safety Considerations". En Human–Robot Interaction, 171–86. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-11.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.

Actas de conferencias sobre el tema "Human-robot interaction"

1

Billings, Deborah R., Kristin E. Schaefer, Jessie Y. C. Chen y Peter A. Hancock. "Human-robot interaction". En the seventh annual ACM/IEEE international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2157689.2157709.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
2

"Human robot interaction". En 2016 9th International Conference on Human System Interactions (HSI). IEEE, 2016. http://dx.doi.org/10.1109/hsi.2016.7529627.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
3

St-Onge, David, Nicolas Reeves y Nataliya Petkova. "Robot-Human Interaction". En HRI '17: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3029798.3034785.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
4

Han, Ji, Gopika Ajaykumar, Ze Li y Chien-Ming Huang. "Structuring Human-Robot Interactions via Interaction Conventions". En 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2020. http://dx.doi.org/10.1109/ro-man47096.2020.9223468.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
5

"Human, Robot and Interaction". En 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS). IEEE, 2019. http://dx.doi.org/10.1109/icphys.2019.8780335.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
6

Sandygulova, Anara, Abraham G. Campbell, Mauro Dragone y G. M. P. O'Hare. "Immersive human-robot interaction". En the seventh annual ACM/IEEE international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2157689.2157768.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
7

Budkov, V. Yu, M. V. Prischepa, A. L. Ronzhin y A. A. Karpov. "Multimodal human-robot interaction". En 2010 International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT 2010). IEEE, 2010. http://dx.doi.org/10.1109/icumt.2010.5676593.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
8

Scimeca, Luca, Fumiya Iida, Perla Maiolino y Thrishantha Nanayakkara. "Human-Robot Medical Interaction". En HRI '20: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3371382.3374847.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

"Physical Human-Robot Interaction". En 2019 IEEE International Conference on Mechatronics (ICM). IEEE, 2019. http://dx.doi.org/10.1109/icmech.2019.8722848.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
10

Bartneck, Christoph y Jodi Forlizzi. "Shaping human-robot interaction". En Extended abstracts of the 2004 conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/985921.986205.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.

Informes sobre el tema "Human-robot interaction"

1

Arkin, Ronald C. y Lilia Moshkina. Affect in Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, enero de 2014. http://dx.doi.org/10.21236/ada593747.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
2

Martinson, E. y W. Lawson. Learning Speaker Recognition Models through Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, mayo de 2011. http://dx.doi.org/10.21236/ada550036.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
3

Manring, Levi H., John Monroe Pederson y Dillon Gabriel Potts. Improving Human-Robot Interaction and Control Through Augmented Reality. Office of Scientific and Technical Information (OSTI), agosto de 2018. http://dx.doi.org/10.2172/1467198.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
4

Jiang, Shu y Ronald C. Arkin. Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey. Fort Belvoir, VA: Defense Technical Information Center, enero de 2015. http://dx.doi.org/10.21236/ada620347.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
5

Scholtz, Jean, Jeff Young, Holly A. Yanco y Jill L. Drury. Evaluation of Human-Robot Interaction Awareness in Search and Rescue. Fort Belvoir, VA: Defense Technical Information Center, enero de 2006. http://dx.doi.org/10.21236/ada456128.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
6

Bagchi, Shelly, Murat Aksu, Megan Zimmerman, Jeremy A. Marvel, Brian Antonishek, Heni Ben Amor, Terry Fong, Ross Mead y Yue Wang. Workshop Report: Test Methods and Metrics for Effective HRI in Collaborative Human-Robot Teams, ACM/IEEE Human-Robot Interaction Conference, 2019. National Institute of Standards and Technology, diciembre de 2020. http://dx.doi.org/10.6028/nist.ir.8339.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
7

Bagchi, Shelly, Jeremy A. Marvel, Megan Zimmerman, Murat Aksu, Brian Antonishek, Heni Ben Amor, Terry Fong, Ross Mead y Yue Wang. Workshop Report: Test Methods and Metrics for Effective HRI in Real-World Human-Robot Teams, ACM/IEEE Human-Robot Interaction Conference, 2020 (Virtual). National Institute of Standards and Technology, enero de 2021. http://dx.doi.org/10.6028/nist.ir.8345.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
8

Schaefer, Kristin E., Deborah R. Billings, James L. Szalma, Jeffrey K. Adams, Tracy L. Sanders, Jessie Y. Chen y Peter A. Hancock. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, julio de 2014. http://dx.doi.org/10.21236/ada607926.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

Bagchi, Shelly, Jeremy A. Marvel, Megan Zimmerman, Murat Aksu, Brian Antonishek, Xiang Li, Heni Ben Amor, Terry Fong, Ross Mead y Yue Wang. Workshop Report: Novel and Emerging Test Methods and Metrics for Effective HRI, ACM/IEEE Conference on Human-Robot Interaction, 2021. National Institute of Standards and Technology, febrero de 2022. http://dx.doi.org/10.6028/nist.ir.8417.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
10

Breazeal, Cynthia y Brian Scassellati. Infant-Like Social Interactions Between a Robot and a Human Caregiver. Fort Belvoir, VA: Defense Technical Information Center, enero de 2006. http://dx.doi.org/10.21236/ada450357.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
