Dissertations / Theses on the topic 'Human-robot interaction'
Kruse, Thibault. "Planning for human robot interaction." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria: it must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not merely to consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, it provides local replanning of transport actions based on navigational costs, to produce opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
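The directional costs are described only at a high level here; as a loose illustration, a navigation cost of this flavor can penalize positions near a person and, more strongly, positions directly ahead of them. The Python sketch below is an assumption-laden toy (the function name, weights and exact form are invented for illustration, not the thesis's formulation):

```python
import numpy as np

def directional_cost(robot_xy, human_xy, human_heading, w_prox=2.0, w_front=1.5):
    """Toy directional navigation cost: high near a person, highest directly
    in front of them. Illustrative only; not the thesis's actual cost."""
    robot_xy = np.asarray(robot_xy, float)
    human_xy = np.asarray(human_xy, float)
    diff = robot_xy - human_xy
    d = np.linalg.norm(diff)
    if d < 1e-9:
        return float("inf")
    bearing = np.arctan2(diff[1], diff[0])      # direction human -> robot
    ahead = np.cos(bearing - human_heading)     # 1.0 when robot is dead ahead
    return w_prox * np.exp(-d) * (1.0 + w_front * max(0.0, ahead))

# A grid planner would add this cost to each candidate cell and replan as the
# person moves, so the robot keeps clearing the person's path.
print(directional_cost((1.0, 0.0), (0.0, 0.0), 0.0))   # in front: high
print(directional_cost((-1.0, 0.0), (0.0, 0.0), 0.0))  # behind: lower
```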
Bodiroža, Saša. "Gestures in human-robot interaction." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
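The one-shot, DTW-based recognizer described above can be pictured as nearest-template classification with a single stored example per gesture class. A minimal Python sketch of that idea (not the thesis's implementation; the template data are hypothetical):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two (T, D) trajectories."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(query, templates):
    """One-shot recognition: one stored template per gesture class."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical 2D hand trajectories, one template per class.
templates = {"wave": [[0, 0], [1, 1], [0, 2], [1, 3]],
             "point": [[0, 0], [1, 0], [2, 0], [3, 0]]}
print(classify_gesture([[0, 0], [1.1, 0.1], [2.2, 0.0]], templates))  # -> "point"
```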
Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be overcome before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
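As a rough picture of how redundant modalities can reinforce each other, the sketch below fuses per-modality intent distributions with a bare product rule; the closed-loop, knowledge-based conflict handling that the thesis actually contributes is not modeled here, and all names and numbers are illustrative:

```python
import numpy as np

def fuse_beliefs(modality_beliefs, eps=1e-9):
    """Product-rule fusion of per-modality posteriors over candidate intents.

    A bare-bones stand-in for the thesis's knowledge-based fusion: each
    modality votes with a distribution, so redundant evidence reinforces
    while conflicting evidence is damped rather than discarded.
    """
    fused = np.ones(len(modality_beliefs[0]))
    for belief in modality_beliefs:
        fused *= np.asarray(belief, float) + eps
    return fused / fused.sum()

# Hypothetical distributions over three intents from two noisy channels.
gesture_belief = [0.30, 0.60, 0.10]   # hand gesture recognizer
face_belief    = [0.25, 0.55, 0.20]   # facial expression recognizer
print(fuse_beliefs([gesture_belief, face_belief]))   # intent 1 wins
```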
Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming." Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.
Robot colleague project
Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping." Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.
Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.
Bremner, Paul. "Conversational gestures in human-robot interaction." Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.
Fiore, Michelangelo. "Decision Making in Human-Robot Interaction." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.
In recent years there has been increasing interest in robots able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for human-robot interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving a brief overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems.

In the observer problem the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal.

In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand if they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the robot's partner activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables, such as the human's commitment to the task, and to react accordingly, splitting the decision process into different levels. We present an example of a collaborative planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component.

In the robot teacher problem we explain how we can adapt the plan explanation and monitoring of the system to the users' knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going more in depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity.

Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help.
This system was integrated with other components to deploy a robot in Schiphol Airport in Amsterdam, to guide groups of passengers to their flight gates. We performed user studies both in the laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
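The intention inference described in the observer problem can be pictured, in miniature, as a Bayesian filter over candidate goals. The sketch below shows one update step with invented goals and likelihoods; the thesis couples such updates with Markov Decision Processes and geometric reasoning:

```python
def update_intention(prior, action_likelihood, observed_action):
    """One Bayesian update: P(intent | action) ∝ P(action | intent) P(intent).

    A toy version of the observer's inference; goals, actions and numbers
    below are hypothetical.
    """
    posterior = {intent: prior[intent] * action_likelihood[intent][observed_action]
                 for intent in prior}
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

# Hypothetical example: the human reaches toward the mug.
prior = {"take_mug": 0.5, "take_book": 0.5}
likelihood = {"take_mug":  {"reach_mug": 0.8, "reach_book": 0.2},
              "take_book": {"reach_mug": 0.3, "reach_book": 0.7}}
print(update_intention(prior, likelihood, "reach_mug"))  # mug intent now ~0.73
```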
Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.
Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction, an object transfer task. First, this work presents a review of the current research within the field of human-robot interaction, where two approaches are distinguished but simultaneously required: pre-contact approximation and interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) the robot control needed to perform the inherent movements of the transfer task, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements and extensions are required to improve the solution's performance and range.
Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today's robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well suited for lower-level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprising a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots' current beliefs, which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots' beliefs. A navigation task is used to demonstrate the adjustable-autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study comparing the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
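The query-the-operator rule sketched in the abstract is a value-of-information test: ask only when the expected benefit exceeds the cost. A toy Python rendition follows, with entropy reduction standing in for benefit; the thesis's actual benefit and cost models differ:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def should_query_operator(current_belief, expected_posterior, query_cost,
                          value_per_bit=1.0):
    """Ask the human only if the expected information gain is worth the cost.

    Toy rendition of the adjustable-autonomy rule in the abstract; the
    benefit model (bits of entropy reduction) and costs are placeholders.
    """
    expected_gain = entropy(current_belief) - entropy(expected_posterior)
    return value_per_bit * expected_gain > query_cost

# Near-uniform belief about a terrain cell: querying is worthwhile.
print(should_query_operator([0.5, 0.5], [0.9, 0.1], query_cost=0.2))  # True
```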
Ali, Muhammad. "Contributions to decisional human-robot interaction: towards collaborative robot companions." Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0003/document.
Human-robot interaction is entering an interesting phase where the relationship with a robot is envisioned more as one of companionship with the human partner than a mere master-slave relationship. For this to become a reality, the robot needs to understand human behavior and not only react appropriately but also be socially proactive. A companion robot will also need to collaborate with the human in his daily life and will require a reasoning mechanism to manage the collaboration and to handle the uncertainty in the human's intention to engage and collaborate. In this work, we identify key elements of such interaction in the context of a collaborative activity, with special focus on how humans successfully collaborate to achieve a joint action. We show the application of these elements in a robotic system to enrich the social decision-making aspect of its human-robot interaction. In this respect, we provide a contribution to managing the robot's high-level goals and proactive behavior, and a description of a coactivity decision model for collaborative human-robot tasks. Also, an HRI user study demonstrates the importance of timing verbal communication in proactive human-robot joint action.
Mazhar, Osama. "Vision-based human gestures recognition for human-robot interaction." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS044.
In the light of the factories of the future, to ensure productive, safe and effective interaction between robots and human coworkers, it is imperative that the robot extract the essential information about its coworker. To address this, deep learning solutions are explored and a reliable human gesture detection framework is developed in this work. Our framework is able to robustly detect static hand gestures as well as dynamic upper-body gestures.

For static hand gesture detection, openpose is integrated with Kinect V2 to obtain a pseudo-3D human skeleton. With the help of 10 volunteers, we recorded an image dataset, opensign, that contains Kinect V2 RGB and depth images of 10 alpha-numeric static hand gestures taken from the American Sign Language. The "Inception V3" neural network is adapted and trained to detect static hand gestures in real time.

Subsequently, we extend our gesture detection framework to recognize upper-body dynamic gestures. A spatial-attention-based dynamic gesture detection strategy is proposed that employs a multi-modal "Convolutional Neural Network - Long Short-Term Memory" deep network to extract spatio-temporal dependencies in pure RGB video sequences. The exploited convolutional neural network blocks are pre-trained on our static hand gestures dataset opensign, which allows efficient extraction of hand features. Our spatial attention module focuses on large-scale movements of the upper limbs plus on hand images for subtle hand/finger movements, to efficiently distinguish gesture classes. This module additionally exploits the 2D upper-body pose to estimate the user's distance from the sensor for scale normalization, and to determine the parameters of the hand bounding boxes without the need for a depth sensor. The information typically extracted from a depth camera in similar strategies is learned from the opensign dataset. Thus the proposed gesture recognition strategy can be implemented on any system with a monocular camera.

Afterwards, we briefly explore 3D human pose estimation strategies for monocular cameras. To estimate 3D human pose, a hybrid strategy is proposed which combines the merits of discriminative 2D pose estimators with those of model-based generative approaches. Our method optimizes an objective function that minimizes the discrepancy between the position- and scale-normalized 2D pose obtained from openpose and a virtual 2D projection of a kinematic human model.

For real-time human-robot interaction, an asynchronous distributed system is developed to integrate our static hand gesture detection module with the open-source physical human-robot interaction library OpenPHRI. We validate the performance of the proposed framework through a teach-by-demonstration experiment with a robotic manipulator.
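One depth-free trick mentioned above, estimating hand crops and scale from the 2D pose alone, can be illustrated as follows; the proportions are invented placeholders rather than the calibrated values used in the thesis:

```python
import numpy as np

def hand_box_from_pose(wrist, elbow, scale=1.8):
    """Square hand crop from 2D wrist/elbow keypoints, no depth needed.

    The forearm length in pixels serves as a proxy for the user's distance
    to the camera; the scale factor is an illustrative guess, not the
    calibrated value used in the thesis.
    """
    wrist = np.asarray(wrist, float)
    elbow = np.asarray(elbow, float)
    forearm = np.linalg.norm(wrist - elbow)
    half = 0.5 * scale * forearm
    center = wrist + 0.25 * (wrist - elbow)   # push the crop beyond the wrist
    x0, y0 = center - half
    x1, y1 = center + half
    return int(x0), int(y0), int(x1), int(y1)

# Hypothetical openpose keypoints in pixel coordinates.
print(hand_box_from_pose(wrist=(320, 240), elbow=(300, 300)))
```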
Jou, Yung-Tsan. "Human-Robot Interactive Control." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1082060744.
Kuo, I.-Han. "Designing Human-Robot Interaction for service applications." Thesis, University of Auckland, 2012. http://hdl.handle.net/2292/19438.
Ponsler, Brett. "Recognizing Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.
Holroyd, Aaron. "Generating Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.
Marín Urías, Luis Felipe. "Reasoning about space for human-robot interaction." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.
Human-robot interaction is a research area that has grown rapidly in recent years. This brings new challenges to a robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine if another person can see an object or not. The implementation of this kind of social ability will improve the robot's cognitive capabilities and will help the robot interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks:
- Motion planning for human-robot interaction: the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks.
- Face-to-face human-robot interaction: the robot uses the human's perspective taking as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
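A minimal flavor of "visual perspective taking" is a visibility test evaluated from the human's viewpoint. The 2D sketch below checks only field of view and range, whereas the thesis reasons in 3D with occlusions; all parameters are illustrative:

```python
import numpy as np

def human_can_see(human_xy, human_heading, object_xy, fov_deg=120.0, max_range=5.0):
    """Coarse 2D visual-perspective-taking test: field of view and range only.

    Illustrative sketch; the principle of evaluating the scene from the
    human's viewpoint is the same as in the thesis, the geometry is not.
    """
    v = np.asarray(object_xy, float) - np.asarray(human_xy, float)
    dist = np.linalg.norm(v)
    if dist == 0.0 or dist > max_range:
        return False
    angle = np.arctan2(v[1], v[0]) - human_heading
    angle = (angle + np.pi) % (2 * np.pi) - np.pi      # wrap to [-pi, pi]
    return abs(np.degrees(angle)) <= fov_deg / 2.0

# Object ahead of the person is visible; the same object behind them is not.
print(human_can_see((0, 0), 0.0, (2, 0)))        # True
print(human_can_see((0, 0), np.pi, (2, 0)))      # False
```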
Valibeik, Salman. "Human robot interaction in a crowded environment." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5677.
Ahmed, Muhammad Rehan. "Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction." Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-13986.
Toris, Russell C. "Bringing Human-Robot Interaction Studies Online via the Robot Management System." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.
Nitz Pettersson, Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.
Ameri Ekhtiarabadi, Afshin. "Unified Incremental Multimodal Interface for Human-Robot Interaction." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Robot Colleague
Vogt, David. "Learning Continuous Human-Robot Interactions from Human-Human Demonstrations." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-233262.
Peltason, Julia. "Modeling Human-Robot-Interaction based on generic Interaction Patterns." Bielefeld: Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1052650937/34.
Najar, Anis. "Shaping robot behaviour with unlabeled human instructions." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066152.
Most current interactive learning systems rely on predefined protocols that constrain the interaction with the user. Relaxing the constraints of interaction protocols can therefore improve the usability of these systems. This thesis tackles the question of interpreting human instructions in order to relax the constraints of predetermining their meanings. We propose a framework that enables a human teacher to shape a robot's behaviour by interactively providing it with unlabeled instructions. Our approach consists in grounding the meaning of instruction signals in the task learning process, while simultaneously using them for guiding it. This approach has a two-fold advantage. First, it provides more freedom to the teacher in choosing his preferred signals. Second, it reduces the required engineering effort by removing the necessity to encode the meaning of each instruction signal. We implement our framework as a modular architecture, named TICS, that offers the possibility to combine different information sources: a predefined reward function, evaluative feedback and unlabeled instructions. This allows for more flexibility in the teaching process, by enabling the teacher to switch between different learning modes. In particular, we propose several methods for interpreting instructions, and a new method for combining evaluative feedback with a predefined reward function. We evaluate our framework through a series of experiments, performed both in simulation and with real robots. The experimental results demonstrate the effectiveness of our framework in accelerating the task learning process and in reducing the number of required interactions with the teacher.
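One ingredient of such a framework, folding evaluative feedback into a reinforcement-learning update alongside a predefined reward, can be sketched as below. The combination rule and constants are illustrative, and unlabeled instructions (the thesis's focus) are not modeled:

```python
def td_update(Q, s, a, reward, feedback, s_next, alpha=0.1, gamma=0.95, beta=0.5):
    """One Q-learning step mixing a predefined reward with human feedback.

    A minimal sketch of combining the two signals; the TICS architecture's
    actual combination method is richer than this additive shaping term.
    """
    target = reward + beta * feedback + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

# Hypothetical two-state task; the teacher approves (+1) the last action.
Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
td_update(Q, "s0", "right", reward=0.0, feedback=1.0, s_next="s1")
print(Q["s0"]["right"])   # 0.05
```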
Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.
Morvan, Jérémy. "Understanding and communicating intentions in human-robot interaction." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166445.
Busch, Baptiste. "Optimization techniques for an ergonomic human-robot interaction." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.
Human-Robot Interaction (HRI) is a growing field in the robotics community. By its very nature it brings together researchers from various domains, including psychology, sociology and, obviously, robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in shared environments, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction.

In this thesis, we propose methods to include ergonomics and human factors in the motion and decision planning algorithms, to automate the process of generating an ergonomic interaction. The solutions we propose make use of cost functions that encapsulate human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction.

First, we propose a method to increase the legibility of robot motions, to achieve a better understanding of the robot's intentions. Our approach does not require modeling the concept of legible motions, but penalizes trajectories that lead to late predictions or mispredictions of the robot's intentions during live execution of a shared task. In several user studies we achieve substantial gains in terms of prediction time and reduced interpretation errors.

Second, we tackle the problem of choosing actions and planning motions that maximize the physical ergonomics on the human side. Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The user studies we conducted show that our method leads to safer working postures and a better-perceived interaction.
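The optimization idea, scoring candidate robot actions by a human-side ergonomic cost plus a robot-side cost, reduces to a weighted argmin. The sketch below uses invented stand-in cost functions rather than the ergonomic evaluation function used in the thesis:

```python
def pick_action(candidates, ergo_cost, robot_cost, w_human=0.7):
    """Choose the action minimizing a weighted ergonomic + robot cost.

    Sketch of the abstract's optimization idea: ergo_cost would wrap a
    posture score computed on the simulated human; both cost functions
    and the weight are assumptions.
    """
    return min(candidates,
               key=lambda c: w_human * ergo_cost(c) + (1 - w_human) * robot_cost(c))

# Hypothetical handover heights (m): mid-height is easiest on the human.
heights = [0.6, 1.0, 1.4]
best = pick_action(heights,
                   ergo_cost=lambda h: abs(h - 1.0),   # stand-in posture penalty
                   robot_cost=lambda h: abs(h - 1.2))  # stand-in effort penalty
print(best)   # 1.0
```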
Palathingal, Xavier P. "A framework for long-term human-robot interaction." Thesis, 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1446798.
Kapellmann-Zafra, Gabriel. "Human-swarm robot interaction with different awareness constraints." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19396/.
Dondrup, Christian. "Human-robot spatial interaction using probabilistic qualitative representations." Thesis, University of Lincoln, 2016. http://eprints.lincoln.ac.uk/28665/.
Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.
Mangera, Ra'eesah. "Gesture recognition with application to human-robot interaction." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/13732.
Chauhan, Aneesh. "Grounding human vocabulary in robot perception through interaction." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/12841.
This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant to the assigned tasks. The research focuses on the problem of grounding human vocabulary in a robotic agent's sensori-motor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product created and acquired through social interactions. This emphasizes the role of society as a source of linguistic input. Taking these aspects into account, an experimental scenario is set up where a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. The agent grounds the names of these objects in visual perception. Word learning is an open-ended problem. Therefore, the learning architecture of the agent has to be able to acquire words and categories in an open-ended manner. In this work, four learning architectures were designed that can be used by robotic agents for long-term and open-ended word and category acquisition. The learning methods used in these architectures are designed to incrementally scale up to larger sets of words and categories. A novel experimental evaluation methodology, which takes into account the open-ended nature of word learning, is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity, which, in turn, depends on its sensors and perceptual capabilities. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. The results indicate that all approaches were able to incrementally acquire new words and categories. Although some of the approaches could not scale up to larger vocabularies, one approach was shown to learn up to 293 categories, with potential for learning many more.
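The incremental, open-ended protocol can be pictured with a deliberately simple learner: a running nearest-class-mean per taught word. The thesis's four architectures are richer, and the features below are hypothetical:

```python
import numpy as np

class IncrementalNCM:
    """Open-ended nearest-class-mean word learner (illustrative sketch).

    Each taught word keeps a running mean of its feature vectors; prediction
    returns the closest known word. New words can be introduced at any time,
    mirroring the open-ended teaching protocol described above.
    """

    def __init__(self):
        self.means = {}
        self.counts = {}

    def teach(self, word, features):
        f = np.asarray(features, float)
        if word not in self.means:
            self.means[word] = f.copy()
            self.counts[word] = 1
        else:
            self.counts[word] += 1
            self.means[word] += (f - self.means[word]) / self.counts[word]

    def predict(self, features):
        if not self.means:
            return None
        f = np.asarray(features, float)
        return min(self.means, key=lambda w: np.linalg.norm(self.means[w] - f))

learner = IncrementalNCM()
learner.teach("ball", [0.9, 0.1])   # hypothetical visual features
learner.teach("box",  [0.1, 0.8])
print(learner.predict([0.8, 0.2]))  # -> "ball"
```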
Neranon, Paramin. "Human-robot interaction using a behavioural control strategy." Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/2780.
Garcia, C. A. C. "Human-Robot Interaction Strategies for Walker-Assisted Locomotion." Universidade Federal do Espírito Santo, 2015. http://repositorio.ufes.br/handle/10/9725.
Neurological and age-related diseases affect human mobility at different levels, causing partial or total loss of such faculty. There is a significant need to improve safe and efficient ambulation of patients with gait impairments. In this context, walkers present important benefits for human mobility, improving balance and reducing the load on the lower limbs. Most importantly, walkers induce the use of patients' residual mobility capacities in different environments. In the field of robotic technologies for gait assistance, a new category of walkers has emerged, integrating robotic technology, electronics and mechanics. Such devices are known as robotic walkers, intelligent walkers or smart walkers. One of the specific and important aspects common to the fields of assistive technologies and rehabilitation robotics is the intrinsic interaction between the human and the robot. In this thesis, the concept of human-robot interaction (HRI) for human locomotion assistance is explored. This interaction is composed of two interdependent components. On the one hand, the key role of a robot in physical HRI (pHRI) is the generation of supplementary forces to empower human locomotion. This involves a net flux of power between both actors. On the other hand, one of the crucial roles of cognitive HRI (cHRI) is to make the human aware of the possibilities of the robot while allowing him to maintain control of the robot at all times. This doctoral thesis presents a new multimodal human-robot interface for testing and validating control strategies applied to a robotic walker for assisting human mobility and gait rehabilitation. This interface extracts navigation intentions from a novel sensor fusion method that combines: (i) a laser range finder (LRF) sensor to estimate the user's leg kinematics, (ii) wearable inertial measurement unit (IMU) sensors to capture the human and robot orientations, and (iii) force sensors to measure the physical interaction between the human's upper limbs and the robotic walker. Two closed control loops were developed to naturally adapt the walker position and to perform body-weight-support strategies. First, a force interaction controller generates velocity outputs for the walker based on the upper-limb physical interaction. Second, an inverse kinematic controller keeps the walker at a desired position relative to the human, improving the interaction. The proposed control strategies are suitable for natural human-robot interaction, as shown during the experimental validation. Moreover, methods for sensor fusion to estimate the control inputs were presented and validated. In the experimental studies, the parameter estimation was precise and unbiased, and showed repeatability when speed changes and continuous turns were performed.
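The force-interaction controller mentioned above is commonly realized as an admittance law mapping handle forces to walker velocities; a discrete version is sketched below, with placeholder mass and damping values rather than the thesis's gains:

```python
def admittance_step(v_prev, force, dt, mass=10.0, damping=8.0):
    """Discrete admittance law: m * dv/dt + b * v = F, forward-Euler step.

    Maps the guidance force measured at the walker's handles to a velocity
    command. Mass and damping are illustrative placeholders.
    """
    dv = (force - damping * v_prev) / mass
    return v_prev + dv * dt

# The user pushes forward with 20 N; the walker speeds up smoothly.
v = 0.0
for _ in range(50):              # 50 steps of 20 ms = 1 s
    v = admittance_step(v, force=20.0, dt=0.02)
print(round(v, 3))               # approaches F/b = 2.5 m/s
```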
Rahimi Nohooji, Hamed. "Adaptive Neural Control for Safe Human-Robot Interaction." Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/68285.
Pasquali, Dario. "Social Engineering Defense Solutions Through Human-Robot Interaction." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092333.
Roselli, Cecilia. "Vicarious Sense of Agency in Human-Robot Interaction." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070568.
Lanza, Francesco. "Human-Robot Teaming Interaction: a Cognitive Architecture Solution." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/479089.
Beer, Jenay M. "Understanding older adults' perceptions of usefulness of an assistive home robot." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50404.
Devin, Sandra. "Decisional issues during human-robot joint action." PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19921/1/DEVIN_Sandra.pdf.
Burke, Michael Glen. "Fast upper body pose estimation for human-robot interaction." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/256305.
Puehn, Christian G. "Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427889195.
Lasota, Przemyslaw A. (Przemyslaw Andrzej). "Robust human motion prediction for safe and efficient human-robot interaction." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122497.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 175-188).
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is through the use of human motion prediction. By predicting where a person might reach or walk toward in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains.
It is also possible that no single predictor is suitable for predicting motion in a given scenario, and that multiple predictors are needed. Due to these drawbacks, without expert knowledge in the field of human motion prediction, it is difficult to deploy prediction on real robotic systems. Another key limitation of current human motion prediction approaches lies in deficiencies in partial trajectory alignment. Alignment of partially executed motions to a representative trajectory for a motion is a key enabling technology for many goal-based prediction methods. Current approaches to partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories. Specifically, due to reliance on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.
In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction. First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data in order to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. With the use of three distinct human motion datasets, I show that using the MPS leads to lower prediction error in a variety of HRI scenarios, and allows for accurate prediction for a range of time horizons. Second, in order to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA).
This Bayesian estimation framework uses a combination of optimization, supervised learning, and unsupervised learning components that are trained and synthesized based on a given set of example trajectories. Through an evaluation on three human motion datasets, I show that BEST-PTA reduces alignment error when compared to state-of-the-art baselines. Furthermore, I demonstrate that this improved alignment reduces human motion prediction error. Lastly, in order to assess the utility of the developed methods for improving safety and efficiency in HRI, I introduce an integrated framework combining prediction with robot planning in time. I describe an implementation and evaluation of this framework on a real physical system. Through this demonstration, I show that the developed approach leads to automatically derived adaptive robot behavior. I show that the developed framework leads to improvements in quantitative metrics of safety and efficiency with the use of a simulated evaluation.
"Funded by the NASA Space Technology Research Fellowship Program and the National Science Foundation"--Page 6
by Przemyslaw A. Lasota.
Ph.D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.
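The MPS's synthesis step, choosing among complementary predictors from data, can be caricatured as held-out model selection. The sketch below picks the candidate with the lowest one-step prediction error; all candidates and data are invented for illustration, and the real system also selects features and combines predictors:

```python
def synthesize_predictor(candidates, validation_set, error):
    """Pick the lowest-error motion predictor on held-out trajectories.

    A deliberately simple stand-in for the MPS synthesis step: 'candidates'
    maps names to prediction functions, and 'error' scores a predictor
    against a recorded trajectory.
    """
    scores = {name: sum(error(predict, traj) for traj in validation_set)
              for name, predict in candidates.items()}
    return min(scores, key=scores.get)

# Hypothetical 1D example: constant-velocity vs. zero-velocity extrapolation.
candidates = {
    "const_vel": lambda t: t[-1] + (t[-1] - t[-2]),
    "zero_vel":  lambda t: t[-1],
}
trajectories = [[0.0, 1.0, 2.0], [1.0, 2.0, 3.0]]      # next value ~ +1 step
err = lambda predict, t: abs(predict(t[:-1]) - t[-1])  # one-step-ahead error
print(synthesize_predictor(candidates, trajectories, err))  # -> "const_vel"
```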
De Pace, Francesco. "Natural and multimodal interfaces for human-machine and human-robot interaction." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2918004.
Ashcraft, C. Chace. "Moderating Influence as a Design Principle for Human-Swarm Interaction." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7406.