Dissertations on the topic „Robot-Robot interaction“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Robot-Robot interaction".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, when the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Akan, Batu. „Human Robot Interaction Solutions for Intuitive Industrial Robot Programming“. Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.
Robot colleague project
Ali, Muhammad. „Contribution to decisional human-robot interaction: towards collaborative robot companions“. PhD thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00719684.
Ali, Muhammad. „Contributions to decisional human-robot interaction : towards collaborative robot companions“. Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0003/document.
Human-robot interaction is entering an interesting phase, where the relationship with a robot is envisioned more as one of companionship with the human partner than as a mere master-slave relationship. For this to become a reality, the robot needs to understand human behavior and not only react appropriately but also be socially proactive. A companion robot will also need to collaborate with the human in his daily life, and will require a reasoning mechanism to manage the collaboration and to handle the uncertainty in the human's intention to engage and collaborate. In this work, we identify key elements of such interaction in the context of a collaborative activity, with special focus on how humans successfully collaborate to achieve a joint action. We show how these elements can be applied in a robotic system to enrich the social aspect of its decision making. In this respect, we provide a contribution to managing the robot's high-level goals and proactive behavior, and a description of a coactivity decision model for collaborative human-robot tasks. Finally, an HRI user study demonstrates the importance of timing verbal communication in a proactive human-robot joint action.
Alili, Samir. „Interaction décisionnelle Homme-Robot : planification de tâche pour un robot interactif en environnement humain“. PhD thesis, Université Paul Sabatier - Toulouse III, 2011. http://tel.archives-ouvertes.fr/tel-01068811.
Alili, Samir. „Interaction décisionnelle homme-robot : planification de tâche pour un robot interactif en environnement humain“. PhD thesis, Toulouse 3, 2011. http://thesesups.ups-tlse.fr/2663/.
This thesis addresses the problem of shared decision making between human and robot, in the perspective of interactive problem solving involving both. The robot and the human share common goals and must work together to determine how to achieve them (the capacities and competences of each differ). The issues to be addressed concern the division of roles, the sharing of authority during task execution (taking initiative), and exposing knowledge in such a way that both can play an optimal role in solving common problems. We developed a task planner named HATP (Human Aware Task Planner). This planner is based on hierarchical task planning enriched with social rules. It can produce socially acceptable plans, that is, plans that make the actions and intentions of the robot legible. The planner also has the ability to plan for both the robot and humans while ensuring optimality for each. We are also interested in a hybrid approach that combines task planning and geometric planning. This approach gives the robot control not only over the sequence of actions it produces, but also over how each action is achieved, thereby treating the human-robot interaction problem more cleverly and on several levels.
Kruse, Thibault. „Planning for human robot interaction“. Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established field of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and produces correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria: it must be safe, comfortable to humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge increases greatly in dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not merely to consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, it provides local replanning of transport actions based on navigational costs, to produce opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
Bodiroža, Saša. „Gestures in human-robot interaction“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary specifies which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a small number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
Ahmed, Muhammad Rehan. „Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction“. Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-13986.
Der volle Inhalt der QuelleToris, Russell C. „Bringing Human-Robot Interaction Studies Online via the Robot Management System“. Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.
Nitz Pettersson, Hannes, and Samuel Vikström. „VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.
Der volle Inhalt der QuelleClodic, Aurelie. „Supervision pour un robot interactif: action et interaction pour un robot autonome en environnement humain“. Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00196608.
Clodic, Aurélie. „Supervision pour un robot interactif : action et interaction pour un robot autonome en environnement humain“. Toulouse 3, 2007. http://www.theses.fr/2007TOU30248.
Human-robot collaborative task achievement requires specific task supervision and execution. In order to close the loop with their human partners, robots must maintain an interaction stream, in order to communicate their own intentions and beliefs and to monitor the activity of their human partner. In this work we introduce SHARY, a supervisor dedicated to collaborative task achievement in the human-robot interaction context. The system deals on the one hand with task refinement and on the other with the communication needed in the human-robot interaction context. To this end, each task is defined at a communication level and at an execution level. This system has been developed on the robot Rackham for a tour-guide demonstration and was then used on the robot Jido for a fetch-and-carry task, to demonstrate the genericity of the system.
Puehn, Christian G. „Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427889195.
Der volle Inhalt der QuelleTopp, Elin Anna. „Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping“. Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.
Der volle Inhalt der QuelleSunardi, Mathias I. „Expressive Motion Synthesis for Robot Actors in Robot Theatre“. PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/720.
Der volle Inhalt der QuelleHuang, Chien-Ming. „Joint attention in human-robot interaction“. Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.
Der volle Inhalt der QuelleBremner, Paul. „Conversational gestures in human-robot interaction“. Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.
Der volle Inhalt der QuelleFiore, Michelangelo. „Decision Making in Human-Robot Interaction“. Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.
There has been increasing interest, in recent years, in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for human-robot interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving a quick overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems. In the observer problem, the robot monitors the environment, analyzing perceptual data through geometric reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal. In the coworker problem, the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand whether they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the activities of the robot's partner and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables such as the human's commitment to the task and to react accordingly, splitting the decision process into different levels. We present an example of a collaborative planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how it could be used to enhance our plan management component. In the robot teacher problem, we explain how the system's plan explanation and monitoring can be adapted to the user's knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going more in depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity. Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to offer help.
This system was integrated with other components to deploy a robot in Schiphol Airport in Amsterdam, guiding groups of passengers to their flight gates. We performed user studies both in the laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
Alanenpää, Madelene. „Gaze detection in human-robot interaction“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.
Der volle Inhalt der QuelleAlmeida, Luís Miguel Martins. „Human-robot interaction for object transfer“. Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction: an object transfer task. Firstly, this work presents a review of current research within the field of human-robot interaction, where two approaches are distinguished, but simultaneously required: a pre-contact approximation and an interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) the robot control needed to perform the movements inherent to the transfer task, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, some experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements and extensions are required to improve the solution's performance and range.
Khambhaita, Harmish. „Human-aware space sharing and navigation for an interactive robot“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30399.
The methods of robotic motion planning have advanced at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer, and able to react faster to unpredictable situations. As a result we are witnessing more and more service robots introduced into our everyday lives, especially in public places such as museums, shopping malls and airports. While a mobile service robot moves in a human environment, it leaves an innate impression on people about its demeanor. We do not see such robots as mere machines but as social agents, and expect them to behave humanly by following societal norms and rules. This has created new challenges and opened new research avenues for designing robot control algorithms that deliver human-acceptable, legible and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for trajectory planning and navigation with built-in social constraints, keeping robot motions safe, human-aware and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot path as a series of poses and the time differences between those poses) which can be deformed, both in space and time, by the optimization process to respect the given constraints. Moreover, we also predict plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that happens when humans and the robot cross each other's paths. We carried out a set of experiments with canonical human-robot interactive situations that happen in our everyday lives, such as crossing a hallway, passing through a door and intersecting paths in wide open spaces. The proposed cooperative planning method compares favorably against other state-of-the-art human-aware navigation planning schemes. We have augmented the robot's navigation behavior with synchronized and responsive movements of its head, making the robot look where it is going and occasionally divert its gaze towards nearby people to acknowledge that it will avoid any possible collision with them. At any given moment the robot weighs multiple criteria according to the social context and decides where it should turn its gaze. Through an online user study we have shown that such a gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we have integrated our navigation scheme with a broader supervision system which can jointly generate normative robot behaviors, such as approaching a person and adapting the robot's speed to a group of people whom the robot guides in airports or museums.
Collins, E. C. „Towards robot-assisted therapy : identifying mechanisms of effect in human-biomimetic robot interaction“. Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/16680/.
Der volle Inhalt der QuelleKaupp, Tobias. „Probabilistic Human-Robot Information Fusion“. Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.
Kaupp, Tobias. „Probabilistic Human-Robot Information Fusion“. University of Sydney, 2008. http://hdl.handle.net/2123/2554.
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots' current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots' beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
Jou, Yung-Tsan. „Human-Robot Interactive Control“. Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1082060744.
Der volle Inhalt der QuelleBurke, Michael Glen. „Fast upper body pose estimation for human-robot interaction“. Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/256305.
Der volle Inhalt der QuelleFonooni, Benjamin. „Cognitive Interactive Robot Learning“. Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-97422.
Building autonomous robots that suit a large number of different user-defined applications requires a leap from today's specialized machines to more flexible solutions. To reach this goal, one should move from traditional preprogrammed robots to robots that can learn new skills themselves. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or another robot, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than modifying a complicated control program. However, teaching robots new skills such that they can reproduce them under new external conditions, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of LfD and IL in humans and animals show that several cognitive abilities are involved in learning new skills correctly. The most notable are the ability to direct attention to the relevant aspects of a demonstration, and the ability to map observed movements onto the robot's own body. Furthermore, it is important to have a clear understanding of the teacher's intentions, and to be able to generalize them to new situations. Once a learning phase is complete, stimuli can trigger the cognitive system to perform the new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for LfD that mainly focus on understanding the teacher's intentions and on determining which parts of a demonstration should receive the robot's attention. The proposed architecture contains the cognitive functions needed for learning and reproducing high-level aspects of demonstrations. Several learning methods for directing the robot's attention and identifying relevant information are proposed. The architecture integrates motor commands with concepts, objects and the state of the environment to ensure correct reproduction of behaviors. Another main result of this thesis concerns methods for resolving ambiguities in demonstrations, where the teacher's intentions are not clearly expressed and several demonstrations are necessary to predict intentions correctly. The developed solutions are inspired by models of human memory, and a priming mechanism is used to give the robot cues that can increase the probability of predicting intentions correctly. In addition to robot learning, the developed techniques have been used in a shared-control system based on visually guided behaviors and priming mechanisms. The architecture and the learning techniques are applied and evaluated in several real-world scenarios that require a clear understanding of the human intentions in the demonstrations. Finally, the developed learning methods are compared, and their applicability under different conditions is discussed.
INTRO
McBean, John M. (John Michael) 1979. „Design and control of a voice coil actuated robot arm for human-robot interaction“. Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17951.
Includes bibliographical references (leaf 68).
The growing field of human-robot interaction (HRI) demands robots that move fluidly, gracefully, compliantly and safely. This thesis describes recent work in the design and evaluation of long-travel voice coil actuators (VCAs) for use in robots intended for interacting with people. The basic advantages and shortcomings of electromagnetic actuators are discussed and evaluated in the context of human-robot interaction, and are compared to alternative actuation technologies. Voice coil actuators have been chosen for their controllability, ease of implementation, geometry, compliance, biomimetic actuation characteristics, safety, quietness, and high power density. Several VCAs were designed, constructed, and tested, and a 4 Degree of Freedom (DOF) robotic arm was built as a test platform for the actuators themselves, and the control systems used to drive them. Several control systems were developed and implemented that, when used with the actuators, enable smooth, fast, life-like motion.
by John M. McBean.
S.M.
Kuo, I.-Han. „Designing Human-Robot Interaction for service applications“. Thesis, University of Auckland, 2012. http://hdl.handle.net/2292/19438.
Der volle Inhalt der QuellePonsler, Brett. „Recognizing Engagement Behaviors in Human-Robot Interaction“. Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.
Der volle Inhalt der QuelleHolroyd, Aaron. „Generating Engagement Behaviors in Human-Robot Interaction“. Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.
Der volle Inhalt der QuelleMarín, Urías Luis Felipe. „Reasoning about space for human-robot interaction“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.
Human-robot interaction is a research area that has grown exponentially in recent years. This fact brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason on its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine if another person can see an object or not. The implementation of this kind of social ability will improve the robot's cognitive capabilities and will help the robot perform better interactions with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: motion planning for human-robot interaction, where the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks; and face-to-face human-robot interaction, where the robot uses the perspective taking of the human as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
Valibeik, Salman. „Human robot interaction in a crowded environment“. Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5677.
Der volle Inhalt der QuelleBussy, Antoine. „Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde“. Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20090/document.
Robots are very close to arriving in our homes. But before they do, they must master physical interaction with humans, in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global model of motion primitives that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies. Finally, we outline several potential extensions to our work: self-stabilization of a humanoid through physical interaction, generalization of the motion primitive model to other collaborative tasks, and the addition of vision to haptic joint actions.
CAPELLI, BEATRICE. „Controllo di Sistemi Multi-Robot e Interazione Uomo-Multi-Robot“. Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1270798.
Multi-robot systems are rapidly improving, and thanks to technological innovations they will become part of our everyday life. This near future will need new control strategies for these systems, to enable them to efficiently complete their tasks and to interact with human operators. In this thesis we propose new control strategies to take multi-robot systems one step closer to real-world applications and common usage. First of all, we introduce a novel strategy for connectivity maintenance that can efficiently mediate between the task of the system and its need for communication among agents. In fact, most of the time a multi-robot system needs communication among its agents in order to work properly, and a proper controller should consider this aspect. The proposed methodology is flexible, both in terms of the tasks that can be achieved and of the communication network, which can change during the task while connectivity is always ensured. One of the most useful features of multi-robot systems is their intrinsically distributed nature, which gives them the ability to simultaneously monitor different areas or measure different quantities of interest. This ability is usually exploited in coverage tasks, where the robots are deployed over a large area that could not be monitored, or measured, efficiently by a single robot or by human operators. Coverage is a well-known and well-studied problem for multi-robot systems, but it is usually studied under mild assumptions. Hence, we propose a distributed coverage control law that considers the limited sensing range of robots and does not rely on a communication network. Another challenging aspect of deploying multi-robot systems in the real world is to provide them with stable control laws. In this respect, we introduce a novel methodology that enables a multi-robot system to achieve its task in a stable manner, minimally changing the primary task. This method allows us to implement, in a safe manner, control laws that may otherwise lead to instability, and eventually to deal with delays and interaction with unknown environments. Finally, the topic of human-multi-robot interaction is faced: in order to deploy a multitude of robots alongside operators, we need to investigate how robots can exchange information with humans. We focus on the ability of a multi-robot system to communicate its intention to a human operator. This ability is defined as legibility. We study whether these systems can implicitly communicate their objective, both in terms of spatial goal and of coordination objective. Furthermore, we characterize how different characteristics of the control laws affect legibility.
Paléologue, Victor. „Teaching Robots Behaviors Using Spoken Language in Rich and Open Scenarios“. Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS458.
Social robots like Pepper are already found "in the wild", and their behaviors must be adapted for each use case by experts. Enabling the general public to teach new behaviors to robots may lead to better adaptation at lesser cost. In this thesis, we study a cognitive system and a set of robotic behaviors allowing home users of Pepper robots to teach new behaviors as compositions of existing behaviors, using solely the spoken language. Homes are open worlds and are unpredictable. In open scenarios, a home social robot should learn about its environment. The purpose of such a robot is not restricted to learning new behaviors or learning about the environment: it should provide entertainment or utility, and therefore support rich scenarios. We demonstrate the teaching of behaviors in these unique conditions: the teaching is achieved through spoken language on Pepper robots deployed in homes, with no extra device and using their standard system, in a rich and open scenario. Using automatic speech transcription and natural language processing, our system recognizes unpredicted teachings of new behaviors, and explicit requests to perform them. The new behaviors may invoke existing behaviors parametrized with objects learned in other contexts, and may themselves be defined as parametric. Through experiments of growing complexity, we show conflicts between behaviors in rich scenarios, and propose a solution based on symbolic task planning and prioritization rules to resolve them. The results rely on qualitative and quantitative analysis and highlight the limitations of our solution, but also the new applications it enables.
Walters, Michael L. „The design space for robot appearance and behaviour for social robot companions“. Thesis, University of Hertfordshire, 2008. http://hdl.handle.net/2299/1806.
Der volle Inhalt der QuelleBeer, Jenay M. „Understanding older adults' perceptions of usefulness of an assistive home robot“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50404.
Der volle Inhalt der QuelleMontreuil, Vincent. „Interaction décisionnelle homme-robot : la planification de tâches au service de la sociabilité du robot“. Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00401050.
Der volle Inhalt der QuelleJin, Emelie, und Ella Johnston. „Question generation for language café interaction between robot and human : NLP applied in a robot“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259555.
Der volle Inhalt der QuelleKonversationsrobotar kan användas i många olika miljöer, där en av dem är språkcaféer. Denna miljö ställer krav på roboten att den kan föra ett samtal kring många olika ämnen och även byta ämne smidigt. För att kunna göra detta behöver man generera många frågor som handlar om många olika ämnen och även hitta ett sätt att byta från ett ämnne till ett annat. Detta arbete ämnar göra detta genom att använda ett mall-ramverk för att generera frågor och klustring för att navigera mellan dem, en skalbar lösning som är anpassad för språkcafémiljöer. Det generella värdet av språkcaféer och deras roll i en språkinlärningsprocess diskuteras också.
Devin, Sandra. „Decisional issues during human-robot joint action“. PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19921/1/DEVIN_Sandra.pdf.
Der volle Inhalt der QuelleAmeri, Ekhtiarabadi Afshin. „Unified Incremental Multimodal Interface for Human-Robot Interaction“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Der volle Inhalt der QuelleRobot Colleague
Benkaouar Johal, Wafa. „Companion Robots Behaving with Style : Towards Plasticity in Social Human-Robot Interaction“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM082/document.
Companion robots are becoming technologically and functionally more and more capable, and their usefulness is nowadays a reality. These robots are, however, not yet accepted in home environments, as the worth of owning such a robot and of its companionship has not been established. Classically, social robots displayed generic social behaviours and did not take inter-individual differences into account. More and more work in human-robot interaction goes towards personalisation of the companion. Personalisation and control of the companion could lead to a better understanding of the robot's behaviour. Proposing several ways of expression for companion robots playing a social role would allow users to customize their companion to their social preferences. In this work, we propose a plasticity framework for human-robot interaction. We used a scenario-based design method to elicit social roles for companion robots. Then, based on the literature in several disciplines, we propose to depict variations in the behaviour of the companion robot with behavioural styles. Behavioural styles are defined according to the social role, using non-verbal expressive parameters. The expressive parameters (static, dynamic and decorators) allow neutral motions to be transformed into styled motions. We conducted a perceptual study through a video-based survey showing two robots displaying styles, allowing us to evaluate the expressibility of two parenting behavioural styles by two kinds of robots. We found that participants were indeed able to discriminate between the styles in terms of dominance and authoritativeness, which is in line with the psychological theory on these styles. Most importantly, we found that the styles parents preferred for their children were not correlated to their own parental practice. Consequently, behavioural styles are relevant cues for social personalisation of the companion robot by parents. A second experimental study in a natural environment, involving child-robot interaction with 16 children, showed that parents and children expected a versatile robot able to play several social roles. This study also showed that behavioural styles had an influence on the children's bodily attitudes during the interaction. Common dimensions studied in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect2 sensor. In this thesis, we also propose a modularisation of a previously proposed affective and cognitive architecture, resulting in the new Cognitive, Affective Interaction Oriented (CAIO) architecture. This architecture has been implemented in the ROS framework, allowing it to be used on social robots. We also propose instantiations of the Stimulus Evaluation Checks of [Scherer, 2009] for two robotic platforms, allowing dynamic expression of emotions. Both the behavioural style framework and the CAIO architecture can be useful in socialising companion robots and improving their acceptability.
Najar, Anis. „Shaping robot behaviour with unlabeled human instructions“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066152.
Most current interactive learning systems rely on predefined protocols that constrain the interaction with the user. Relaxing the constraints of interaction protocols can therefore improve the usability of these systems. This thesis tackles the question of interpreting human instructions, in order to relax the constraints on predetermining their meanings. We propose a framework that enables a human teacher to shape a robot's behaviour by interactively providing it with unlabeled instructions. Our approach consists in grounding the meaning of instruction signals in the task learning process, while simultaneously using them to guide the latter. This approach has a two-fold advantage. First, it gives the teacher more freedom in choosing his preferred signals. Second, it reduces the required engineering effort, by removing the need to encode the meaning of each instruction signal. We implement our framework as a modular architecture, named TICS, that offers the possibility of combining different information sources: a predefined reward function, evaluative feedback and unlabeled instructions. This allows for more flexibility in the teaching process, by enabling the teacher to switch between different learning modes. In particular, we propose several methods for interpreting instructions, and a new method for combining evaluative feedback with a predefined reward function. We evaluate our framework through a series of experiments, performed both in simulation and with real robots. The experimental results demonstrate the effectiveness of our framework in accelerating the task learning process and in reducing the number of interactions required with the teacher.
Lirussi, Igor. „Human-Robot interaction with low computational-power humanoids“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.
Miners, William Ben. „Toward Understanding Human Expression in Human-Robot Interaction“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be overcome before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
Morvan, Jérémy. „Understanding and communicating intentions in human-robot interaction“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166445.
Der volle Inhalt der QuelleBusch, Baptiste. „Optimization techniques for an ergonomic human-robot interaction“. Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.
Human-Robot Interaction (HRI) is a growing field in the robotics community. By its very nature it brings together researchers from various domains, including psychology, sociology and obviously robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in a shared environment, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction. In this thesis, we propose methods to include ergonomics and human factors in the motion and decision planning algorithms, to automate the process of generating an ergonomic interaction. The solutions we propose make use of cost functions that encapsulate the human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction. First, we propose a method to increase the legibility of robot motions, to achieve a better understanding of the robot's intentions. Our approach does not require modeling the concept of legible motions, but penalizes the trajectories that lead to late predictions or mispredictions of the robot's intentions during a live execution of a shared task. In several user studies we achieve substantial gains in terms of prediction time and reduced interpretation errors. Second, we tackle the problem of choosing actions and planning motions that maximize the physical ergonomics on the human side. Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The conducted user studies show that our method leads to safer working postures and a better perceived interaction.
Palathingal, Xavier P. „A framework for long-term human-robot interaction /“. abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1446798.
"May, 2007." Includes bibliographical references (leaves 44-46). Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich.: ProQuest Information and Learning Company, [2007]. 1 microfilm reel; 35 mm.
Kapellmann-Zafra, Gabriel. „Human-swarm robot interaction with different awareness constraints“. Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19396/.