Dissertations / Theses on the topic 'Human robotics interaction spatial'

Consult the top 50 dissertations / theses for your research on the topic 'Human robotics interaction spatial.'


1

Dondrup, Christian. "Human-robot spatial interaction using probabilistic qualitative representations." Thesis, University of Lincoln, 2016. http://eprints.lincoln.ac.uk/28665/.

Abstract:
Current human-aware navigation approaches use a predominantly metric representation of the interaction, which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing human-populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions offline. This work extends this representation to be able to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs the system becomes invariant to change in the environment, which is essential for any form of long-term deployment of a robot, but most importantly also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot and vice versa is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification and evaluation of their appropriateness for the task of human-aware navigation. In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described, which serves as an enabling technology to create incrementally updated QTC state chains in real time using the robot's sensors. Using this framework, the abstraction and generalisability of the QTC-based framework is tested by using data from a different study for the classification of automatically generated state chains, which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from it. To overcome this issue, so-called Velocity Costmaps are introduced, which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in a flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action selection process that is based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the used model. Thereby, the generated behaviour is not only sociable but also legible and ensures a high experienced comfort, as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation, but it is facilitated by the qualitative model and allows exploitation of expert knowledge for model generation. Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction.
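
To make the qualitative encoding concrete, the following is a minimal sketch, not code from the thesis, of how a simplified QTC-style state could be derived from consecutive positions of a human and a robot; the function name and the two-symbol state are illustrative assumptions (the QTC variants developed in the thesis are richer):

```python
import numpy as np

def qtc_symbol(pos_prev, pos_curr, other_pos):
    """Return '-', '0' or '+' depending on whether an agent moves
    towards, keeps its distance from, or moves away from the other
    agent (a simplified QTC-style relation)."""
    d_prev = np.linalg.norm(pos_prev - other_pos)
    d_curr = np.linalg.norm(pos_curr - other_pos)
    if abs(d_curr - d_prev) < 1e-3:   # below resolution: no change
        return '0'
    return '-' if d_curr < d_prev else '+'

# A human approaches a static robot: qualitative state ('-', '0').
human_prev, human_curr = np.array([4.0, 0.0]), np.array([3.5, 0.0])
robot_prev, robot_curr = np.array([0.0, 0.0]), np.array([0.0, 0.0])
state = (qtc_symbol(human_prev, human_curr, robot_prev),
         qtc_symbol(robot_prev, robot_curr, human_prev))
print(state)  # ('-', '0')
```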
2

ERMACORA, GABRIELE. "Advances in Human Robot Interaction for Cloud Robotics applications." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2643059.

Abstract:
This thesis analyses different and innovative techniques for Human-Robot Interaction, with a focus on the interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyses the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User's Flying Organizer project (UFO project), which aims to develop a flying robot able to project information into the environment exploiting concepts of Spatial Augmented Reality.
3

Holthaus, Patrick [Verfasser]. "Approaching human-like spatial awareness in social robotics: an investigation of spatial interaction strategies with a receptionist robot / Patrick Holthaus." Bielefeld : Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1070981389/34.

4

Blisard, Samuel N. "Modeling spatial references for unoccupied spaces for human-robot interaction /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1426048.

5

Dobnik, Simon. "Teaching mobile robots to use spatial words." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:d3e8d606-212b-4a8e-ba9b-9c59cfd3f485.

Abstract:
The meaning of spatial words can only be evaluated by establishing a reference to the properties of the environment in which the word is used. For example, in order to evaluate what is to the left of something or how fast is fast in a given context, we need to evaluate properties such as the position of objects in the scene, their typical function and behaviour, the size of the scene and the perspective from which the scene is viewed. Rather than encoding the semantic rules that define spatial expressions by hand, we developed a system where such rules are learned from descriptions produced by human commentators and information that a mobile robot has about itself and its environment. We concentrate on two scenarios and words that are used in them. In the first scenario, the robot is moving in an enclosed space and the descriptions refer to its motion ('You're going forward slowly' and 'Now you're turning right'). In the second scenario, the robot is static in an enclosed space which contains real-size objects such as desks, chairs and walls. Here we are primarily interested in prepositional phrases that describe relationships between objects ('The chair is to the left of you' and 'The table is further away than the chair'). The perspective can be varied by changing the location of the robot. Following the learning stage, which is performed offline, the system is able to use this domain specific knowledge to generate new descriptions in new environments or to 'understand' these expressions by providing feedback to the user, either linguistically or by performing motion actions. If a robot can be taught to 'understand' and use such expressions in a manner that would seem natural to a human observer, then we can be reasonably sure that we have captured at least something important about their semantics. Two kinds of evaluation were performed. First, the performance of machine learning classifiers was evaluated on independent test sets using 10-fold cross-validation. A comparison of classifier performance (in regard to their accuracy, the Kappa coefficient (κ), ROC and Precision-Recall graphs) is made between (a) the machine learning algorithms used to build them, (b) conditions under which the learning datasets were created and (c) the method by which data was structured into examples or instances for learning. Second, with some additional knowledge required to build a simple dialogue interface, the classifiers were tested live against human evaluators in a new environment. The results show that the system is able to learn semantics of spatial expressions from low level robotic data. For example, a group of human evaluators judged that the live system generated a correct description of motion in 93.47% of cases (the figure is averaged over four categories) and that it generated the correct description of object relation in 59.28% of cases.
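
As a hedged illustration of the learning set-up described above (the features, data and classifier choice are invented for the example; the thesis learns from real robot data and human descriptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each example: angle (rad) and distance (m) of a landmark relative to
# the robot's heading; label 1 = described as 'to the left of you'.
X = np.array([[ 1.4, 1.0], [ 1.1, 2.0], [ 0.9, 1.5],   # left of the robot
              [-1.3, 1.0], [-0.8, 2.5], [ 0.1, 1.2]])  # right of / ahead
y = np.array([1, 1, 1, 0, 0, 0])

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[1.2, 1.8]]))  # -> [1]: 'the chair is to the left of you'
```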
6

Chadalavada, Ravi Teja. "Human Robot Interaction for Autonomous Systems in Industrial Environments." Thesis, Chalmers University of Technology, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-55277.

Abstract:
The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGV), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information regarding a service robot's intention to humans co-populating the environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication, by means of projecting proxemic information onto shared floor space, is developed on a robotic forklift by equipping it with an LED projector. This helps in visualizing internal state information and intents on the shared floor space. The robot's ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. A Likert scale-based evaluation, which also includes comparisons to human-human intention communication, was performed. The results show that adding even simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in industry.
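
A minimal sketch of the geometry plausibly underlying such floor projections (an illustrative assumption, not the thesis implementation): offsetting a planned path by half the vehicle width yields the strip of floor the robot is about to occupy.

```python
import numpy as np

def occupied_strip(waypoints, width):
    """Return left/right boundary points of the floor strip a robot of
    the given width sweeps while following 2-D waypoints."""
    left, right = [], []
    for i in range(len(waypoints) - 1):
        p, q = np.asarray(waypoints[i], float), np.asarray(waypoints[i + 1], float)
        d = (q - p) / np.linalg.norm(q - p)   # unit direction of travel
        n = np.array([-d[1], d[0]])           # unit normal (to the left)
        left.append(p + n * width / 2)
        right.append(p - n * width / 2)
    return np.array(left), np.array(right)

left, right = occupied_strip([(0, 0), (1, 0), (2, 1)], width=0.8)
print(left, right)   # boundary points to project onto the floor
```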
7

Marin-Urias, Luis Felipe. "Planification et contrôle de mouvements en interaction avec l'homme. Reasoning about space for human-robot interaction." Phd thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00468918.

Abstract:
Human-robot interaction is a research field that has grown exponentially over recent years, which raises new challenges for the robot's geometric reasoning and for the sharing of space. To accomplish a task, the robot must not only reason about its own capacities but also take human perception into account; that is, the robot must place itself at the human's point of view. In humans, the capacity for visual perspective taking begins to appear around the 24th month. This capacity is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with humans. In this work, we present a geometric spatial reasoning mechanism that uses the psychological concepts of 'perspective taking' and 'mental rotation' in two general settings: motion planning for human-robot interaction, where the robot uses 'egocentric perspective taking' to evaluate several configurations in which it can perform different interaction tasks; and face-to-face human-robot interaction, where the robot employs the human's point of view as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
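
As a rough illustration of geometric perspective taking (a simplified 2-D sketch with invented names; the thesis performs full 3-D reasoning, including occlusion):

```python
import numpy as np

def human_can_see(human_pos, human_heading, obj_pos, fov_deg=120.0):
    """Crude visual perspective taking: is the object inside the human's
    horizontal field of view? (Occlusion is ignored in this sketch.)"""
    to_obj = np.asarray(obj_pos, float) - np.asarray(human_pos, float)
    to_obj = to_obj / np.linalg.norm(to_obj)
    heading = np.array([np.cos(human_heading), np.sin(human_heading)])
    angle = np.degrees(np.arccos(np.clip(heading @ to_obj, -1.0, 1.0)))
    return angle <= fov_deg / 2

print(human_can_see((0, 0), 0.0, (2, 1)))   # True: about 27 degrees off-axis
print(human_can_see((0, 0), 0.0, (-2, 0)))  # False: behind the human
```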
8

Sloan, Jared. "THE EFFECTS OF VIDEO FRAME DELAY AND SPATIAL ABILITY ON THE OPERATION OF MULTIPLE SEMIAUTONOMOUS AND TELE-OPERATED ROBOTS." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3734.

Abstract:
The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental in this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize exposure of front-line soldiers, the future Army will utilize unmanned assets both for information gathering and, when necessary, for engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while only allocating resources to the semi-autonomous robot during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results cannot confirm that frame delay has a positive effect on operator performance (though statistical power may have been an issue), but they clearly show that spatial ability is a strong predictor of performance in robotic asset control, particularly with aerial vehicles. In operating the UAV, the high spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low spatial group. The implications of this study indicate that system design should judiciously manage workload and capitalize on individual ability to improve performance, and they are relevant to system designers, especially in the military community.
M.S.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering and Management Systems
9

Bitonneau, David. "Conception de systèmes cobotiques industriels : approche robotique avec prise en compte des facteurs humains : application à l'industrie manufacturière au sein de Safran et ArianeGroup." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0069/document.

Abstract:
Cobotics is an emerging field that offers new perspectives for improving company performance and workers' health, by combining operators' expertise and cognitive abilities with the strengths of robots. In this thesis, cobotics is positioned as the field of human-robot collaboration, and cobotic systems are defined as systems in which human and robot interact to accomplish a common task. This robotic engineering thesis was carried out in tandem with Théo Moulières-Seban, a doctoral student in cognitive engineering. Both CIFRE theses were conducted with Safran and ArianeGroup, which have recognised cobotics as strategic for the development of their competitiveness. To study and design cobotic systems, we jointly proposed an interdisciplinary methodological approach, applied in industry and validated by our academic supervisors. This approach gives a central place to the integration of future users in the design process, through the analysis of their work activity and through participatory simulations. We deployed this approach to answer several concrete industrial needs at ArianeGroup. In this thesis, we detail the design of a cobotic system to improve operators' health and safety at the solid-propellant tank cleaning workstation, where the operations are physically demanding and carry a pyrotechnic risk. Together with the ArianeGroup project team, we proposed a teleoperation cobotic system that preserves the operators' expertise while keeping them safe during pyrotechnic operations. This solution is currently being industrialised for the production of the propellant of the Ariane launch vehicles. Applying our cobotic system engineering approach to a variety of workstations and industrial needs allowed us to enrich it with operational tools to guide design. We expect cobotics to be one of the keys to putting humans back at the heart of production resources within the Factory of the Future; reciprocally, the integration of operators in design projects will be decisive in ensuring the performance and acceptance of future cobotic systems.
10

Sanan, Siddharth. "Soft Inflatable Robots for Safe Physical Human Interaction." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/303.

Abstract:
Robots that can operate in human environments in a safe and robust manner would be of great benefit to society, due to their immense potential for providing assistance to humans. However, robots have seen limited application outside of the industrial setting, in environments such as homes and hospitals. We believe a very important factor preventing the crossover of robotic technology from the factory to the house is the issue of safety. The safety issue is usually bypassed in the industrial setting by separation of human and robot workspaces. Such a solution is clearly infeasible for robots that provide assistance to humans. This thesis aims to develop intrinsically safe robots that are suitable for providing assistance to humans. We believe intrinsic safety is important in physical human-robot interaction because unintended interactions will occur between humans and robots due to: (a) sharing of workspace, (b) hardware failure (computer crashes, actuator failures), (c) limitations on perception, and (d) limitations on cognition. When such unintended interactions are very fast (collisions), they are beyond the bandwidth limits of practical controllers, and only the intrinsic safety characteristics of the system govern the interaction forces that occur. The effects of such interactions with traditional robots could range from persistent discomfort to bone fracture to even serious injuries. Therefore, robots that serve in the application domain of human assistance should be able to function with a high tolerance for unintended interactions. This calls for a new design paradigm where operational safety is the primary concern and task accuracy/precision, though important, are secondary. In this thesis, we address this new design paradigm by developing robots that have a soft inflatable structure, i.e., inflatable robots. Inflatable robots can improve intrinsic safety characteristics by being extremely lightweight and by including surface compliance (due to the compressibility of air) as well as distributed structural compliance (due to the lower Young's modulus of the materials used) in the structure. This results in a lower effective inertia during collisions, which implies a lower impact force between the inflatable robot and the human. Inflatable robots can essentially be manufactured like clothes and can therefore also potentially lower the cost of robots to an extent where personal robots can be an affordable reality. In this thesis, we present a number of inflatable robot prototypes to address challenges in the area of design and control of such systems. Specific areas addressed are: structural and joint design, payload capacity, pneumatic actuation, state estimation and control. The CMU inflatable arm is used in tasks like wiping and feeding a human to successfully demonstrate the use of inflatable robots for tasks involving close physical human interaction.
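
The safety argument can be illustrated with a standard mass-spring contact model (the numbers below are invented): for a collision at speed v against a linear contact stiffness k, the peak force is roughly v*sqrt(k*m_eff), so lowering both the arm's effective inertia m_eff and its surface stiffness k lowers the impact force.

```python
import math

def peak_contact_force(v, k, m_eff):
    """Peak force when an effective mass m_eff (kg) moving at v (m/s)
    is stopped by a linear contact stiffness k (N/m)."""
    return v * math.sqrt(k * m_eff)

# Illustrative comparison: rigid metal link vs. soft inflatable link.
print(peak_contact_force(v=1.0, k=5e4, m_eff=4.0))   # ~447 N
print(peak_contact_force(v=1.0, k=5e3, m_eff=0.5))   # ~50 N
```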
11

Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Abstract:
Joint attention, a crucial component in interaction and an important milestone in human development, has drawn a lot of attention from the robotics community recently. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responders have changed their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior, that is, looking back and forth between the caregiver and the referred object to see if the caregiver is paying attention to the referential object. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, results from questionnaires, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
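
A minimal sketch of the three-part model as a behaviour selector (the state names and decision rule are illustrative assumptions, not the thesis implementation):

```python
from enum import Enum, auto

class JA(Enum):
    RESPOND = auto()   # follow the partner's gaze or gesture
    INITIATE = auto()  # direct the partner's attention to a target
    ENSURE = auto()    # check back that attention actually shifted

def joint_attention_step(robot_has_target, just_initiated, partner_on_target):
    """Pick the joint-attention behaviour for one interaction step."""
    if not robot_has_target:
        return JA.RESPOND                  # share the partner's focus
    if just_initiated and not partner_on_target:
        return JA.ENSURE                   # look back and forth to check
    return JA.INITIATE                     # point/gaze at the target

print(joint_attention_step(robot_has_target=True,
                           just_initiated=False,
                           partner_on_target=False))  # JA.INITIATE
```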
12

Wåhlin, Peter. "Enhanching the Human-Team Awareness of a Robot." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16371.

Abstract:
The use of autonomous robots in our society is increasing every day, and a robot is no longer seen as a tool but as a team member. Robots are now working side by side with us and provide assistance during dangerous operations where humans otherwise would be at risk. This development has in turn increased the need for robots with more human-awareness. Therefore, this master thesis aims at contributing to the enhancement of human-aware robotics. Specifically, we are investigating the possibilities of equipping autonomous robots with the capability of assessing and detecting activities in human teams. This capability could, for instance, be used in the robot's reasoning and planning components to create better plans that ultimately would result in improved human-robot teamwork performance. We propose to improve existing teamwork activity recognizers by adding intangible features, such as stress, motivation and focus, originating from human behavior models. Hidden Markov models have earlier been proven very efficient for activity recognition and have therefore been utilized in this work as a method for classification of behaviors. In order for a robot to provide effective assistance to a human team, it must not only consider spatio-temporal parameters of the team members but also psychological ones. To assess psychological parameters, this master thesis suggests using the body signals of team members, such as heart rate and skin conductance. Combined with the body signals, we investigate the possibility of using System Dynamics models to interpret the current psychological states of the human team members, thus enhancing the human-awareness of a robot.
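
For readers unfamiliar with the classification machinery mentioned in the abstract, here is a self-contained sketch of HMM-based scoring with the forward algorithm (the models and observations are toy values; the thesis combines such models with physiological features):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output
    HMM with initial probs pi, transitions A and emissions B[state, sym]."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

# Two toy behaviour models over a binary 'high arousal' symbol.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B_calm = np.array([[0.9, 0.1],
                   [0.8, 0.2]])       # rarely emits symbol 1
B_stressed = np.array([[0.3, 0.7],
                       [0.2, 0.8]])   # often emits symbol 1
obs = [1, 1, 0, 1, 1]
scores = {'calm': log_likelihood(obs, pi, A, B_calm),
          'stressed': log_likelihood(obs, pi, A, B_stressed)}
print(max(scores, key=scores.get))    # -> 'stressed'
```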

The thesis work was conducted in Kista, Stockholm, at the Department of Informatics and Aero Systems at the Swedish Defence Research Agency.

13

Martínez, Martínez David. "Learning relational models with human interaction for planning in robotics." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/458884.

Abstract:
Automated planning has proven to be useful to solve problems where an agent has to maximize a reward function by executing actions. As planners have been improved to solve more expressive and difficult problems, there is an increasing interest in using planning to improve efficiency in robotic tasks. However, planners rely on a domain model, which has to be either handcrafted or learned. Although learning domain models can be very costly, recent approaches provide generalization capabilities and integrate human feedback to reduce the amount of experiences required to learn. In this thesis we propose new methods that allow an agent with no previous knowledge to solve certain problems more efficiently by using task planning. First, we show how to apply probabilistic planning to improve robot performance in manipulation tasks (such as cleaning the dirt or clearing the tableware on a table). Planners obtain sequences of actions that get the best result in the long term, beating reactive strategies. Second, we introduce new reinforcement learning algorithms where the agent can actively request demonstrations from a teacher to learn new actions and speed up the learning process. In particular, we propose an algorithm that allows the user to set the minimum quality to be achieved, where a better quality also implies that a larger number of demonstrations will be requested. Moreover, the learned model is analyzed to extract the unlearned or problematic parts of the model. This information allows the agent to provide guidance to the teacher when a demonstration is requested, and to avoid irrecoverable errors. Finally, a new domain model learner is introduced that, in addition to relational probabilistic action models, can also learn exogenous effects. This learner can be integrated with existing planners and reinforcement learning algorithms to solve a wide range of problems. In summary, we improve the use of learning and task planning to solve unknown tasks. The improvements allow an agent to obtain a larger benefit from planners, learn faster, balance the number of action executions and teacher demonstrations, avoid irrecoverable errors, interact with a teacher to solve difficult problems, and adapt to the behavior of other agents by learning their dynamics. All the proposed methods were compared with state-of-the-art approaches, and were also demonstrated in different scenarios, including challenging robotic tasks.
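
The teacher-in-the-loop idea can be sketched as a simple decision rule (the model and teacher classes and the threshold are stubs invented for illustration, not the thesis code):

```python
class Teacher:
    """Stand-in for the human teacher (illustrative stub)."""
    def demonstrate(self, state):
        return 'place-on-table'           # the demonstrated action

class Model:
    """Stand-in for the learned relational model (illustrative stub)."""
    def best_action(self, state):
        return 'push', 0.4                # best action and its confidence
    def update(self, state, action):
        pass                              # incorporate the demonstration

def choose_action(state, model, teacher, confidence_threshold=0.6):
    """Act autonomously when the model is confident enough; otherwise
    request a demonstration from the teacher and learn from it."""
    action, confidence = model.best_action(state)
    if confidence >= confidence_threshold:
        return action
    demo = teacher.demonstrate(state)     # active demonstration request
    model.update(state, demo)
    return demo

print(choose_action('cup-on-floor', Model(), Teacher()))  # place-on-table
```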
14

Palathingal, Xavier P. "A framework for long-term human-robot interaction /." abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1446798.

Abstract:
Thesis (M.S.)--University of Nevada, Reno, 2007.
"May, 2007." Includes bibliographical references (leaves 44-46). Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2007]. 1 microfilm reel ; 35 mm.
15

BAZZANO, FEDERICA. "Human-Machine Interfaces for Service Robotics." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2734314.

16

Bethel, Cindy L. "Robots Without Faces: Non-Verbal Social Human-Robot Interaction." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003097.

17

Khan, Yousuf, and Edier Otalvaro. "Human-Robot Interaction Using Reinforcement Learning and Convolutional Neural Network." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-50918.

Abstract:
Proper interaction is a crucial aspect of team collaboration for successfully achieving a common goal. In recent times, more technically advanced robots have been introduced into industrial environments, sharing the same workspace as other robots and humans, which makes the need for human-robot interaction (HRI) greater than ever before. The purpose of this study is to enable HRI by teaching a robot to classify different human facial expressions as either positive or negative using a convolutional neural network, and to respond to each of them with the help of the reinforcement learning algorithm Q-learning. The simulation showed that the robot could accurately classify and react to the facial expressions under the instructions given by the Q-learning algorithm. The simulated results proved to be consistent in every conducted experiment, with low variances. These results are promising for future research, allowing the study to be conducted in real-life environments.
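
A minimal sketch of the described coupling (the classes, actions and reward scheme are invented for the example; the CNN is assumed to supply the state label):

```python
import random

STATES = ['positive', 'negative']          # CNN output classes
ACTIONS = ['smile', 'wave', 'back_off']
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
alpha, gamma, eps = 0.1, 0.9, 0.1

def q_update(s, a, r, s_next):
    """Standard Q-learning update rule."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next].values()) - Q[s][a])

def pick_action(s):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(Q[s], key=Q[s].get)

# One interaction step: classify the face, act, observe a reward.
state = 'negative'                         # would come from the CNN
action = pick_action(state)
reward = 1.0 if action == 'back_off' else -1.0   # invented reward scheme
q_update(state, action, reward, 'positive')
```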
18

Toris, Russell C. "Bringing Human-Robot Interaction Studies Online via the Robot Management System." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.

Abstract:
"Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to naturally and effectively interact with robots. The importance of conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies often become expensive, time consuming, and limited to constrained demographics. This work presents the Robot Management System, a novel framework for bringing robotic experiments to the web. A detailed description of the open source system, an outline of new security measures, and a use case study of the RMS as a means of conducting user studies is presented. Using a series of navigation and manipulation tasks with a PR2 robot, three user study conditions are compared: users that are co-present with the robot, users that are recruited to the university lab but control the robot from a different room, and remote web-based users. The findings show little statistical differences between usability patterns across these groups, further supporting the use of web-based crowdsourcing techniques for certain types of HRI evaluations."
19

Joshi, Varun. "The Human Walking Controller: Derivation from Experiments and Applications to the Study of Human Structure Interaction." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542978112280872.

20

Morvan, Jérémy. "Understanding and communicating intentions in human-robot interaction." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166445.

Abstract:
This thesis is about the collaboration and interaction between a robot and a human agent. The goal is to use the robot as a coworker, by implementing the premises of an interaction system that makes the interaction as natural as possible. This requires the robot to have a vision system that allows it to understand the intentions of the human. This thesis work is intended to be part of a larger project aimed at extending the competences of the programmable industrial robot Baxter, made by Rethink Robotics. Due to the limited vision abilities of this robot, a Kinect camera is added on top of its head. This thesis covers human gesture recognition from the Kinect data and robot reactions to these gestures through visual feedback and actions.
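
A toy sketch of the kind of gesture recognition described (the wave heuristic and thresholds are invented; a real system would work on full Kinect skeleton streams):

```python
import numpy as np

def is_waving(wrist_y, elbow_y, min_crossings=3):
    """Detect a wave: the wrist repeatedly rises above and falls below
    the elbow across a window of skeleton frames (toy heuristic)."""
    above = np.asarray(wrist_y) > np.asarray(elbow_y)
    crossings = np.count_nonzero(above[1:] != above[:-1])
    return crossings >= min_crossings

wrist = [0.9, 1.2, 0.9, 1.3, 0.8, 1.2]   # metres, from skeleton frames
elbow = [1.0] * 6
print(is_waving(wrist, elbow))           # True
```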
21

Forsslund, Jonas. "Reflective Spatial Haptic Interaction Design Approaching a Designerly Understanding of Spatial Haptics." Licentiate thesis, KTH, Medieteknik och interaktionsdesign, MID, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128609.

Abstract:
With a spatial haptic interface device and a suitable haptic rendering algorithm, users can explore and modify virtual geometries in three dimensions with the aid of their haptic (touch) sense. Designers of surgery simulators, anatomy exploration tools and applications that involve assembly of complex objects should consider employing this technology. However, in order to know how the technology behaves as a design material, the designer needs to become well acquainted with its material properties. This presents a significant challenge today, since the haptic devices are presented as black boxes, and implementation of advanced rendering algorithms represents a highly specialized and time-consuming development activity. In addition, it is difficult to imagine what an interface will feel like until it has been fully implemented, and important design trade-offs such as the virtual object's size and stability get neglected. Traditional user-centered design can be interpreted as saying that the purpose of the field study phase is to generate a set of specifications for an interface, and that only solutions that cover these specifications will be considered in the design phase. The designer might miss opportunities to create solutions that use, for example, lower-cost devices, since that might require reinterpretation of the overarching goal of the situation with a starting point in the technical possibilities, which is unlikely without significant material knowledge. As an example, a surgery simulator designed in this thesis required a high-cost haptic device to render adequate forces on the scale of human teeth, but if the design goal is reinterpreted as creating a tool for learning anatomical differences and surgical steps, an application more suitable for lower-cost haptic devices can be crafted. This solution is as much informed by the haptic material "speaking back to" the designer as by field studies. This licentiate thesis approaches a perspective of spatial haptic interface design that is grounded in contemporary design theory. These theories emphasize the role of the designer, who is not seen as an objective actor but as someone who has a desire to transform a situation into a preferred one as a service to a client or greater society. They also emphasize the need for crafting skills in order to innovate, i.e. to make designed objects real. Further, they consider aesthetic aspects of a design, which include the subtle differences in friction as you move the device handle, and the overall attractiveness of the device and system. The thesis covers a number of design cases, which are related to design theory and reflected upon. Particular focus is placed on the most common class of haptic devices, which can give force feedback in three dimensions and take input in six (position and orientation). Forces are computed and objects deformed by a volume sampling algorithm, which is discussed. Important design properties, such as stiffness, have been identified and exposed as a material for design. A tool for tuning these properties interactively has been developed to assist designers in becoming acquainted with the spatial haptic material and in crafting the material for a particular user experience. Looking forward, the thesis suggests the future work of making spatial haptic interfaces more design-ready, both in software and hardware. This is proposed to be accomplished through the development of toolkits for innovation which encapsulate complexities and expose design parameters. A particular focus will be placed on enabling crafting with the haptic material, whose natural limitations should be seen as suggestions rather than hindrances for creating valuable solutions.
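
For readers unfamiliar with haptic rendering, here is a minimal penalty-based sketch in which stiffness appears as exactly the kind of tunable design property the thesis discusses (the surface model and limits are invented for illustration):

```python
import numpy as np

def render_force(tool_pos, surface_height, stiffness, max_force=8.0):
    """Penalty-based haptic rendering: push the tool out of a horizontal
    virtual surface with a force proportional to penetration depth.
    'stiffness' (N/m) is the design property exposed for tuning."""
    penetration = surface_height - tool_pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                        # no contact
    fz = min(stiffness * penetration, max_force)  # clamp to device limit
    return np.array([0.0, 0.0, fz])

print(render_force(np.array([0.0, 0.0, -0.002]), 0.0, stiffness=1500.0))
# -> [0. 0. 3.] : 2 mm penetration at 1500 N/m gives 3 N upward
```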


22

Wood, David K. "Learning from Gross Motion Observations of Human-Machine Interaction." Thesis, The University of Sydney, 2011. https://hdl.handle.net/2123/29223.

Abstract:
This thesis discusses the problems inherent in the modelling and classification of human interactions with robots using gross motion observations. Contributions to this field are one approach by which robots can be made socially aware, at a low enough cost for the commercialisation of such systems to be viable. In general, it is cheaper and simpler, both in terms of sensing requirements and computational power, to determine the position of a person participating in an interaction than to attempt to perform more advanced operations such as face detection and recognition, gaze tracking or gesture recognition. Being able to perform classification and modelling of human behaviour from gross motion observations is a useful ability for the designers of such HRI systems to have at their disposal. Two contributions are made to the problem of gross motion modelling and classification. The first is an approach to measuring error levels implicit to the models learned in a generative classification scenario. By comparing the results from these model-based error measures to the results obtained from more traditional data-based error measures, an assessment can be made about how well the internal models within the classifier represent the true state of the world. A method is also presented to summarise these comparisons using the symmetric Kullback-Leibler divergence, enabling the rapid analysis of the large numbers of classifiers produced with the application of cross-validation techniques. The second contribution is a taxonomy of feature representations and a set of design rules derived from this taxonomy for the representation of human-robot interaction modelling features. These rules are focussed on gross motion features, but can be extended to cover almost any human-robot interaction modelling or classification task. These two contributions are then demonstrated on interaction data gathered from the Fish-Bird new media artwork. This is a challenging problem due to the interaction parameters being modelled; however, the use of a rigorous design approach and the application of the divergence measures derived earlier in the thesis enable targeted analysis and useful conclusions to be drawn. Results are shown to demonstrate these applications.
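
The summarising measure mentioned above, the symmetric Kullback-Leibler divergence, is straightforward to compute for discrete distributions; a small sketch (the smoothing constant is an assumption):

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two discrete
    distributions: KL(p||q) + KL(q||p)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# E.g. compare error distributions of two cross-validated classifiers.
print(symmetric_kl([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))
```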
23

Topp, Elin Anna. "Initial steps toward human augmented mapping." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4104.

24

Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping." Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.

25

Wagner, Alan Richard. "The role of trust and relationships in human-robot social interaction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31776.

Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2010.
Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
26

Vincent, Thomas. "Handheld augmented reality interaction : spatial relations." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM032/document.

Abstract:
We explored interaction within the context of handheld Augmented Reality (AR), where a handheld device is used as a physical magic lens to 'augment' the physical surrounding. We focused, in particular, on the role of the spatial relations between the on-screen content and the physical surrounding. On the one hand, spatial relations define opportunities for mixing environments, such as the adaptation of the digital augmentation to the user's location. On the other hand, spatial relations involve specific constraints for interaction, such as the impact of hand tremor on on-screen camera image stability. The question is then: how can we relax spatial constraints while maintaining the feeling of digital-physical collocation? Our contribution is three-fold. First, we propose a design space for handheld AR on-screen content with a particular focus on the spatial relations between the different identified frames of reference. This design space defines a framework for systematically studying interaction with handheld AR applications. Second, we propose and evaluate different handheld AR pointing techniques to improve pointing precision. Indeed, with a handheld AR set-up, both touch-screen input and the spatial relations between the on-screen content and the physical surrounding impair the precision of pointing. Third, as part of a collaborative research project involving AIST-Tsukuba and Schneider France and Japan, we developed a toolkit supporting the development of handheld AR applications. The toolkit has been used to develop several demonstrators.
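
The core spatial relation in handheld AR, mapping a point in the physical surrounding to on-screen pixels, can be sketched with a standard pinhole camera model (the pose and intrinsics below are invented):

```python
import numpy as np

def world_to_screen(p_world, R, t, fx, fy, cx, cy):
    """Map a 3-D point from the physical surrounding into on-screen
    pixels: rigid transform into the camera frame, then a pinhole
    projection (the spatial relation handheld AR must maintain)."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:
        return None                      # behind the device camera
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # device 2 m from the point
print(world_to_screen(np.zeros(3), R, t, 525, 525, 320, 240))  # (320.0, 240.0)
```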
27

Nielsen, Curtis W. "Using Augmented Virtuality to Improve Human-Robot Interactions." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1170.pdf.

28

Hiolle, Antoine. "A developmental approach to the study of affective bonds for human-robot interaction." Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/16566.

Abstract:
Robotic agents are meant to play an increasingly large role in our everyday lives. To be successfully integrated in our environment, robots will need to develop and display adaptive, robust, and socially suitable behaviours. To tackle these issues, the robotics research community has invested a considerable amount of effort in modelling robotic architectures inspired by research on living systems, from ethology to developmental psychology. Following a similar approach, this thesis presents the research results of the modelling and experimental testing of robotic architectures based on affective and attachment bonds between young infants and their primary caregiver. I follow a bottom-up approach to the modelling of such bonds, examining how they can promote the situated development of an autonomous robot. Specifically, the models used and the results from the experiments carried out in laboratory settings and with naive users demonstrate the impact such affective bonds have on the learning outcomes of an autonomous robot and on the perception and behaviour of humans. This research leads to an emphasis on the importance of the interplay between the dynamics of the regulatory behaviours performed by a robot and the responsiveness of the human partner. The coupling of such signals and behaviours in an attachment-like dyad determines the nature of the outcomes for the robot, in terms of learning or the satisfaction of other needs. The experiments carried out also demonstrate how the attachment system can help a robot adapt its own social behaviour to that of the human partners, as infants are thought to do during their development.
29

Hornfeck, Kenneth B. "A Customizable Socially Interactive Robot with Wireless Health Monitoring Capability." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1301595272.

30

Senft, Emmanuel. "Teaching robots social autonomy from in situ human supervision." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/13077.

Abstract:
Traditionally, the behaviour of social robots has been programmed. However, increasingly there has been a focus on letting robots learn their behaviour to some extent from example or through trial and error. This, on the one hand, removes the need for programming, but it also allows the robot to adapt to circumstances not foreseen at the time of programming. One such occasion is when the user wants to tailor or fully specify the robot's behaviour. The engineer often has limited knowledge of what the user wants or what the deployment circumstances specifically require. Instead, the user does know what is expected from the robot, and consequently the social robot should be equipped with a mechanism to learn from its user. This work explores how a social robot can learn to interact meaningfully with people in an efficient and safe way by learning from supervision by a human teacher in control of the robot's behaviour. To this end, we propose a new machine learning framework called Supervised Progressively Autonomous Robot Competencies (SPARC). SPARC enables non-technical users to control and teach a robot, and we evaluate its effectiveness in Human-Robot Interaction (HRI). The core idea is that the user initially remotely operates the robot, while an algorithm associates actions to states and gradually learns. Over time, the robot takes over control from the user while still giving the user oversight of the robot's behaviour, by ensuring that every action executed by the robot has been actively or passively approved by the user. This is particularly important in HRI, as interacting with people, and especially vulnerable users, is a complex and multidimensional problem, and any errors by the robot may have negative consequences for the people involved in the interaction. Through the development and evaluation of SPARC, this work contributes to both HRI and Interactive Machine Learning, especially regarding how autonomous agents, such as social robots, can learn from people and how this specific teacher-robot interaction impacts the learning process. We showed that a supervised robot learning from its user can reduce the workload of this person, and that providing the user with the opportunity to control the robot's behaviour substantially improves the teaching process. Finally, this work also demonstrated that a robot supervised by a user could learn rich social behaviours in the real world, in a large multidimensional and multimodal sensitive environment: the robot quickly learned (25 interactions over 4 sessions, lasting on average 1.9 minutes) to tutor children in an educational game, achieving behaviours and educational outcomes similar to those of a robot fully controlled by the user, with both providing a 10 to 30% improvement in game metrics compared to a passive robot.
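
A skeleton of the suggest/approve loop that SPARC-style supervision implies (the policy and user-interface classes are stubs invented for illustration, not the thesis code):

```python
class Policy:
    """Stand-in for the learned state-action mapping (illustrative stub)."""
    def suggest(self, state):
        return 'encourage-child'
    def learn(self, state, action):
        pass                              # reinforce the approved pair

class Supervisor:
    """Stand-in for the teacher's interface (illustrative stub)."""
    def review(self, proposed, timeout_s):
        return None                       # None = passive approval

def sparc_step(state, policy, user, timeout_s=3.0):
    """One SPARC-style step: the robot proposes an action, the supervisor
    may veto or substitute it within a time window, and the approved
    (state, action) pair is fed back to the learner."""
    proposed = policy.suggest(state)
    decision = user.review(proposed, timeout_s)
    action = decision if decision is not None else proposed
    policy.learn(state, action)           # every executed action is approved
    return action

print(sparc_step('child-distracted', Policy(), Supervisor()))
```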
APA, Harvard, Vancouver, ISO, and other styles
31

Bartholomew, Paul D. "Optimal behavior composition for robotics." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51872.

Full text
Abstract:
The development of a humanoid robot that mimics human motion requires extensive programming as well as understanding the motion limitations of the robot. Programming the countless possibilities for a robot’s response to observed human motion can be time consuming. To simplify this process, this thesis presents a new approach for mimicking captured human motion data through the development of a composition routine. This routine is built upon a behavior-based framework and is coupled with optimization by calculus to determine the appropriate weightings of predetermined motion behaviors. The completion of this thesis helps to fill a void in human/robot interactions involving mimicry and behavior-based design. Technological advancements in the way computers and robots identify human motion and determine for themselves how to approximate that motion have helped make possible the mimicry of observed human subjects. In fact, many researchers have developed humanoid systems that are capable of mimicking human motion data; however, these systems do not use behavior-based design. This thesis will explain the framework and theory behind our optimal behavior composition algorithm and the selection of sinusoidal motion primitives that make up a behavior library. This algorithm breaks captured motion data into various time intervals, then optimally weights the defined behaviors to best approximate the captured data. Since this routine does not reference previous or following motion sequences, discontinuities may exist between time intervals. To address this issue, the addition of a PI controller to regulate and smooth out the transitions between time intervals will be shown. The effectiveness of using the optimal behavior composition algorithm to create an approximated motion that mimics capture motion data will be demonstrated through an example configuration of hardware and a humanoid robot platform. An example of arm motion mimicry will be presented and includes various image sequences from the mimicry as well as trajectories containing the joint positions for both the human and the robot.
APA, Harvard, Vancouver, ISO, and other styles
32

Cardenas, Irvin Steve. "Blockchain, Smart Contracts and Cryptocurrencies in Robotics: \\Use Cases, Economics, and Human-Robot Interaction." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1608314228745536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Podevijn, Gaetan. "Effects of the Interaction with Robot Swarms on the Human Psychological State." Doctoral thesis, Universite Libre de Bruxelles, 2017. https://dipot.ulb.ac.be/dspace/bitstream/2013/245391/5/ConratGP.pdf.

Full text
Abstract:
Human-swarm interaction studies how human beings can interact with a robotswarm---a large number of robots cooperating with each other without any form of centralizedcontrol. In today's human-swarm interaction literature, the large majority of the works investigatehow human beings can issue commands to and receive feedback from a robot swarm. However, only a few ofthese works study the effect of the interaction with a robot swarm on human psychology (e.g. on thehuman stress or on the human workload). Understanding human psychology in human-swarm interaction isimportant because the human psychological state can have significant impact on the way humansinteract with robot swarms (e.g. a high level of stress can cause a human operator to freeze in themiddle of a critical task, such as a search-and-rescue task). Most existing works that study human psychology in human-swarm interaction conduct their experimentsusing robot swarms simulated on a computer screen. The use of simulation is convenient becauseexperimental conditions can be repeated perfectly in different experimental runs and becauseexperimentation using real robots is expensive both in money and time. However, simulation suffersfrom the so-called reality gap: the inherent discrepancy between simulation and reality. Itis therefore important to study whether this inherent discrepancy can affect humanpsychology---human operators interacting with a simulated robot swarm can react differently thanwhen interacting with a real robot swarm.A large literature in human-robot interaction has studied the psychological impact of theinteraction between human beings and single robots. This literature could in principle be highlyrelevant to human-swarm interaction. However, an inherent difference between human-robot interactionand human-swarm interaction is that in the latter, human operators interact with a large number ofrobots. This large number of robots can affect human psychology---human operators interacting with alarge number of robots can react differently than when interacting with a single robot or with asmall number of robots. It is therefore important to understand whether the large number of robotsthat composes a robot swarm affects human psychology. In fact, if this is the case, it would not bepossible to directly apply the results of human-robot interaction research to human-swarminteraction.We conducted several experiments in order to understand the effect of the reality gap and the effectof the group size (i.e. the number of robots that composes a robot swarm) on the humanpsychological state. In these experiments our participants are exposed to swarms of robots and arepurely passive---they do not issue commands nor receive feedback from the robots. Making theinteraction passive allowed us to study the effects of the reality gap and of the group size on thehuman psychological state without the risk that an interaction interface (such as a joystick)influences the psychological responses of the participants (and thus limiting the visibility of both thereality gap and group size effects). In the reality gap experiments, participants are exposed tosimulated robot swarms displayed either on a computer screen or in a virtual reality environment, and toreal robot swarms. 
In the group size experiments, participants are exposed to an increasing numberof real robots.In this thesis, we show that the reality gap and the group size affect the human psychological stateby collecting psychophysiological measures (heart rate and skin conductance), self-reported (viaquestionnaires) affective state measures (arousal and valence), self-reported workload (the amountof mental resource needed to carry out a task) and reaction time (the time needed to respond to astimulus). Firstly, we show with our results that our participants' psychophysiological measures,affective state measures, workload and reaction time are significantly higher when they interactwith a real robot swarm compared to when they interact with a robot swarm simulated on a computerscreen, confirming that the reality gap significantly affects the human psychological state.Moreover, we show that it is possible to mitigate the effect of the reality gap using virtualreality---our participants' arousal, workload and reaction time are significantly higher when theyinteract with a simulated robot swarm displayed in a virtual reality environment as opposed to whenit is displayed on a computer screen. Secondly, we show that our participants' psychophysiologicalmeasures and affective state measures increase when the number of robots they are exposed toincreases. Our results have important implications for research in human-swarm interaction. Firstly, for thefirst time, we show that experiments in simulation change the human psychological state compared toexperiments with real robots. Secondly, we show that a characteristic that is inherent to thedefinition of swarm robotics---the large number of robots that composes a robotswarm---significantly affects the human psychological state. Finally, our results show thatpsychophysiological measures, such as heart rate and skin conductance, provide researchers with moreinformation on human psychology than the information provided by using traditional self-reportedmeasures (collected via psychological questionnaires).
Doctorat en Sciences de l'ingénieur et technologie
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
34

Sung, Ja-Young. "Towards the human-centered design of everyday robots." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39539.

Full text
Abstract:
The recent advancement of robotic technology brings robots closer to assisting us in our everyday spaces, providing support for healthcare, cleaning, entertaining and other tasks. In this dissertation, I refer to these robots as everyday robots. Scholars argue that the key to successful human acceptance lies in the design of robots that have the ability to blend into everyday activities. A challenge remains; robots are an autonomous technology that triggers multi-faceted interactions: physical, intellectual, social and emotional, making their presence visible and even obtrusive. These challenges need more than technological advances to be resolved; more human-centered approaches are required in the design. However to date, little is known about how to support that human-centered design of everyday robots. In this thesis, I address this gap by introducing an initial set of design guidelines for everyday robots. These guidelines are based on four empirical studies undertaken to identify how people live with robots in the home. These studies mine insights about what interaction attributes of everyday robots elicit positive or negative user responses. The guidelines were deployed in the development of one type of everyday robot: a senior-care robot called HomeMate. It shows that the guidelines become useful during the early development process by helping designers and robot engineers to focus on how social and emotional values of end-users influence the design of the technical functions required. Overall, this thesis addresses a question how we can support the design of everyday robots to become more accepted by users. I respond to this question by proposing a set of design guidelines that account for lived experiences of robots in the home, which ultimately can improve the adoption and use of everyday robots.
APA, Harvard, Vancouver, ISO, and other styles
35

Moshkina, Lilia V. "An integrative framework of time-varying affective robotic behavior." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39568.

Full text
Abstract:
As robots become more and more prevalent in our everyday life, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To the end of promoting more natural, satisfying and effective human-robot interaction and enhancing robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses 4 different yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform-independence and applicability to other domains. In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of Negative Mood and emotion of Fear was assessed in a mock-up search-and-rescue scenario, where the participants found the robot expressing affect more compelling, sincere, convincing and "conscious" than its non-affective counterpart. Another study showed that different robotic personalities are better suited for different tasks: an extraverted robot was found to be more welcoming and fun for a task as a museum robot guide, where an engaging and gregarious demeanor was expected; whereas an introverted robot was rated as more appropriate for a problem solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected to making unobtrusive partners for problem solving tasks to saving people's lives in dangerous situations.
APA, Harvard, Vancouver, ISO, and other styles
36

Ward, James L. "A Comparison of Fuzzy Logic Spatial Relationship Methods for Human Robot Interaction." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-12172008-125840/.

Full text
Abstract:
As the science of robotics advances, robots are interacting with people more frequently. Robots are appearing in our houses and places of work acting as assistants in many capacities. One aspect of this interaction is determining spatial relationships between objects. People and robots simply can not communicate effectively without references to the physical world and how those objects relate to each other. In this research fuzzy logic is used to help determine the spatial relationships between objects as fuzzy logic lends itself to the inherent imprecision of spatial relationships. Objects are rarely absolutely in front of or to the right of another, especially when dealing with multiple objects. This research compares three methods of fuzzy logic, the angle aggregation method, the centroid method and the histogram of angles â composition method. First we use a robot to gather real world data on the geometries between objects, and then we adapt the fuzzy logic techniques for the geometry between objects from the robot's perspective which is then used on the generated robot data. Last we perform an in depth analysis comparing the three techniques with the human survey data to determine which may predict spatial relationships most accurately under these conditions as a human would. Previous research mainly focused on determining spatial relationships from an allocentric, or bird's eye view, where here we apply some of the same techniques to determine spatial relationships from an egocentric, or observer's point of view.
APA, Harvard, Vancouver, ISO, and other styles
37

Iqbal, Muhammad Zubair. "Design of Soft Rigid Devices for Assistive Robotics and Industrial Applications." Doctoral thesis, Università di Siena, 2021. http://hdl.handle.net/11365/1152251.

Full text
Abstract:
Soft robots are getting more and more popular in rehabilitation and industrial scenarios. They often come into play where the rigid robots fail to perform certain functions. The advantage of using soft robots lies in the fact that they can easily conform to the obstacles and depict delicacy in gripping, manipulating, and controlling deformable and fragile objects without causing them any harm. In rehabilitation scenarios, devices developed on the concept of soft robots are pretty helpful in changing the lives of those who suffer body impairments due to stroke or any other accident. These devices provide support in carrying out daily life activities without the need and support of another person. Also, these devices are beneficial in the training phase where the patient is going through the rehabilitation phase and has to do multiple exercises of the upper limb, wrist, or hand. Similarly, the grippers developed on the basic principle of soft robots are very common in the industries or at least getting common. Their advantages are a lot as compared to the rigid robotics manipulators. Soft grippers tend to adapt to the shape of the object without causing any damage to it, providing a stable grasp. It can also help reduce the complexity in the design and development, for example, underactuated. Underactuated grippers use the minimum number of actuators to provide the same function that requires more actuators with a rigid gripper. Also, the soft structure allows to design specific trajectories to complete a certain grasping and manipulation task. This thesis presents devices for rehabilitation and assistive application to help people with upper limb impairment, especially wrist and hand functions. These devices have been designed to provide the people, with limited capabilities of hand and wrist functions, to live their lives with ease without being dependent on any other family member. Similarly, I present different soft grippers and a soft environment that provides different advantages and can do various grasp and manipulation tasks. I have presented results for each device, rehabilitation and assistive devices are used by a patient suffering from stroke and having limited movement of wrist and hand function. At the same time, the grippers are supported with a set of experiments that provide deep insight into the advantages of each gripper in industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Delaunay, Frédéric C. "A retro-projected robotic head for social human-robot interaction." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/4871.

Full text
Abstract:
As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, a R-PAF demonstrator and experimental platform, has demonstrated robustness both in extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations present the first study on human performance in reading robotic gaze and another first on user’s ethnic preference towards a robot face.
APA, Harvard, Vancouver, ISO, and other styles
39

Ross, Jennifer. "MODERATORS OF TRUST AND RELIANCE ACROSS MULTIPLE DECISION AIDS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3975.

Full text
Abstract:
The present work examines whether user's trust of and reliance on automation, were affected by the manipulations of user's perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, by either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. However, when users rely on automation heavily they often fail to monitor the system effectively (i.e., they lose situation awareness – a form of misuse). However, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would impact trust and reliance levels in a companion better aid, but that this relationship is dependent upon the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance and trust on automated teammates depending on the teammate's actual reliability levels. However, as hypothesized there was a biasing effect among mixed-reliability aids for trust and reliance. That is, when operators worked with two agents of mixed-reliability, their perception of how reliable and to what degree they relied on the aid was effected by the reliability of a current aid. Additionally, the magnitude and direction of how trust and reliance were biased was contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robot agents). Finally, the type of agent an operator believed they were operating with significantly impacted their temporal reliance (i.e., reliance following an automation failure). Such that, operators were less likely to agree with a recommendation from a human teammate, after that teammate had made an obvious error, than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding how the aforementioned factors affect users' trust in automation so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications to many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
Ph.D.
Department of Psychology
Sciences
Psychology PhD
APA, Harvard, Vancouver, ISO, and other styles
40

Förster, Frank. "Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/20781.

Full text
Abstract:
Negation is a part of language that humans engage in pretty much from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of ‘concepts’ could be acquired in a meaningful way by a humanoid robot based solely on the unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker. Therefore we developed a behavioural architecture which contains a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol-grounding but also leads to the expression of the robot’s motivational state via a facial display of emotions and motivationally congruent body behaviours. In order to achieve the grounding of negative words we will examine two different mechanisms which provide an alternative to the established grounding via ostension with or without joint attention. Two large experiments were conducted to test these two mechanisms. One of these mechanisms is so called negative intent interpretation, the other one is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but have never been used in human-robot-interaction for the purpose of symbol grounding. As we will show, both mechanisms may operate simultaneously and we can exclude none of them as potential ontogenetic origin of negation.
APA, Harvard, Vancouver, ISO, and other styles
41

Brooks, Douglas A. "Towards quantifying upper-arm rehabilitation metrics for children through interaction with a humanoid robot." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/48970.

Full text
Abstract:
The objective of this research effort is to further rehabilitation techniques for children by developing and validating the core technologies needed to integrate therapy instruction with child-robot play interaction in order to improve upper-arm rehabilitation. Using computer vision techniques such as Motion History Imaging (MHI), Multimodal Mean, edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. Also incorporating three-dimensional data obtained via an infrared projector coupled with a Principle Component Analysis (PCA), depth information can be utilized to create a robust algorithm. Finally, utilizing prior knowledge regarding exercise data, physical therapeutic metrics, and novel approaches, a mapping to therapist instructions can be created allowing robotic feedback and intelligent interaction.
APA, Harvard, Vancouver, ISO, and other styles
42

Modi, Kalpesh Prakash. "Vision application of human robot interaction : development of a ping pong playing robotic arm /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Puehn, Christian G. "Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427889195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Krum, David Michael. "Wearable Computers and Spatial Cognition." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4784.

Full text
Abstract:
Human beings live and work in large and complex environments. It is often difficult for individuals to perceive and understand the structure of these environments. However, the formation of an accurate and reliable cognitive map, a mental model of the environment, is vital for optimal navigation and coordination. The need to develop a reliable cognitive map is common to the average individual as well as workers with more specialized tasks, for example, law enforcement or military personnel who must quickly learn to operate in a new area. In this dissertation, I propose the use of a wearable computer as a platform for a spatial cognition aid. This spatial cognition aid uses terrain visualization software, GPS positioning, orientation sensors, and an eyeglass mounted display to provide an overview of the surrounding environment. While there are a number of similar mobile or wearable computer systems that function as tourist guides, navigation aids, and surveying tools, there are few examples of spatial cognition aids. I present an architecture for the wearable computer based spatial cognition aid using a relationship mediation model for wearable computer applications. The relationship mediation model identifies and describes the user relationships in which a wearable computer can participate and mediate. The dissertation focuses on how the wearable computer mediates the users perception of the environment. Other components such as interaction techniques and a scalable system of servers for distributing spatial information are also discussed. Several user studies were performed to determine an effective presentation for the spatial cognition aid. Participants were led through an outdoor environment while using different presentations on a wearable computer. The spatial learning of the participants was compared. These studies demonstrated that a wearable computer can be an effective spatial cognition aid. However, factors such as such as mental rotation, cognitive load, distraction, and divided attention must be taken into account when presenting spatial information to a wearable computer user.
APA, Harvard, Vancouver, ISO, and other styles
45

Pendleton, Brian O. "Human-Swarm Interaction: Effects on Operator Workload, Scale, and Swarm Topology." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3999.

Full text
Abstract:
Robots, including UAVs, have found increasing use in helping humans with dangerous and difficult tasks. The number of robots in use is increasing and is likely to continue increasing in the future. As the number of robots increases, human operators will need to coordinate and control the actions of large teams of robots. While multi-robot supervisory control has been widely studied, it requires that an operator divide his or her attention between robots. Consequently, the use of multi-robot supervisory control is limited by the number of robots that a human or team of humans can reasonably control. Swarm robotics -- large numbers of low-cost robots displaying collective behaviors -- offers an alternative approach by providing the operator with a small set of inputs and parameters that alter the behavior of a large number of autonomous or semi-autonomous robots. Researchers have asserted that this approach is more scalable and offers greater promise for managing huge numbers of robots. The emerging field of Human-Swarm Interaction (HSI) deals with the effective management of swarms by human operators. In this thesis we offer foundational work on the effect of HSI (a) on the individual robots, (b) on the group as a whole, and (c) on the workload of the human operator. We (1) show that existing general swarm algorithms are feasible on existing robots and can display collective behaviors as shown in simulations in the literature, (2) analyze the effect of interaction style and neighborhood type on the swarm's topology, (3) demonstrate that operator workload stays stable as the size of the swarm increases, but (4) find that operator workload is influenced by the interaction style. We also present considerations for swarm deployment on real robots.
APA, Harvard, Vancouver, ISO, and other styles
46

Cakmak, Maya. "Guided teaching interactions with robots: embodied queries and teaching heuristics." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44734.

Full text
Abstract:
The vision of personal robot assistants continues to become more realistic with technological advances in robotics. The increase in the capabilities of robots, presents boundless opportunities for them to perform useful tasks for humans. However, it is not feasible for engineers to program robots for all possible uses. Instead, we envision general-purpose robots that can be programmed by their end-users. Learning from Demonstration (LfD), is an approach that allows users to program new capabilities on a robot by demonstrating what is required from the robot. Although LfD has become an established area of Robotics, many challenges remain in making it effective and intuitive for naive users. This thesis contributes to addressing these challenges in several ways. First, the problems that occur in teaching-learning interactions between humans and robots are characterized through human-subject experiments in three different domains. To address these problems, two mechanisms for guiding human teachers in their interactions are developed: embodied queries and teaching heuristics. Embodied queries, inspired from Active Learning queries, are questions asked by the robot so as to steer the teacher towards providing more informative demonstrations. They leverage the robot's embodiment to physically manipulate the environment and to communicate the question. Two technical contributions are made in developing embodied queries. The first is Active Keyframe-based LfD -- a framework for learning human-segmented skills in continuous action spaces and producing four different types of embodied queries to improve learned skills. The second is Intermittently-Active Learning in which a learner makes queries selectively, so as to create balanced interactions with the benefits of fully-active learning. Empirical findings from five experiments with human subjects are presented. These identify interaction-related issues in generating embodied queries, characterize human question asking, and evaluate implementations of Intermittently-Active Learning and Active Keyframe-based LfD on the humanoid robot Simon. The second mechanism, teaching heuristics, is a set of instructions given to human teachers in order to elicit more informative demonstrations from them. Such instructions are devised based on an understanding of what constitutes an optimal teacher for a given learner, with techniques grounded in Algorithmic Teaching. The utility of teaching heuristics is empirically demonstrated through six human-subject experiments, that involve teaching different concepts or tasks to a virtual agent, or teaching skills to Simon. With a diverse set of human subject experiments, this thesis demonstrates the necessity for guiding humans in teaching interactions with robots, and verifies the utility of two proposed mechanisms in improving sample efficiency and final performance, while enhancing the user interaction.
APA, Harvard, Vancouver, ISO, and other styles
47

Patacchiola, Massimiliano. "A developmental model of trust in humanoid robots." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/12828.

Full text
Abstract:
Trust between humans and artificial systems has recently received increased attention due to the widespread use of autonomous systems in our society. In this context trust plays a dual role. On the one hand it is necessary to build robots that are perceived as trustworthy by humans. On the other hand we need to give to those robots the ability to discriminate between reliable and unreliable informants. This thesis focused on the second problem, presenting an interdisciplinary investigation of trust, in particular a computational model based on neuroscientific and psychological assumptions. First of all, the use of Bayesian networks for modelling causal relationships was investigated. This approach follows the well known theory-theory framework of the Theory of Mind (ToM) and an established line of research based on the Bayesian description of mental processes. Next, the role of gaze in human-robot interaction has been investigated. The results of this research were used to design a head pose estimation system based on Convolutional Neural Networks. The system can be used in robotic platforms to facilitate joint attention tasks and enhance trust. Finally, everything was integrated into a structured cognitive architecture. The architecture is based on an actor-critic reinforcement learning framework and an intrinsic motivation feedback given by a Bayesian network. In order to evaluate the model, the architecture was embodied in the iCub humanoid robot and used to replicate a developmental experiment. The model provides a plausible description of children's reasoning that sheds some light on the underlying mechanism involved in trust-based learning. In the last part of the thesis the contribution of human-robot interaction research is discussed, with the aim of understanding the factors that influence the establishment of trust during joint tasks. Overall, this thesis provides a computational model of trust that takes into account the development of cognitive abilities in children, with a particular emphasis on the ToM and the underlying neural dynamics.
APA, Harvard, Vancouver, ISO, and other styles
48

Read, Robin. "A study of non-linguistic utterances for social human-robot interaction." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3028.

Full text
Abstract:
The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines, rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have also. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a very rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to more unique ways such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication which has a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to these. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact on the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs. The morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next it is shown that people readily project affective meaning onto NLUs though not in a coherent manner. Furthermore, people's affective inferences are not subtle, rather they are drawn to well established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made, overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement to natural language (as they are by R2D2), rather, people show a preference for them being used alongside natural language where they can play a supportive role by providing essential social cues.
APA, Harvard, Vancouver, ISO, and other styles
49

Cross, E. Vincent Gilbert Juan E. "Human coordination of robot teams an empirical study of multimodal interface design /." Auburn, Ala, 2009. http://hdl.handle.net/10415/1701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wall, Steven A. "An investigation of temporal and spatial limitations of haptic devices." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography