Selected scientific literature on the topic "Human-robot interaction"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Human-robot interaction".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Journal articles on the topic "Human-robot interaction"
Takamatsu, Jun. "Human-Robot Interaction". Journal of the Robotics Society of Japan 37, no. 4 (2019): 293–96. http://dx.doi.org/10.7210/jrsj.37.293.
Jia, Yunyi, Biao Zhang, Miao Li, Brady King, and Ali Meghdari. "Human-Robot Interaction". Journal of Robotics 2018 (October 1, 2018): 1–2. http://dx.doi.org/10.1155/2018/3879547.
Murphy, Robin, Tatsuya Nomura, Aude Billard, and Jennifer Burke. "Human–Robot Interaction". IEEE Robotics & Automation Magazine 17, no. 2 (June 2010): 85–89. http://dx.doi.org/10.1109/mra.2010.936953.
Sethumadhavan, Arathi. "Human-Robot Interaction". Ergonomics in Design: The Quarterly of Human Factors Applications 20, no. 3 (July 2012): 27–28. http://dx.doi.org/10.1177/1064804612449796.
Sheridan, Thomas B. "Human–Robot Interaction". Human Factors: The Journal of the Human Factors and Ergonomics Society 58, no. 4 (April 20, 2016): 525–32. http://dx.doi.org/10.1177/0018720816644364.
Jones, Keith S., and Elizabeth A. Schmidlin. "Human-Robot Interaction". Reviews of Human Factors and Ergonomics 7, no. 1 (August 25, 2011): 100–148. http://dx.doi.org/10.1177/1557234x11410388.
Thomaz, Andrea, Guy Hoffman, and Maya Cakmak. "Computational Human-Robot Interaction". Foundations and Trends in Robotics 4, no. 2–3 (2016): 104–223. http://dx.doi.org/10.1561/2300000049.
Karniel, Amir, Angelika Peer, Opher Donchin, Ferdinando A. Mussa-Ivaldi, and Gerald E. Loeb. "Haptic Human-Robot Interaction". IEEE Transactions on Haptics 5, no. 3 (2012): 193–95. http://dx.doi.org/10.1109/toh.2012.47.
Pook, Polly K., and Dana H. Ballard. "Deictic human/robot interaction". Robotics and Autonomous Systems 18, no. 1–2 (July 1996): 259–69. http://dx.doi.org/10.1016/0921-8890(95)00080-1.
Young, James E., JaYoung Sung, Amy Voida, Ehud Sharlin, Takeo Igarashi, Henrik I. Christensen, and Rebecca E. Grinter. "Evaluating Human-Robot Interaction". International Journal of Social Robotics 3, no. 1 (October 1, 2010): 53–67. http://dx.doi.org/10.1007/s12369-010-0081-8.
Theses / dissertations on the topic "Human-robot interaction"
Kruse, Thibault. "Planning for human robot interaction". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. In human-robot interaction, however, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria: it must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is to consider not merely the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, it provides local replanning of transport actions based on navigational costs, to produce opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
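The flavor of a directional cost function for social navigation can be sketched with a toy cost term that inflates the usual distance-based penalty in front of a person. This is a generic illustration, not the thesis's actual formulation; the parameters `w_front` and `sigma` are invented.

```python
import math

def directional_cost(robot_xy, human_xy, human_heading, w_front=2.0, sigma=1.5):
    """Toy social cost around a person: a distance Gaussian inflated in front of them."""
    dx = robot_xy[0] - human_xy[0]
    dy = robot_xy[1] - human_xy[1]
    dist = math.hypot(dx, dy)
    # How much the robot lies in the human's facing direction (1 = directly in front).
    frontness = max(0.0, math.cos(math.atan2(dy, dx) - human_heading))
    return (1.0 + w_front * frontness) * math.exp(-dist**2 / (2.0 * sigma**2))
```

A planner summing such terms over candidate poses would then prefer passing beside or behind a person rather than cutting across their path.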
Bodiroža, Saša. "Gestures in human-robot interaction". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, and in human-machine interaction in general, as a way for a robot or a machine to infer a meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features.
Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
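Dynamic-time-warping-based one-shot gesture classification of the kind this abstract describes can be sketched in a few lines. This is a generic minimal illustration (1-D trajectories, a single stored template per class), not the thesis's actual algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, and deletion, as in the classic recurrence.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

def classify(query, templates):
    """One-shot classification: a single stored template per gesture class."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW aligns sequences of different lengths and speeds, a single recorded example per gesture is often enough for a usable nearest-template classifier, which is what makes the one-shot setting practical.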
Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be overcome before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information across multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
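The knowledge-based fusion described above is richer than any single formula, but the core benefit of weighting modalities by reliability can be shown with a generic log-linear opinion pool. The function, weights, and class posteriors below are invented for illustration and are not the thesis's framework.

```python
import numpy as np

def fuse_modalities(posteriors, reliabilities):
    """Reliability-weighted log-linear pooling of per-modality class posteriors."""
    log_p = sum(w * np.log(np.asarray(p, dtype=float) + 1e-9)
                for p, w in zip(posteriors, reliabilities))
    p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
    return p / p.sum()

# Gesture channel favors class 0, face channel favors class 1;
# the more reliable face channel dominates the fused estimate.
gesture = [0.6, 0.4]
face = [0.2, 0.8]
fused = fuse_modalities([gesture, face], [0.3, 0.9])
```

When two modalities conflict, as here, the pooled posterior follows the channel judged more reliable rather than discarding either observation outright.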
Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming". Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.
Robot colleague project
Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping". Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.
Huang, Chien-Ming. "Joint attention in human-robot interaction". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.
Bremner, Paul. "Conversational gestures in human-robot interaction". Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.
Fiore, Michelangelo. "Decision Making in Human-Robot Interaction". Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.
There has been increasing interest in recent years in robots that are able to cooperate with humans not merely as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for human-robot interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving a brief overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems.
In the observer problem, the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal.
In the coworker problem, the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand whether they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the robot's partner's activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables such as the human's commitment to the task and to react accordingly, splitting the decision process into different levels. We present an example of a Collaborative Planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component.
In the robot teacher problem, we explain how we can adapt the system's plan explanation and monitoring to users' knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going more in-depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity.
Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help. This system was integrated with other components to deploy a robot in Amsterdam's Schiphol Airport to guide groups of passengers to their flight gates. We performed user studies both in a laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
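Decision making over an estimated human state, as in the MDP-based frameworks mentioned in this abstract, can be illustrated with a tiny fully observable example. The two-state handover MDP below, including all transition probabilities and rewards, is an invented toy, not the thesis's model.

```python
import numpy as np

# Toy two-state MDP for a handover decision: the robot either waits or hands
# the object over, given whether the human is distracted (0) or engaged (1).
P = {  # P[a][s, s']: transition probabilities
    "wait":     np.array([[0.7, 0.3], [0.2, 0.8]]),
    "handover": np.array([[0.9, 0.1], [0.1, 0.9]]),
}
R = {  # R[a][s]: immediate reward; handing over to a distracted human is penalized
    "wait":     np.array([0.0, 0.0]),
    "handover": np.array([-1.0, 2.0]),
}

def value_iteration(gamma=0.9, eps=1e-6):
    """Standard value iteration; returns state values and a greedy policy."""
    V = np.zeros(2)
    while True:
        Q = {a: R[a] + gamma * P[a] @ V for a in P}
        V_new = np.maximum.reduce([Q[a] for a in P])
        if np.max(np.abs(V_new - V)) < eps:
            policy = {s: max(P, key=lambda a: Q[a][s]) for s in range(2)}
            return V_new, policy
        V = V_new

V, policy = value_iteration()  # wait while distracted, hand over when engaged
```

The MOMDP setting in the thesis additionally treats variables like the human's commitment as only partially observable, but the plan-versus-act trade-off has the same shape as in this fully observable sketch.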
Alanenpää, Madelene. "Gaze detection in human-robot interaction". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.
Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer". Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction: an object transfer task. Firstly, this work presents a review of current research within the field of human-robot interaction, where two approaches are distinguished, but simultaneously required: pre-contact approximation and interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) controlling the robot to perform the movements inherent in the transfer assignment, (2) human-robot pre-interaction, and (3) interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements, and extensions are required to improve the solution's performance and range.
Books on the topic "Human-robot interaction"
Jost, Céline, Brigitte Le Pévédic, Tony Belpaeme, Cindy Bethel, Dimitrios Chrysostomou, Nigel Crook, Marine Grandgeorge e Nicole Mirnig, eds. Human-Robot Interaction. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42307-0.
Rahimi, Mansour, and Waldemar Karwowski, eds. Human-Robot Interaction. London: Taylor & Francis, 1992.
Prassler, Erwin, Gisbert Lawitzky, Andreas Stopp, Gerhard Grunwald, Martin Hägele, Rüdiger Dillmann, and Ioannis Iossifidis, eds. Advances in Human-Robot Interaction. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b97960.
Goodrich, Michael A. Human-Robot Interaction: A Survey. Hanover: Now Publishers, 2007.
Xing, Bo, and Tshilidzi Marwala. Smart Maintenance for Human–Robot Interaction. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-67480-3.
Ayanoğlu, Hande, and Emília Duarte, eds. Emotional Design in Human-Robot Interaction. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-96722-6.
Dautenhahn, Kerstin, and Joe Saunders, eds. New Frontiers in Human–Robot Interaction. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/ais.2.
Wang, Xiangyu, ed. Mixed Reality and Human-Robot Interaction. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-0582-1.
Wang, Xiangyu. Mixed Reality and Human-Robot Interaction. Dordrecht: Springer Science+Business Media B.V., 2011.
Shiomi, Masahiro, and Hidenobu Sumioka. Social Touch in Human–Robot Interaction. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003384274.
Book chapters on the topic "Human-robot interaction"
Sidobre, Daniel, Xavier Broquère, Jim Mainprice, Ernesto Burattini, Alberto Finzi, Silvia Rossi, and Mariacarla Staffa. "Human–Robot Interaction". In Springer Tracts in Advanced Robotics, 123–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29041-1_3.
Billard, Aude, and Daniel Grollman. "Human-Robot Interaction". In Encyclopedia of the Sciences of Learning, 1474–76. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_760.
Ohnishi, Kouhei. "Human–Robot Interaction". In Mechatronics and Robotics, 255–64. Boca Raton: CRC Press, 2020. http://dx.doi.org/10.1201/9780429347474-12.
Ayanoğlu, Hande, and João S. Sequeira. "Human-Robot Interaction". In Human–Computer Interaction Series, 39–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-96722-6_3.
Feil-Seifer, David, and Maja J. Matarić. "Human-Robot Interaction". In Encyclopedia of Complexity and Systems Science, 1–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-27737-5_274-5.
Edwards, Autumn. "Human–Robot Interaction". In The Sage Handbook of Human–Machine Communication, 167–77. London: SAGE Publications Ltd, 2023. http://dx.doi.org/10.4135/9781529782783.n21.
Esterwood, Connor, Qiaoning Zhang, X. Jessie Yang, and Lionel P. Robert. "Human–Robot Interaction". In Human-Computer Interaction in Intelligent Environments, 305–32. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003490685-10.
Pirlet, André. "The Role of Standardization in Technical Regulations". In Human–Robot Interaction, 1–8. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-1.
Takács, Árpád, Imre J. Rudas, and Tamás Haidegger. "The Other End of Human–Robot Interaction". In Human–Robot Interaction, 137–70. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-10.
Lőrincz, Márton. "Passive Bilateral Teleoperation with Safety Considerations". In Human–Robot Interaction, 171–86. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781315213781-11.
Conference papers on the topic "Human-robot interaction"
Billings, Deborah R., Kristin E. Schaefer, Jessie Y. C. Chen, and Peter A. Hancock. "Human-robot interaction". In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2157689.2157709.
"Human robot interaction". In 2016 9th International Conference on Human System Interactions (HSI). IEEE, 2016. http://dx.doi.org/10.1109/hsi.2016.7529627.
St-Onge, David, Nicolas Reeves, and Nataliya Petkova. "Robot-Human Interaction". In HRI '17: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3029798.3034785.
Han, Ji, Gopika Ajaykumar, Ze Li, and Chien-Ming Huang. "Structuring Human-Robot Interactions via Interaction Conventions". In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2020. http://dx.doi.org/10.1109/ro-man47096.2020.9223468.
"Human, Robot and Interaction". In 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS). IEEE, 2019. http://dx.doi.org/10.1109/icphys.2019.8780335.
Sandygulova, Anara, Abraham G. Campbell, Mauro Dragone, and G. M. P. O'Hare. "Immersive human-robot interaction". In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2157689.2157768.
Budkov, V. Yu, M. V. Prischepa, A. L. Ronzhin, and A. A. Karpov. "Multimodal human-robot interaction". In 2010 International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT 2010). IEEE, 2010. http://dx.doi.org/10.1109/icumt.2010.5676593.
Scimeca, Luca, Fumiya Iida, Perla Maiolino, and Thrishantha Nanayakkara. "Human-Robot Medical Interaction". In HRI '20: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3371382.3374847.
"Physical Human-Robot Interaction". In 2019 IEEE International Conference on Mechatronics (ICM). IEEE, 2019. http://dx.doi.org/10.1109/icmech.2019.8722848.
Bartneck, Christoph, and Jodi Forlizzi. "Shaping human-robot interaction". In CHI '04 Extended Abstracts on Human Factors in Computing Systems. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/985921.986205.
Organization reports on the topic "Human-robot interaction"
Arkin, Ronald C., and Lilia Moshkina. Affect in Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, January 2014. http://dx.doi.org/10.21236/ada593747.
Martinson, E., and W. Lawson. Learning Speaker Recognition Models through Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, May 2011. http://dx.doi.org/10.21236/ada550036.
Manring, Levi H., John Monroe Pederson, and Dillon Gabriel Potts. Improving Human-Robot Interaction and Control Through Augmented Reality. Office of Scientific and Technical Information (OSTI), August 2018. http://dx.doi.org/10.2172/1467198.
Jiang, Shu, and Ronald C. Arkin. Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey. Fort Belvoir, VA: Defense Technical Information Center, January 2015. http://dx.doi.org/10.21236/ada620347.
Scholtz, Jean, Jeff Young, Holly A. Yanco, and Jill L. Drury. Evaluation of Human-Robot Interaction Awareness in Search and Rescue. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada456128.
Bagchi, Shelly, Murat Aksu, Megan Zimmerman, Jeremy A. Marvel, Brian Antonishek, Heni Ben Amor, Terry Fong, Ross Mead, and Yue Wang. Workshop Report: Test Methods and Metrics for Effective HRI in Collaborative Human-Robot Teams, ACM/IEEE Human-Robot Interaction Conference, 2019. National Institute of Standards and Technology, December 2020. http://dx.doi.org/10.6028/nist.ir.8339.
Bagchi, Shelly, Jeremy A. Marvel, Megan Zimmerman, Murat Aksu, Brian Antonishek, Heni Ben Amor, Terry Fong, Ross Mead, and Yue Wang. Workshop Report: Test Methods and Metrics for Effective HRI in Real-World Human-Robot Teams, ACM/IEEE Human-Robot Interaction Conference, 2020 (Virtual). National Institute of Standards and Technology, January 2021. http://dx.doi.org/10.6028/nist.ir.8345.
Schaefer, Kristin E., Deborah R. Billings, James L. Szalma, Jeffrey K. Adams, Tracy L. Sanders, Jessie Y. Chen, and Peter A. Hancock. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction. Fort Belvoir, VA: Defense Technical Information Center, July 2014. http://dx.doi.org/10.21236/ada607926.
Bagchi, Shelly, Jeremy A. Marvel, Megan Zimmerman, Murat Aksu, Brian Antonishek, Xiang Li, Heni Ben Amor, Terry Fong, Ross Mead, and Yue Wang. Workshop Report: Novel and Emerging Test Methods and Metrics for Effective HRI, ACM/IEEE Conference on Human-Robot Interaction, 2021. National Institute of Standards and Technology, February 2022. http://dx.doi.org/10.6028/nist.ir.8417.
Breazeal, Cynthia, and Brian Scassellati. Infant-Like Social Interactions Between a Robot and a Human Caregiver. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada450357.