Academic literature on the topic 'Human robotics interaction spatial'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Human robotics interaction spatial.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Human robotics interaction spatial"

1

Alač, Morana, Javier Movellan, and Fumihide Tanaka. "When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics." Social Studies of Science 41, no. 6 (October 5, 2011): 893–926. http://dx.doi.org/10.1177/0306312711420565.

Abstract:
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot’s design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot’s design activity, and we argue that the robot’s social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot’s social agency is not simply controlled by individual will. Instead, the human–machine couplings are demanded by the situational dynamics in which the robot is lodged.
2

Zhang, Jiali, Zuriahati Mohd Yunos, and Habibollah Haron. "Interactivity Recognition Graph Neural Network (IR-GNN) Model for Improving Human–Object Interaction Detection." Electronics 12, no. 2 (January 16, 2023): 470. http://dx.doi.org/10.3390/electronics12020470.

Abstract:
Human–object interaction (HOI) detection is important for promoting the development of many fields such as human–computer interaction, service robotics, and video security surveillance. A high percentage of human–object pairs with invalid interactions are discovered in the object detection phase of conventional human–object interaction detection algorithms, resulting in inaccurate interaction detection. To recognize invalid human–object interaction pairs, this paper proposes a model structure, the interactivity recognition graph neural network (IR-GNN) model, which can directly infer the probability of human–object interactions from a graph model architecture. The model consists of three modules. The first is the human posture feature module, which uses key points of the human body to construct relative spatial pose features and further facilitates the discrimination of human–object interactivity through human pose information. Second, a human–object interactivity graph module is proposed: the spatial relationship of human–object distance is used as the initialization weight of edges, and the graph is updated by combining the message passing of an attention mechanism so that edges with interacting node pairs obtain higher weights. Third, a classification module is proposed: using a fully connected neural network, the interactivity of human–object pairs is binarily classified. These three modules work in collaboration to enable the effective inference of interaction possibilities. Comparative and ablation experiments are carried out on the HICO-DET and V-COCO datasets, showing that our technique improves the detection of human–object interactions.
3

Chen, Jessie Y. C. "Individual Differences in Human-Robot Interaction in a Military Multitasking Environment." Journal of Cognitive Engineering and Decision Making 5, no. 1 (March 2011): 83–105. http://dx.doi.org/10.1177/1555343411399070.

Abstract:
A military vehicle crew station environment was simulated and a series of three experiments was conducted to examine the workload and performance of the combined position of the gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities (delivered through tactile and/or visual cuing) for the gunnery task might benefit the concurrent robotics and communication tasks and how the concurrent task performance might be affected when the AiTR was unreliable (i.e., false alarm prone or miss prone). Participants’ spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as their modality preference for the AiTR display. Participants’ attentional control was found to significantly affect the way they interacted with unreliable automated systems.
4

Hüttenrauch, Helge, Elin A. Topp, and Kerstin Severinson-Eklundh. "The Art of Gate-Crashing." Interaction Studies 10, no. 3 (December 10, 2009): 274–97. http://dx.doi.org/10.1075/is.10.3.02hut.

Abstract:
Special purpose service robots have already entered the market and their users’ homes. Also the idea of the general purpose service robot or personal robot companion is increasingly discussed and investigated. To probe human–robot interaction with a mobile robot in arbitrary domestic settings, we conducted a study in eight different homes. Based on previous results from laboratory studies we identified particular interaction situations which should be studied thoroughly in real home settings. Based upon the collected sensory data from the robot we found that the different environments influenced the spatial management observable during our subjects’ interaction with the robot. We also validated empirically that the concept of spatial prompting can aid spatial management and communication, and assume this concept to be helpful for Human–Robot Interaction (HRI) design. In this article we report on our exploratory field study and our findings regarding, in particular, the spatial management observed during show episodes and movement through narrow passages. Keywords: COGNIRON, Domestic Service Robotics, Robot Field Trial, Human Augmented Mapping (HAM), Human–Robot Interaction (HRI), Spatial Management, Spatial Prompting
5

Rieser, Verena, Matthew Walter, and Dirk Wollherr. "Special issue on spatial reasoning and interaction for real-world robotics." Advanced Robotics 31, no. 5 (January 31, 2017): 221. http://dx.doi.org/10.1080/01691864.2017.1281376.

6

Zhu, Lingfeng, Yancheng Wang, Deqing Mei, and Chengpeng Jiang. "Development of Fully Flexible Tactile Pressure Sensor with Bilayer Interlaced Bumps for Robotic Grasping Applications." Micromachines 11, no. 8 (August 12, 2020): 770. http://dx.doi.org/10.3390/mi11080770.

Abstract:
Flexible tactile sensors have been utilized in intelligent robotics for human-machine interaction and healthcare monitoring. The relatively low flexibility, unbalanced sensitivity, and limited sensing range of existing tactile sensors hinder accurate tactile information perception during robotic hand grasping of different objects. This paper developed a fully flexible tactile pressure sensor, using flexible graphene and silver composites as the sensing element and stretchable electrodes, respectively. As for the structural design of the tactile sensor, the proposed bilayer interlaced bumps can be used to convert external pressure into the stretching of graphene composites. The fabricated tactile sensor exhibits high sensing performance, including relatively high sensitivity (up to 3.40% kPa−1), a wide sensing range (200 kPa), good dynamic response, and considerable repeatability. The tactile sensor has been integrated with a robotic hand finger, and the grasping results have indicated the capability of using the tactile sensor to detect the distributed pressure during grasping applications. The grasping motions and the properties of the objects can be further analyzed through the acquired tactile information in the time and spatial domains, demonstrating the potential applications of the tactile sensor in intelligent robotics and human-machine interfaces.
7

Kristoffersson, Annica, Silvia Coradeschi, Amy Loutfi, and Kerstin Severinson-Eklundh. "Assessment of interaction quality in mobile robotic telepresence." Interaction Studies 15, no. 2 (August 20, 2014): 343–57. http://dx.doi.org/10.1075/is.15.2.16kri.

Abstract:
In this paper, we focus on spatial formations when interacting via mobile robotic telepresence (MRP) systems. Previous research has found that those who used a MRP system to make a remote visit (pilot users) tended to use different spatial formations from what is typical in human-human interaction. In this paper, we present the results of a study where a pilot user interacted with ten elderly via a MRP system. Intentional deviations from known accepted spatial formations were made in order to study their effect on interaction quality from the local user perspective. Using a retrospective interviews technique, the elderly commented on the interaction and confirmed the importance of adhering to acceptable spatial configurations. The results show that there is a mismatch between pilot user behaviour and local user preference and that it is important to evaluate a MRP system from two perspectives, the pilot user’s and the local user’s. Keywords: F-formations; Mobile Robotic Telepresence; MRP systems; Quality of Interaction; Retrospective Interview; Spatial Formations; Spatial Configurations
8

Chen, Jessie Y. C. "Concurrent Performance of Military and Robotics Tasks and Effects of Cueing in a Simulated Multi-Tasking Environment." Presence: Teleoperators and Virtual Environments 18, no. 1 (February 1, 2009): 1–15. http://dx.doi.org/10.1162/pres.18.1.1.

Abstract:
We simulated a military mounted crewstation environment and conducted two experiments to examine the workload and performance of the combined position of gunner and robotics operator. The robotics tasks involved managing a semi-autonomous ground robot or teleoperating a ground robot to conduct reconnaissance tasks. We also evaluated whether aided target recognition (AiTR) capabilities (delivered either through tactile or tactile + visual cueing) for the gunnery task might benefit the concurrent robotics and communication tasks. Results showed that participants' gunnery task performance degraded significantly when they had to concurrently monitor, manage, or teleoperate an unmanned ground vehicle compared to the gunnery-single task condition. When there was AiTR to assist them with their gunnery task, operators' concurrent performance of robotics and communication tasks improved significantly. However, there was a tendency for participants to over-rely on automation when task load was heavy, and performance degradations were observed in instances where automation failed to be entirely reliable. Participants' spatial ability was found to be a reliable predictor of robotics task performance, although the performance gap between those with higher and lower spatial ability appeared to be narrower when the AiTR was available to assist the gunnery task. Participants' perceived workload increased consistently as the concurrent task conditions became more challenging and when their gunnery task was unassisted. Individual difference factors such as spatial ability and perceived attentional control were found to correlate significantly with some of the performance measures. Implications for military personnel selection were discussed.
9

Vörös, Viktor, Ruixuan Li, Ayoob Davoodi, Gauthier Wybaillie, Emmanuel Vander Poorten, and Kenan Niu. "An Augmented Reality-Based Interaction Scheme for Robotic Pedicle Screw Placement." Journal of Imaging 8, no. 10 (October 6, 2022): 273. http://dx.doi.org/10.3390/jimaging8100273.

Abstract:
Robot-assisted surgery is becoming popular in the operation room (OR) for, e.g., orthopedic surgery (among other surgeries). However, robotic executions related to surgical steps cannot simply rely on preoperative plans. Using pedicle screw placement as an example, extra adjustments are needed to adapt to the intraoperative changes when the preoperative planning is outdated. During surgery, adjusting a surgical plan is non-trivial and typically rather complex since the available interfaces used in current robotic systems are not always intuitive to use. Recently, thanks to technical advancements in head-mounted displays (HMD), augmented reality (AR)-based medical applications are emerging in the OR. The rendered virtual objects can be overlapped with real-world physical objects to offer intuitive displays of the surgical sites and anatomy. Moreover, the potential of combining AR with robotics is even more promising; however, it has not been fully exploited. In this paper, an innovative AR-based robotic approach is proposed and its technical feasibility in simulated pedicle screw placement is demonstrated. An approach for spatial calibration between the robot and HoloLens 2 without using an external 3D tracking system is proposed. The developed system offers an intuitive AR–robot interaction approach between the surgeon and the surgical robot by projecting the current surgical plan to the surgeon for fine-tuning and transferring the updated surgical plan immediately back to the robot side for execution. A series of bench-top experiments were conducted to evaluate system accuracy and human-related errors. A mean calibration error of 3.61 mm was found. The overall target pose error was 3.05 mm in translation and 1.12° in orientation. The average execution time for defining a target entry point intraoperatively was 26.56 s. This work offers an intuitive AR-based robotic approach, which could facilitate robotic technology in the OR and boost synergy between AR and robots for other medical applications.
10

Anand, Sarabjot Singh, Razvan C. Bunescu, Vitor R. Carvalho, Jan Chomicki, Vincent Conitzer, Michael T. Cox, Virginia Dignum, et al. "AAAI 2008 Workshop Reports." AI Magazine 30, no. 1 (January 18, 2009): 108. http://dx.doi.org/10.1609/aimag.v30i1.2196.

Abstract:
AAAI was pleased to present the AAAI-08 Workshop Program, held Sunday and Monday, July 13–14, in Chicago, Illinois, USA. The program included the following 15 workshops: Advancements in POMDP Solvers; AI Education Workshop Colloquium; Coordination, Organizations, Institutions, and Norms in Agent Systems; Enhanced Messaging; Human Implications of Human-Robot Interaction; Intelligent Techniques for Web Personalization and Recommender Systems; Metareasoning: Thinking about Thinking; Multidisciplinary Workshop on Advances in Preference Handling; Search in Artificial Intelligence and Robotics; Spatial and Temporal Reasoning; Trading Agent Design and Analysis; Transfer Learning for Complex Tasks; What Went Wrong and Why: Lessons from AI Research and Applications; and Wikipedia and Artificial Intelligence: An Evolving Synergy.

Dissertations / Theses on the topic "Human robotics interaction spatial"

1

Dondrup, Christian. "Human-robot spatial interaction using probabilistic qualitative representations." Thesis, University of Lincoln, 2016. http://eprints.lincoln.ac.uk/28665/.

Abstract:
Current human-aware navigation approaches use a predominantly metric representation of the interaction which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing human populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions online. This work extends this representation to be able to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs the system becomes invariant to change in the environment, which is essential for any form of long-term deployment of a robot, but most importantly also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot and vice-versa is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification and evaluation of their appropriateness for the task of human-aware navigation. In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described which serves as an enabling technology to create incrementally updated QTC state chains in real-time using the robot's sensors. Using this framework, the abstraction and generalisability of the QTC based framework is tested by using data from a different study for the classification of automatically generated state chains which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from it. To overcome this issue, so-called Velocity Costmaps are introduced which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in a flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action selection process that is based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the used model. Thereby, the generated behaviour is not only sociable but also legible and ensures a high experienced comfort as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation but is facilitated by the qualitative model and allows exploitation of expert knowledge for model generation. Hence, the presented work bridges the gap between the speed and flexibility of a sampling based reactive approach by using the particle filter and fast action selection, and the legibility of deliberative planners by using high-level information based on expert knowledge about the unfolding of an interaction.
2

Ermacora, Gabriele. "Advances in Human Robot Interaction for Cloud Robotics applications." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2643059.

Abstract:
This thesis analyzes different and innovative techniques for Human Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. Then follows an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction in the User's Flying Organizer project (UFO project). This project aims to develop a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality.
3

Holthaus, Patrick [Verfasser]. "Approaching human-like spatial awareness in social robotics: an investigation of spatial interaction strategies with a receptionist robot / Patrick Holthaus." Bielefeld : Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1070981389/34.

4

Blisard, Samuel N. "Modeling spatial references for unoccupied spaces for human-robot interaction /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1426048.

5

Dobnik, Simon. "Teaching mobile robots to use spatial words." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:d3e8d606-212b-4a8e-ba9b-9c59cfd3f485.

Abstract:
The meaning of spatial words can only be evaluated by establishing a reference to the properties of the environment in which the word is used. For example, in order to evaluate what is to the left of something or how fast is fast in a given context, we need to evaluate properties such as the position of objects in the scene, their typical function and behaviour, the size of the scene and the perspective from which the scene is viewed. Rather than encoding the semantic rules that define spatial expressions by hand, we developed a system where such rules are learned from descriptions produced by human commentators and information that a mobile robot has about itself and its environment. We concentrate on two scenarios and words that are used in them. In the first scenario, the robot is moving in an enclosed space and the descriptions refer to its motion ('You're going forward slowly' and 'Now you're turning right'). In the second scenario, the robot is static in an enclosed space which contains real-size objects such as desks, chairs and walls. Here we are primarily interested in prepositional phrases that describe relationships between objects ('The chair is to the left of you' and 'The table is further away than the chair'). The perspective can be varied by changing the location of the robot. Following the learning stage, which is performed offline, the system is able to use this domain specific knowledge to generate new descriptions in new environments or to 'understand' these expressions by providing feedback to the user, either linguistically or by performing motion actions. If a robot can be taught to 'understand' and use such expressions in a manner that would seem natural to a human observer, then we can be reasonably sure that we have captured at least something important about their semantics. Two kinds of evaluation were performed. First, the performance of machine learning classifiers was evaluated on independent test sets using 10-fold cross-validation. 
A comparison of classifier performance (in regard to their accuracy, the Kappa coefficient (κ), ROC and Precision-Recall graphs) is made between (a) the machine learning algorithms used to build them, (b) conditions under which the learning datasets were created and (c) the method by which data was structured into examples or instances for learning. Second, with some additional knowledge required to build a simple dialogue interface, the classifiers were tested live against human evaluators in a new environment. The results show that the system is able to learn semantics of spatial expressions from low level robotic data. For example, a group of human evaluators judged that the live system generated a correct description of motion in 93.47% of cases (the figure is averaged over four categories) and that it generated the correct description of object relation in 59.28% of cases.
6

Chadalavada, Ravi Teja. "Human Robot Interaction for Autonomous Systems in Industrial Environments." Thesis, Chalmers University of Technology, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-55277.

Abstract:
The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGV), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information regarding a service robot's intention to humans co-populating the environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication, by means of projecting proxemic information onto shared floor space, was developed on a robotic forklift by equipping it with a LED projector. This helps in visualizing internal state information and intents on the shared floor space. The robot's ability to communicate its intentions was evaluated in realistic situations where test subjects met the robotic forklift. A Likert scale-based evaluation, which also included comparisons to human-human intention communication, was performed. The results show that already adding simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in industry.
7

Marin-Urias, Luis Felipe. "Planification et contrôle de mouvements en interaction avec l'homme. Reasoning about space for human-robot interaction." Phd thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00468918.

Abstract:
Human-robot interaction is a research field that has developed exponentially in recent years, bringing new challenges for the robot's geometric reasoning and for space sharing. To accomplish a task, the robot must not only reason about its own capabilities but also take human perception into account; in other words, "the robot must put itself in the human's point of view". In humans, the capacity for visual perspective-taking begins to appear around the 24th month. This capacity is used to determine whether another person can see an object or not. Implementing this kind of social capability will improve the robot's cognitive abilities and help the robot interact better with humans. In this work, we present a geometric spatial reasoning mechanism that uses the psychological concepts of "perspective-taking" and "mental rotation" in two general settings: motion planning for human-robot interaction, where the robot uses "egocentric perspective-taking" to evaluate several configurations in which it can perform different interaction tasks; and face-to-face human-robot interaction, where the robot uses the human's point of view as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
8

Sloan, Jared. "The Effects of Video Frame Delay and Spatial Ability on the Operation of Multiple Semiautonomous and Tele-Operated Robots." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3734.

Abstract:
The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental in this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize exposure of front line soldiers, the future Army will utilize unmanned assets for both information gathering and, when necessary, engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while only allocating resources to the semi-autonomous robot during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results cannot confirm that frame delay has a positive effect on operator performance, though statistical power may have been an issue, but they clearly show that spatial ability is a strong predictor of performance on robotic asset control, particularly with aerial vehicles. In operating the UAV, the high spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low spatial group. The implications of this study indicate that system design should judiciously manage workload and capitalize on individual ability to improve performance, and are relevant to system designers, especially in the military community.
M.S.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering and Management Systems
9

Bitonneau, David. "Conception de systèmes cobotiques industriels : approche robotique avec prise en compte des facteurs humains : application à l'industrie manufacturière au sein de Safran et ArianeGroup." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0069/document.

Abstract:
Cobotics is an emerging field that offers new perspectives for improving companies' performance and workers' health, by combining operators' expertise and cognitive abilities with the strengths of robots. In this thesis, cobotics is positioned as the field of human-robot collaboration. We define cobotic systems as systems in which the human and the robot interact to carry out a common task. This robotics engineering thesis was completed jointly with Théo Moulières-Seban, a doctoral student in cognitive engineering. Both CIFRE theses were conducted with Safran and ArianeGroup, which have recognized cobotics as strategic for the development of their competitiveness. To study and develop cobotic systems, we jointly proposed an interdisciplinary methodological approach, applied to industry and validated by our academic supervisors. This approach gives a central place to the integration of future users in the design process, through the analysis of their work activity and participatory simulations. We deployed this approach to address various concrete industrial needs at ArianeGroup. In this thesis, we detail the design of a cobotic system to improve operators' health and safety at the propellant tank cleaning workstation. The operations performed at this workstation are physically demanding and present a pyrotechnic risk. Together with the ArianeGroup project team, we proposed a teleoperation-type cobotic system that preserves the operators' expertise while keeping them safe during pyrotechnic operations. This solution is being industrialized for the production of propellant for Ariane rockets. Applying our cobotic systems engineering approach to a variety of workstations and industrial needs allowed us to enrich it with operational tools to guide design. We expect cobotics to be one of the keys to putting humans back at the heart of production within the Factory of the Future. Conversely, integrating operators into design projects will be decisive in ensuring the performance and acceptance of future cobotic systems.
Human Robot Collaboration provides new perspectives to improve companies' performance and operators' working conditions by bringing together workers' expertise and adaptation capacity with robots' power and precision. In this research, we introduce the concept of a "cobotic system", in which humans and robots, possibly with different roles, interact, sharing the common purpose of solving a task. This robotic engineering PhD thesis was completed as a team with the cognitive engineer Théo Moulières-Seban. Both PhD theses were conducted under the leadership of Safran and ArianeGroup, which have recognized Human-Robot Collaboration as strategic for their industrial performance. Together, we proposed "cobotic system engineering": a cross-disciplinary approach to cobotic system design. This approach was applied to several industrial needs within ArianeGroup. In this thesis, we detail the design of a cobotic system to improve operators' health and safety on the "tank cleaning" workstation. We proposed a teleoperation cobotic system that retains operators' expertise while placing them in a safe location during operations. This solution is now in the industrialization phase for the production of Ariane launch vehicles. We argue that thanks to their flexibility, their connectivity to the technological ecosystem of modern workshops, and their ability to take humans into account, cobotic systems will be one of the key components of Industry 4.0.
APA, Harvard, Vancouver, ISO, and other styles
10

Sanan, Siddharth. "Soft Inflatable Robots for Safe Physical Human Interaction." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/303.

Full text
Abstract:
Robots that can operate in human environments in a safe and robust manner would be of great benefit to society, due to their immense potential for providing assistance to humans. However, robots have seen limited application outside of the industrial setting in environments such as homes and hospitals. We believe a very important factor preventing the crossover of robotic technology from the factory to the house is the issue of safety. The safety issue is usually bypassed in the industrial setting by separation of human and robot workspaces. Such a solution is clearly infeasible for robots that provide assistance to humans. This thesis aims to develop intrinsically safe robots that are suitable for providing assistance to humans. We believe intrinsic safety is important in physical human robot interaction because unintended interactions will occur between humans and robots due to: (a) sharing of workspace, (b) hardware failure (computer crashes, actuator failures), (c) limitations on perception, and (d) limitations on cognition. When such unintended interactions are very fast (collisions), they are beyond the bandwidth limits of practical controllers, and only the intrinsic safety characteristics of the system govern the interaction forces that occur. The effects of such interactions with traditional robots could range from persistent discomfort to bone fracture to even serious injuries. Therefore robots that serve in the application domain of human assistance should be able to function with a high tolerance for unintended interactions. This calls for a new design paradigm where operational safety is the primary concern and task accuracy/precision, though important, are secondary. In this thesis, we address this new design paradigm by developing robots that have a soft inflatable structure, i.e., inflatable robots.
Inflatable robots can improve intrinsic safety characteristics by being extremely lightweight and by including surface compliance (due to the compressibility of air) as well as distributed structural compliance (due to the lower Young’s modulus of the materials used) in the structure. This results in a lower effective inertia during collisions, which implies a lower impact force between the inflatable robot and human. Inflatable robots can essentially be manufactured like clothes and can therefore also potentially lower the cost of robots to an extent where personal robots can be an affordable reality. In this thesis, we present a number of inflatable robot prototypes to address challenges in the area of design and control of such systems. Specific areas addressed are: structural and joint design, payload capacity, pneumatic actuation, state estimation and control. The CMU inflatable arm is used in tasks like wiping and feeding a human to successfully demonstrate the use of inflatable robots for tasks involving close physical human interaction.
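The safety claim in this abstract, that lower effective inertia plus added compliance reduces collision forces, can be illustrated with a standard undamped spring-mass collision model. This is a generic sketch; the model and all numbers below are illustrative assumptions, not values taken from the thesis:

```python
import math

def peak_impact_force(m_robot, m_human, k_contact, v_rel):
    """Peak contact force in an undamped spring-mass collision model.

    The robot and human are lumped masses coupled through a linear
    contact stiffness k_contact; the peak force is v * sqrt(k * m_reduced).
    """
    m_reduced = (m_robot * m_human) / (m_robot + m_human)
    return v_rel * math.sqrt(k_contact * m_reduced)

# Illustrative numbers: a rigid arm with high effective inertia and a
# stiff surface vs. an inflatable arm that is both lighter and softer.
rigid = peak_impact_force(m_robot=10.0, m_human=4.0, k_contact=5e4, v_rel=1.0)
soft = peak_impact_force(m_robot=1.0, m_human=4.0, k_contact=2e3, v_rel=1.0)

print(f"rigid arm: {rigid:.0f} N, inflatable arm: {soft:.0f} N")
assert soft < rigid  # lighter and softer structure => lower peak force
```

The two effects compound: the reduced mass shrinks with the robot's effective inertia, and the compliant surface lowers the contact stiffness, so the peak force drops on both counts.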
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Human robotics interaction spatial"

1

Schill, Kerstin, David Uttal, and SpringerLink (Online service), eds. Spatial Cognition VIII: International Conference, Spatial Cognition 2012, Kloster Seeon, Germany, August 31 – September 3, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rahimi, Mansour, and Waldemar Karwowski, eds. Human-robot interaction. London: Taylor & Francis, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Goodrich, Michael A. Human-robot interaction: A survey. Hanover: Now Publishers, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wyeld, Theodor, Paul Calder, and Haifeng Shen, eds. Computer-Human Interaction. Cognitive Effects of Spatial Interaction, Learning, and Ability. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16940-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

New frontiers in human-robot interaction. Philadelphia: John Benjamins Pub., 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zacarias, Marielba. Human-Computer Interaction: The Agency Perspective. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gregory, Derek, and John Urry, eds. Social relations and spatial structures. New York: St. Martin's Press, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gregory, Derek, and John Urry, eds. Social relations and spatial structures. Basingstoke, Hampshire: Macmillan, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Berns, Karsten, Syed Atif Mehdi, and Muhammad Abubakr. Field and assistive robotics: Advances in systems and algorithms. Aachen: Shaker Verlag, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thrift, N. J. Spatial formations. London: Sage, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Human robotics interaction spatial"

1

Pandey, Amit Kumar, and Rachid Alami. "Mightability: A Multi-state Visuo-spatial Reasoning for Human-Robot Interaction." In Experimental Robotics, 49–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-28572-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bellotto, Nicola, Marc Hanheide, and Nico Van de Weghe. "Qualitative Design and Implementation of Human-Robot Spatial Interactions." In Social Robotics, 331–40. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02675-6_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Castri, Luca, Sariah Mghames, Marc Hanheide, and Nicola Bellotto. "Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions." In Social Robotics, 154–64. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-24667-8_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Ming, Na Dong, Qimeng Tan, Bixi Yan, and Jingyi Zhao. "Research on Autonomous Face Recognition System for Spatial Human-Robotic Interaction Based on Deep Learning." In Intelligent Robotics and Applications, 131–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27541-9_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Roberts-Elliott, Laurence, Manuel Fernandez-Carmona, and Marc Hanheide. "Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction." In Towards Autonomous Robotic Systems, 249–60. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63486-5_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ohnishi, Kouhei. "Human–Robot Interaction." In Mechatronics and Robotics, 255–64. Boca Raton: CRC Press, 2020. http://dx.doi.org/10.1201/9780429347474-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Haddadin, Sami. "Physical Human-Robot Interaction." In Encyclopedia of Robotics, 1–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-41610-1_26-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Belpaeme, Tony. "Social Human-Robot Interaction." In Encyclopedia of Robotics, 1–5. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-642-41610-1_31-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nelson, Bradley. "Session 5: Human Robot Interaction." In Experimental Robotics, 189–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00196-3_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sidobre, Daniel, Xavier Broquère, Jim Mainprice, Ernesto Burattini, Alberto Finzi, Silvia Rossi, and Mariacarla Staffa. "Human–Robot Interaction." In Springer Tracts in Advanced Robotics, 123–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29041-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Human robotics interaction spatial"

1

"A SPATIAL ONTOLOGY FOR HUMAN-ROBOT INTERACTION." In 7th International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and Technology Publications, 2010. http://dx.doi.org/10.5220/0002948001540159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sisbot, Emrah Akin, Luis F. Marin, and Rachid Alami. "Spatial reasoning for human robot interaction." In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2007. http://dx.doi.org/10.1109/iros.2007.4399486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nagi, Jawad, Alessandro Giusti, Luca M. Gambardella, and Gianni A. Di Caro. "Human-swarm interaction using spatial gestures." In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6943101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pandey, Amit Kumar, Muhammad Ali, Matthieu Warnier, and Rachid Alami. "Towards multi-state visuo-spatial reasoning based proactive human-robot interaction." In 2011 15th International Conference on Advanced Robotics (ICAR 2011). IEEE, 2011. http://dx.doi.org/10.1109/icar.2011.6088642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Khodr, Hala, Ulysse Ramage, Kevin Kim, Arzu Guneysu Ozgur, Barbara Bruno, and Pierre Dillenbourg. "Being Part of the Swarm: Experiencing Human-Swarm Interaction with VR and Tangible Robots." In SUI '20: Symposium on Spatial User Interaction. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3385959.3422695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huettenrauch, Helge, Kerstin Eklundh, Anders Green, and Elin Topp. "Investigating Spatial Relationships in Human-Robot Interaction." In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guadarrama, Sergio, Lorenzo Riano, Dave Golland, Daniel Gouhring, Yangqing Jia, Dan Klein, Pieter Abbeel, and Trevor Darrell. "Grounding spatial relations for human-robot interaction." In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013). IEEE, 2013. http://dx.doi.org/10.1109/iros.2013.6696569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yoon, Sungboo, Yeseul Kim, Changbum Ahn, and Moonseo Park. "Challenges in Deictic Gesture-Based Spatial Referencing for Human-Robot Interaction in Construction." In 38th International Symposium on Automation and Robotics in Construction. International Association for Automation and Robotics in Construction (IAARC), 2021. http://dx.doi.org/10.22260/isarc2021/0067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dinh Quang Huy, I. Vietcheslav, and Gerald Seet Gim Lee. "See-through and spatial augmented reality - a novel framework for human-robot interaction." In 2017 3rd International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2017. http://dx.doi.org/10.1109/iccar.2017.7942791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Parker, Chris A. C., and Elizabeth Croft. "J-Strips: Haptic Joint Limit Warnings for Human-Robot Interaction." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-40717.

Full text
Abstract:
The capabilities of humans and robots naturally complement each other. Humans excel at spatial problem solving and fine manipulation tasks, whereas robots are good at supporting and stabilizing heavy loads. However, the typical impedance control strategy employed in this domain does not communicate any of a robot’s underlying physical constraints to its user. In this work, we propose j-strips, an anthromimetic haptic cue designed to convey stress in a robot manipulator to its human user. We hypothesize that when warned via j-strips that the robot is nearing a joint limit, the human user will modify the path of the robot to avoid the limit. We present the results of a pilot study of three human subjects manipulating a robot arm that uses j-strips to warn its user when its elbow position limit is approached. Two of the subjects significantly modified the manner in which they manipulated the robot, and both verbally reported that j-strips conveyed the intended message.
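The warning scheme this abstract describes can be sketched as a per-joint proximity check that ramps a haptic cue up as the joint nears either of its limits. The zone width and linear ramp below are illustrative assumptions, not the authors' actual parameters:

```python
def joint_limit_warning(q, q_min, q_max, warn_zone=0.15):
    """Return a haptic cue intensity in [0, 1] for joint angle q.

    Intensity is 0 while the joint is far from both limits and ramps
    linearly to 1 as the joint enters the warning zone near either
    limit. warn_zone is the fraction of the joint range treated as
    'near a limit'.
    """
    span = q_max - q_min
    zone = warn_zone * span
    dist = min(q - q_min, q_max - q)  # distance to the nearest limit
    if dist >= zone:
        return 0.0
    return max(0.0, 1.0 - dist / zone)

# Example: an elbow joint with range [0, 2.5] rad and a 15 % warning zone.
print(joint_limit_warning(1.25, 0.0, 2.5))  # mid-range: no cue (0.0)
print(joint_limit_warning(2.45, 0.0, 2.5))  # near the upper limit: strong cue
```

In a real controller this intensity would drive the haptic display at the control loop rate, per joint, leaving the underlying impedance behavior untouched.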
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Human robotics interaction spatial"

1

Pomranky, Regina A. Human Robotics Interaction Army Technology Objective Raven Small Unmanned Aerial Vehicle Task Analysis and Modeling. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada476904.

Full text
APA, Harvard, Vancouver, ISO, and other styles