Journal articles on the topic 'Human robotics interaction spatial'

To see the other types of publications on this topic, follow the link: Human robotics interaction spatial.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Human robotics interaction spatial.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Alač, Morana, Javier Movellan, and Fumihide Tanaka. "When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics." Social Studies of Science 41, no. 6 (October 5, 2011): 893–926. http://dx.doi.org/10.1177/0306312711420565.

Abstract:
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot’s design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot’s design activity, and we argue that the robot’s social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot’s social agency is not simply controlled by individual will. Instead, the human–machine couplings are demanded by the situational dynamics in which the robot is lodged.
2

Zhang, Jiali, Zuriahati Mohd Yunos, and Habibollah Haron. "Interactivity Recognition Graph Neural Network (IR-GNN) Model for Improving Human–Object Interaction Detection." Electronics 12, no. 2 (January 16, 2023): 470. http://dx.doi.org/10.3390/electronics12020470.

Abstract:
Human–object interaction (HOI) detection is important for promoting the development of many fields such as human–computer interactions, service robotics, and video security surveillance. A high percentage of human–object pairs with invalid interactions are discovered in the object detection phase of conventional human–object interaction detection algorithms, resulting in inaccurate interaction detection. To recognize invalid human–object interaction pairs, this paper proposes a model structure, the interactivity recognition graph neural network (IR-GNN) model, which can directly infer the probability of human–object interactions from a graph model architecture. The model consists of three modules: The first one is the human posture feature module, which uses key points of the human body to construct relative spatial pose features and further facilitates the discrimination of human–object interactivity through human pose information. Second, a human–object interactivity graph module is proposed. The spatial relationship of human–object distance is used as the initialization weight of edges, and the graph is updated by combining the message passing of attention mechanism so that edges with interacting node pairs obtain higher weights. Thirdly, the classification module is proposed; by finally using a fully connected neural network, the interactivity of human–object pairs is binarily classified. These three modules work in collaboration to enable the effective inference of interactive possibilities. On the datasets HICO-DET and V-COCO, comparative and ablation experiments are carried out. It has been proved that our technology can improve the detection of human–object interactions.
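Since the abstract sketches a concrete mechanism (proximity-initialized edge weights refined by attention), a minimal illustration may help. The snippet below is our own sketch in Python, not the authors' code; all names, shapes, and the way proximity and attention scores are combined are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_update(h_human, h_objects, distances, W, a):
    """One attention-weighted message-passing step on a human-object star graph.

    h_human:   (d,) feature vector of the human node
    h_objects: (n, d) feature vectors of candidate object nodes
    distances: (n,) human-object distances used to initialize edge weights
    W, a:      assumed learned projection matrix (d, 2d) and attention vector (d,)
    """
    init_w = softmax(-np.asarray(distances))       # closer pairs start with larger weights
    scores = np.array([a @ np.tanh(W @ np.concatenate([h_human, h_obj]))
                       for h_obj in h_objects])    # additive attention score per edge
    alpha = softmax(scores + np.log(init_w + 1e-9))
    message = (alpha[:, None] * h_objects).sum(axis=0)
    return h_human + message, alpha                # alpha ~ interactivity of each pair
```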
3

Chen, Jessie Y. C. "Individual Differences in Human-Robot Interaction in a Military Multitasking Environment." Journal of Cognitive Engineering and Decision Making 5, no. 1 (March 2011): 83–105. http://dx.doi.org/10.1177/1555343411399070.

Abstract:
A military vehicle crew station environment was simulated and a series of three experiments was conducted to examine the workload and performance of the combined position of the gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities (delivered through tactile and/or visual cuing) for the gunnery task might benefit the concurrent robotics and communication tasks and how the concurrent task performance might be affected when the AiTR was unreliable (i.e., false alarm prone or miss prone). Participants’ spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as their modality preference for the AiTR display. Participants’ attentional control was found to significantly affect the way they interacted with unreliable automated systems.
4

Hüttenrauch, Helge, Elin A. Topp, and Kerstin Severinson-Eklundh. "The Art of Gate-Crashing." Interaction Studies 10, no. 3 (December 10, 2009): 274–97. http://dx.doi.org/10.1075/is.10.3.02hut.

Abstract:
Special purpose service robots have already entered the market and their users’ homes. Also the idea of the general purpose service robot or personal robot companion is increasingly discussed and investigated. To probe human–robot interaction with a mobile robot in arbitrary domestic settings, we conducted a study in eight different homes. Based on previous results from laboratory studies we identified particular interaction situations which should be studied thoroughly in real home settings. Based upon the collected sensory data from the robot we found that the different environments influenced the spatial management observable during our subjects’ interaction with the robot. We also validated empirically that the concept of spatial prompting can aid spatial management and communication, and assume this concept to be helpful for Human–Robot Interaction (HRI) design. In this article we report on our exploratory field study and our findings regarding, in particular, the spatial management observed during show episodes and movement through narrow passages.
Keywords: COGNIRON, Domestic Service Robotics, Robot Field Trial, Human Augmented Mapping (HAM), Human–Robot Interaction (HRI), Spatial Management, Spatial Prompting
5

Rieser, Verena, Matthew Walter, and Dirk Wollherr. "Special issue on spatial reasoning and interaction for real-world robotics." Advanced Robotics 31, no. 5 (January 31, 2017): 221. http://dx.doi.org/10.1080/01691864.2017.1281376.

6

Zhu, Lingfeng, Yancheng Wang, Deqing Mei, and Chengpeng Jiang. "Development of Fully Flexible Tactile Pressure Sensor with Bilayer Interlaced Bumps for Robotic Grasping Applications." Micromachines 11, no. 8 (August 12, 2020): 770. http://dx.doi.org/10.3390/mi11080770.

Abstract:
Flexible tactile sensors have been utilized in intelligent robotics for human-machine interaction and healthcare monitoring. The relatively low flexibility, unbalanced sensitivity and sensing range of the tactile sensors are hindering the accurate tactile information perception during robotic hand grasping of different objects. This paper developed a fully flexible tactile pressure sensor, using the flexible graphene and silver composites as the sensing element and stretchable electrodes, respectively. As for the structural design of the tactile sensor, the proposed bilayer interlaced bumps can be used to convert external pressure into the stretching of graphene composites. The fabricated tactile sensor exhibits a high sensing performance, including relatively high sensitivity (up to 3.40% kPa⁻¹), wide sensing range (200 kPa), good dynamic response, and considerable repeatability. Then, the tactile sensor has been integrated with the robotic hand finger, and the grasping results have indicated the capability of using the tactile sensor to detect the distributed pressure during grasping applications. The grasping motions, properties of the objects can be further analyzed through the acquired tactile information in time and spatial domains, demonstrating the potential applications of the tactile sensor in intelligent robotics and human-machine interfaces.
7

Kristoffersson, Annica, Silvia Coradeschi, Amy Loutfi, and Kerstin Severinson-Eklundh. "Assessment of interaction quality in mobile robotic telepresence." Interaction Studies 15, no. 2 (August 20, 2014): 343–57. http://dx.doi.org/10.1075/is.15.2.16kri.

Abstract:
In this paper, we focus on spatial formations when interacting via mobile robotic telepresence (MRP) systems. Previous research has found that those who used a MRP system to make a remote visit (pilot users) tended to use different spatial formations from what is typical in human-human interaction. In this paper, we present the results of a study where a pilot user interacted with ten elderly via a MRP system. Intentional deviations from known accepted spatial formations were made in order to study their effect on interaction quality from the local user perspective. Using a retrospective interviews technique, the elderly commented on the interaction and confirmed the importance of adhering to acceptable spatial configurations. The results show that there is a mismatch between pilot user behaviour and local user preference and that it is important to evaluate a MRP system from two perspectives, the pilot user’s and the local user’s.
Keywords: F-formations; Mobile Robotic Telepresence; MRP systems; Quality of Interaction; Retrospective Interview; Spatial Formations; Spatial Configurations
8

Chen, Jessie Y. C. "Concurrent Performance of Military and Robotics Tasks and Effects of Cueing in a Simulated Multi-Tasking Environment." Presence: Teleoperators and Virtual Environments 18, no. 1 (February 1, 2009): 1–15. http://dx.doi.org/10.1162/pres.18.1.1.

Abstract:
We simulated a military mounted crewstation environment and conducted two experiments to examine the workload and performance of the combined position of gunner and robotics operator. The robotics tasks involved managing a semi-autonomous ground robot or teleoperating a ground robot to conduct reconnaissance tasks. We also evaluated whether aided target recognition (AiTR) capabilities (delivered either through tactile or tactile + visual cueing) for the gunnery task might benefit the concurrent robotics and communication tasks. Results showed that participants' gunnery task performance degraded significantly when they had to concurrently monitor, manage, or teleoperate an unmanned ground vehicle compared to the gunnery-single task condition. When there was AiTR to assist them with their gunnery task, operators' concurrent performance of robotics and communication tasks improved significantly. However, there was a tendency for participants to over-rely on automation when task load was heavy, and performance degradations were observed in instances where automation failed to be entirely reliable. Participants' spatial ability was found to be a reliable predictor of robotics task performance, although the performance gap between those with higher and lower spatial ability appeared to be narrower when the AiTR was available to assist the gunnery task. Participants' perceived workload increased consistently as the concurrent task conditions became more challenging and when their gunnery task was unassisted. Individual difference factors such as spatial ability and perceived attentional control were found to correlate significantly with some of the performance measures. Implications for military personnel selection were discussed.
9

Vörös, Viktor, Ruixuan Li, Ayoob Davoodi, Gauthier Wybaillie, Emmanuel Vander Poorten, and Kenan Niu. "An Augmented Reality-Based Interaction Scheme for Robotic Pedicle Screw Placement." Journal of Imaging 8, no. 10 (October 6, 2022): 273. http://dx.doi.org/10.3390/jimaging8100273.

Abstract:
Robot-assisted surgery is becoming popular in the operation room (OR) for, e.g., orthopedic surgery (among other surgeries). However, robotic executions related to surgical steps cannot simply rely on preoperative plans. Using pedicle screw placement as an example, extra adjustments are needed to adapt to the intraoperative changes when the preoperative planning is outdated. During surgery, adjusting a surgical plan is non-trivial and typically rather complex since the available interfaces used in current robotic systems are not always intuitive to use. Recently, thanks to technical advancements in head-mounted displays (HMD), augmented reality (AR)-based medical applications are emerging in the OR. The rendered virtual objects can be overlapped with real-world physical objects to offer intuitive displays of the surgical sites and anatomy. Moreover, the potential of combining AR with robotics is even more promising; however, it has not been fully exploited. In this paper, an innovative AR-based robotic approach is proposed and its technical feasibility in simulated pedicle screw placement is demonstrated. An approach for spatial calibration between the robot and HoloLens 2 without using an external 3D tracking system is proposed. The developed system offers an intuitive AR–robot interaction approach between the surgeon and the surgical robot by projecting the current surgical plan to the surgeon for fine-tuning and transferring the updated surgical plan immediately back to the robot side for execution. A series of bench-top experiments were conducted to evaluate system accuracy and human-related errors. A mean calibration error of 3.61 mm was found. The overall target pose error was 3.05 mm in translation and 1.12° in orientation. The average execution time for defining a target entry point intraoperatively was 26.56 s. This work offers an intuitive AR-based robotic approach, which could facilitate robotic technology in the OR and boost synergy between AR and robots for other medical applications.
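For context on how such accuracy figures are typically computed, here is one plausible formulation (ours, not the paper's): translation error as the Euclidean distance between planned and executed positions, and orientation error as the angle of the relative rotation.

```python
import numpy as np

def pose_errors(T_plan, T_exec):
    """Translation and orientation error between two 4x4 homogeneous poses."""
    dt = np.linalg.norm(T_exec[:3, 3] - T_plan[:3, 3])   # e.g., in mm
    R_rel = T_plan[:3, :3].T @ T_exec[:3, :3]            # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_angle))          # (mm, degrees)
```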
10

Anand, Sarabjot Singh, Razvan C. Bunescu, Vitor R. Carvalho, Jan Chomicki, Vincent Conitzer, Michael T. Cox, Virginia Dignum, et al. "AAAI 2008 Workshop Reports." AI Magazine 30, no. 1 (January 18, 2009): 108. http://dx.doi.org/10.1609/aimag.v30i1.2196.

Abstract:
AAAI was pleased to present the AAAI-08 Workshop Program, held Sunday and Monday, July 13–14, in Chicago, Illinois, USA. The program included the following 15 workshops: Advancements in POMDP Solvers; AI Education Workshop Colloquium; Coordination, Organizations, Institutions, and Norms in Agent Systems; Enhanced Messaging; Human Implications of Human-Robot Interaction; Intelligent Techniques for Web Personalization and Recommender Systems; Metareasoning: Thinking about Thinking; Multidisciplinary Workshop on Advances in Preference Handling; Search in Artificial Intelligence and Robotics; Spatial and Temporal Reasoning; Trading Agent Design and Analysis; Transfer Learning for Complex Tasks; What Went Wrong and Why: Lessons from AI Research and Applications; and Wikipedia and Artificial Intelligence: An Evolving Synergy.
11

Nan, Mihai, Mihai Trăscău, Adina Magda Florea, and Cezar Cătălin Iacob. "Comparison between Recurrent Networks and Temporal Convolutional Networks Approaches for Skeleton-Based Action Recognition." Sensors 21, no. 6 (March 15, 2021): 2051. http://dx.doi.org/10.3390/s21062051.

Abstract:
Action recognition plays an important role in various applications such as video monitoring, automatic video indexing, crowd analysis, human-machine interaction, smart homes and personal assistive robotics. In this paper, we propose improvements to some methods for human action recognition from videos that work with data represented in the form of skeleton poses. These methods are based on the most widely used techniques for this problem—Graph Convolutional Networks (GCNs), Temporal Convolutional Networks (TCNs) and Recurrent Neural Networks (RNNs). Initially, the paper explores and compares different ways to extract the most relevant spatial and temporal characteristics for a sequence of frames describing an action. Based on this comparative analysis, we show how a TCN type unit can be extended to work even on the characteristics extracted from the spatial domain. To validate our approach, we test it against a benchmark often used for human action recognition problems and we show that our solution obtains comparable results to the state-of-the-art, but with a significant increase in the inference speed.
12

Kilicaslan, Yilmaz, and Gurkan Tuna. "An Nlp-Based Approach for Improving Human-Robot Interaction." Journal of Artificial Intelligence and Soft Computing Research 3, no. 3 (July 1, 2013): 189–200. http://dx.doi.org/10.2478/jaiscr-2014-0013.

Abstract:
This study aims to explore the possibility of improving human-robot interaction (HRI) by exploiting natural language resources and using natural language processing (NLP) methods. The theoretical basis of the study rests on the claim that effective and efficient human robot interaction requires linguistic and ontological agreement. A further claim is that the required ontology is implicitly present in the lexical and grammatical structure of natural language. The paper offers some NLP techniques to uncover (fragments of) the ontology hidden in natural language and to generate semantic representations of natural language sentences using that ontology. The paper also presents the implementation details of an NLP module capable of parsing English and Turkish along with an overview of the architecture of a robotic interface that makes use of this module for expressing the spatial motions of objects observed by a robot.
13

Sabharwal, Chaman L., and Jennifer L. Leopold. "Evolution of Region Connection Calculus to VRCC-3D+." New Mathematics and Natural Computation 10, no. 02 (June 3, 2014): 103–41. http://dx.doi.org/10.1142/s1793005714500069.

Abstract:
Qualitative spatial reasoning (QSR) is useful for deriving logical inferences when quantitative spatial information is not available. QSR theories have applications in areas such as geographic information systems, spatial databases, robotics, and cognitive sciences. The existing QSR theories have been applied primarily to 2D. The ability to perform QSR over a collection of 3D objects is desirable in many problem domains. Here we present the evolution (VRCC-3D+) of RCC-based QSR from 2D to both 3D (including occlusion support) and 4D (a temporal component). It is time consuming to construct large composition tables manually. We give a divide-and-conquer algorithm to construct a comprehensive composition table from smaller constituent tables (which can be easily handcrafted). In addition to the logical consistency entailment checking that is required for such a system, clearly there is a need for a spatio-temporal component to account for spatial movements and path consistency (i.e. to consider only smooth transitions in spatial movements over time). Visually, these smooth movement phenomena are represented as a conceptual neighborhood graph. We believe that the methods presented herein to detect consistency, refine uncertainty, and enhance reasoning about 3D objects will provide useful guidelines for other studies in automated spatial reasoning.
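To make the composition-table machinery concrete, the toy sketch below shows how such a table drives a path-consistency refinement step; the table fragment and names are illustrative only, not VRCC-3D+ itself.

```python
# Toy fragment of an RCC-8-style composition table: given R1(a,b) and R2(b,c),
# the entry lists the relations possible for (a,c). A full RCC-8 table is 8x8.
COMPOSE = {
    ("DC", "DC"): {"DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"},
    ("NTPP", "NTPP"): {"NTPP"},        # nested containment stays nested
    ("TPP", "NTPP"): {"NTPP"},
}

def refine(r_ab, r_bc, candidates_ac):
    """Path-consistency step: keep only (a,c) relations compatible with
    composing the (a,b) and (b,c) relation sets."""
    allowed = set()
    for r1 in r_ab:
        for r2 in r_bc:
            allowed |= COMPOSE.get((r1, r2), set())
    return candidates_ac & allowed

print(refine({"TPP"}, {"NTPP"}, {"DC", "NTPP", "EQ"}))   # -> {'NTPP'}
```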
14

Schmidt, Susanne, Oscar Ariza, and Frank Steinicke. "Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans." Multimodal Technologies and Interaction 4, no. 4 (November 27, 2020): 85. http://dx.doi.org/10.3390/mti4040085.

Abstract:
Intelligent virtual agents (VAs) already support us in a variety of everyday tasks such as setting up appointments, monitoring our fitness, and organizing messages. Adding a humanoid body representation to these mostly voice-based VAs has enormous potential to enrich the human–agent communication process but, at the same time, raises expectations regarding the agent’s social, spatial, and intelligent behavior. Embodied VAs may be perceived as less human-like if they, for example, do not return eye contact, or do not show a plausible collision behavior with the physical surroundings. In this article, we introduce a new model that extends human-to-human interaction to interaction with intelligent agents and covers different multi-modal and multi-sensory channels that are required to create believable embodied VAs. Theoretical considerations of the different aspects of human–agent interaction are complemented by implementation guidelines to support the practical development of such agents. In this context, we particularly emphasize one aspect that is distinctive of embodied agents, i.e., interaction with the physical world. Since previous studies indicated negative effects of implausible physical behavior of VAs, we were interested in the initial responses of users when interacting with a VA with virtual–physical capabilities for the first time. We conducted a pilot study to collect subjective feedback regarding two forms of virtual–physical interactions. Both were designed and implemented in preparation of the user study, and represent two different approaches to virtual–physical manipulations: (i) displacement of a robotic object, and (ii) writing on a physical sheet of paper with thermochromic ink. The qualitative results of the study indicate positive effects of agents with virtual–physical capabilities in terms of their perceived realism as well as evoked emotional responses of the users. We conclude with an outlook on possible future developments of different aspects of human–agent interaction in general and the physical simulation in particular.
15

Kanai, Satoshi, and Jouke C. Verlinden. "Special Issue on Augmented Prototyping and Fabrication for Advanced Product Design and Manufacturing." International Journal of Automation Technology 13, no. 4 (July 5, 2019): 451–52. http://dx.doi.org/10.20965/ijat.2019.p0451.

Abstract:
“Don’t automate, augment!” This is the takeaway of the seminal book on the future of work by Davenport and Kirby.*1 The emergence of cyber-physical systems makes radical new products and systems possible and challenges the role of humankind. Throughout the design, manufacturing, use, maintenance, and end-of-life stages, digital aspects (sensing, inferencing, connecting) influence the physical (digital fabrication, robotics) and vice versa. A key takeaway is that such innovations can augment human capabilities to extend our mental and physical skills with computational and robotic support – a notion called “augmented well-being.” Furthermore, agile development methods, complemented by mixed-reality systems and 3D-printing systems, enable us to create and adapt such systems on the fly, with almost instant turnaround times. Following this line of thought, our special issue is entitled “Augmented Prototyping and Fabrication for Advanced Product Design and Manufacturing.” Heavily inspired by the framework of Prof. Jun Rekimoto’s Augmented Human framework,*2 we can discern two orthogonal axes: cognitive versus physical and reflective versus active. As depicted in Fig. 1, this creates four different quadrants with important scientific domains that need to be juxtaposed. The contributions in this special issue are valuable steps towards this concept and are briefly discussed below. AR/VR To drive AR to the next level, robust tracking and tracing techniques are essential. The paper by Sumiyoshi et al. presents a new algorithm for object recognition and pose estimation in a strongly cluttered environment. As an example of how AR/VR can reshape human skills training, the development report of Komizunai et al. demonstrates an endotracheal suctioning simulator that establishes an optimized, spatial display with projector-based AR. Robotics/Cyborg Shor et al. present an augmentation display that uses haptics to go beyond the visual senses. The display has all the elements of a robotic system and is directly coupled to the human hand. In a completely different way, the article by Mitani et al. presents a development in soft robotics: a tongue simulator development (smart sensing and production of soft material), with a detailed account of the production and the technical performance. Finally, to consider novel human-robot interaction, human body tracking is essential. The system presented by Maruyama et al. introduces human motion capture based on IMUs, in this case the motion of cycling. Co-making Augmented well-being has to consider human-centered design and new collaborative environments where the stakeholders involved in whole product life-cycle work together to deliver better solutions. Inoue et al. propose a generalized decision-making scheme for universal design which considers anthropometric diversity. In the paper by Tanaka et al., paper inspection documents are electronically superimposed on 3D design models to enable design-inspection collaboration and more reliable maintenance activities for large-scale infrastructures. Artificial Intelligence Nakamura et al. propose an optimization-based search for interference-free paths and the poses of equipment in cluttered indoor environments, captured by interactive RGBD scans. AR-based guidance is provided to the user. Finally, the editors would like to express their gratitude to the authors for their exceptional contributions and to the anonymous reviewers for their devoted work.
We expect that this special issue will encourage a new departure for research on augmented prototyping for product design and manufacturing. *1 T. H. Davenport and J. Kirby, “Only Humans Need Apply: Winners and Losers in the Age of Smart Machines,” Harper Business, 2016. *2 https://lab.rekimoto.org/about/ [Accessed June 21, 2019]
16

Cancrini, Adriana, Paolo Baitelli, Matteo Lavit Nicora, Matteo Malosio, Alessandra Pedrocchi, and Alessandro Scano. "The effects of robotic assistance on upper limb spatial muscle synergies in healthy people during planar upper-limb training." PLOS ONE 17, no. 8 (August 8, 2022): e0272813. http://dx.doi.org/10.1371/journal.pone.0272813.

Abstract:
Background: Robotic rehabilitation is a commonly adopted technique used to restore motor functionality of neurological patients. However, despite promising results were achieved, the effects of human-robot interaction on human motor control and the recovery mechanisms induced with robot assistance can be further investigated even on healthy subjects before translating to clinical practice. In this study, we adopt a standard paradigm for upper-limb rehabilitation (a planar device with assistive control) with linear and challenging curvilinear trajectories to investigate the effect of the assistance in human-robot interaction in healthy people.
Methods: Ten healthy subjects were instructed to perform a large set of radial and curvilinear movements in two interaction modes: 1) free movement (subjects hold the robot handle with no assistance) and 2) assisted movement (with a force tunnel assistance paradigm). Kinematics and EMGs from representative upper-limb muscles were recorded to extract phasic muscle synergies. The free and assisted interaction modes were compared assessing the level of assistance, error, and muscle synergy comparison between the two interaction modes.
Results: It was found that in free movement error magnitude is higher than with assistance, proving that task complexity required assistance also on healthy controls. Moreover, curvilinear tasks require more assistance than standard radial paths and error is higher. Interestingly, while assistance improved task performance, we found only a slight modification of phasic synergies when comparing assisted and free movement.
Conclusions: We found that on healthy people, the effect of assistance was significant on task performance, but limited on muscle synergies. The findings of this study can find applications for assessing human-robot interaction and to design training to maximize motor recovery.
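Spatial muscle synergies of this kind are conventionally extracted with non-negative matrix factorization; the sketch below assumes that standard approach, since the abstract does not spell out the authors' exact algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

# EMG envelopes as a non-negative matrix (muscles x samples), factorized as
# E ~ W @ H: W holds the spatial synergies (muscle weightings), H their
# activation coefficients over time. The data here is a random placeholder.
emg = np.abs(np.random.randn(8, 2000))
model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(emg)      # (8, 4) spatial synergy vectors
H = model.components_             # (4, 2000) temporal activations

# Synergies from two conditions (free vs. assisted) are usually compared by
# cosine similarity between matched synergy vectors.
cos = W[:, 0] @ W[:, 1] / (np.linalg.norm(W[:, 0]) * np.linalg.norm(W[:, 1]))
```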
17

Zhang, Xiaozhi, Hongyan Li, and Mengjie Qian. "A Multimodal Information Fusion Model for Robot Action Recognition with Time Series." Journal of Electrical and Computer Engineering 2022 (June 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/7270412.

Abstract:
The current robotics field, led by a new generation of information technology, is moving into a new stage of human-machine collaborative operation. Unlike traditional robots that need to use isolation rails to maintain a certain safety distance from people, the new generation of human-machine collaboration systems can work side by side with humans without spatial obstruction, giving full play to the expertise of people and machines through an intelligent assignment of operational tasks and improving work patterns to achieve increased efficiency. The robot’s efficient and accurate recognition of human movements has become a key factor in measuring robot performance. Usually, the data for action recognition is video data, and video data is time-series data. Time series describe the response results of a certain system at different times. Therefore, the study of time series can be used to recognize the structural characteristics of the system and reveal its operation law. As a result, this paper proposes a time series-based action recognition model with multimodal information fusion and applies it to a robot to realize friendly human-robot interaction. Multifeatures can characterize data information comprehensively, and in this study, the spatial flow and motion flow features of the dataset are extracted separately, and each feature is input into a bidirectional long and short-term memory network (BiLSTM). A confidence fusion method was used to obtain the final action recognition results. Experiment results on the publicly available datasets NTU-RGB + D and MSR Action 3D show that the method proposed in this paper can improve action recognition accuracy.
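As a rough illustration of the two-stream design, the snippet below fuses per-class confidences from a spatial-flow and a motion-flow BiLSTM; the equal weighting is an assumption, since the abstract does not specify the fusion rule.

```python
import torch
import torch.nn.functional as F

def fuse_predictions(logits_spatial, logits_motion, w=0.5):
    """Late confidence fusion of two stream classifiers.

    Each input: (batch, n_classes) logits from one BiLSTM stream."""
    p_spatial = F.softmax(logits_spatial, dim=-1)
    p_motion = F.softmax(logits_motion, dim=-1)
    fused = w * p_spatial + (1.0 - w) * p_motion   # weighted sum of confidences
    return fused.argmax(dim=-1)                    # predicted action label
```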
18

Bao, Jie, Uldis Bojars, Tanzeem Choudhury, Li Ding, Mark Greaves, Ashish Kapoor, Sandy Louchart, et al. "Reports of the AAAI 2009 Spring Symposia." AI Magazine 30, no. 3 (July 7, 2009): 89. http://dx.doi.org/10.1609/aimag.v30i3.2253.

Abstract:
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, was pleased to present the 2009 Spring Symposium Series, held Monday through Wednesday, March 23–25, 2009 at Stanford University. The titles of the nine symposia were Agents that Learn from Human Teachers, Benchmarking of Qualitative Spatial and Temporal Reasoning Systems, Experimental Design for Real-World Systems, Human Behavior Modeling, Intelligent Event Processing, Intelligent Narrative Technologies II, Learning by Reading and Learning to Read, Social Semantic Web: Where Web 2.0 Meets Web 3.0, and Technosocial Predictive Analytics. The goal of the Agents that Learn from Human Teachers was to investigate how we can enable software and robotics agents to learn from real-time interaction with an everyday human partner. The aim of the Benchmarking of Qualitative Spatial and Temporal Reasoning Systems symposium was to initiate the development of a problem repository in the field of qualitative spatial and temporal reasoning and identify a graded set of challenges for future midterm and long-term research. The Experimental Design symposium discussed the challenges of evaluating AI systems. The Human Behavior Modeling symposium explored reasoning methods for understanding various aspects of human behavior, especially in the context of designing intelligent systems that interact with humans. The Intelligent Event Processing symposium discussed the need for more AI-based approaches in event processing and defined a kind of research agenda for the field, coined as intelligent complex event processing (iCEP). The Intelligent Narrative Technologies II AAAI symposium discussed innovations, progress, and novel techniques in the research domain. The Learning by Reading and Learning to Read symposium explored two aspects of making natural language texts semantically accessible to, and processable by, machines. The Social Semantic Web symposium focused on the real-world grand challenges in this area. Finally, the Technosocial Predictive Analytics symposium explored new methods for anticipatory analytical thinking that provide decision advantage through the integration of human and physical models.
19

Vo, Viet Hoai, and Hoang Minh Pham. "Multiple Modal Features and Multiple Kernel Learning for Human Daily Activity Recognition." Science and Technology Development Journal 21, no. 2 (October 3, 2018): 52–63. http://dx.doi.org/10.32508/stdj.v21i2.441.

Abstract:
Introduction: Recognizing human activity in a daily environment has attracted much research in computer vision and recognition in recent years. It is a difficult and challenging topic not only inasmuch as the variations of background clutter, occlusion or intra-class variation in image sequences but also inasmuch as complex patterns of activity are created by interactions among people-people or people-objects. In addition, it also is very valuable for many practical applications, such as smart home, gaming, health care, human-computer interaction and robotics. Now, we are living in the beginning age of the industrial revolution 4.0 where intelligent systems have become the most important subject, as reflected in the research and industrial communities. There has been emerging advances in 3D cameras, such as Microsoft's Kinect and Intel's RealSense, which can capture RGB, depth and skeleton in real time. This creates a new opportunity to increase the capabilities of recognizing the human activity in the daily environment. In this research, we propose a novel approach of daily activity recognition and hypothesize that the performance of the system can be promoted by combining multimodal features. Methods: We extract spatial-temporal feature for the human body with representation of parts based on skeleton data from RGB-D data. Then, we combine multiple features from the two sources to yield the robust features for activity representation. Finally, we use the Multiple Kernel Learning algorithm to fuse multiple features to identify the activity label for each video. To show generalizability, the proposed framework has been tested on two challenging datasets by cross-validation scheme. Results: The experimental results show a good outcome on both CAD120 and MSR-Daily Activity 3D datasets with 94.16% and 95.31% in accuracy, respectively. Conclusion: These results prove our proposed methods are effective and feasible for activity recognition system in the daily environment.
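In its simplest form, the kernel-fusion step amounts to feeding an SVM a convex combination of per-modality kernels; the fixed weights below stand in for the learned ones of full Multiple Kernel Learning.

```python
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(A_skel, A_depth, B_skel, B_depth, betas=(0.6, 0.4)):
    """Weighted sum of base kernels over skeleton and depth features.
    (Weights are illustrative; MKL would learn them jointly with the SVM.)"""
    return betas[0] * rbf_kernel(A_skel, B_skel) + betas[1] * rbf_kernel(A_depth, B_depth)

# Training and prediction with a precomputed kernel:
# clf = SVC(kernel="precomputed").fit(combined_kernel(Xs, Xd, Xs, Xd), y)
# pred = clf.predict(combined_kernel(Ts, Td, Xs, Xd))
```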
20

Jones, Lynette. "Dextrous Hands: Human, Prosthetic, and Robotic." Presence: Teleoperators and Virtual Environments 6, no. 1 (February 1997): 29–56. http://dx.doi.org/10.1162/pres.1997.6.1.29.

Abstract:
The sensory and motor capacities of the human hand are reviewed in the context of providing a set of performance characteristics against which prosthetic and dextrous robot hands can be evaluated. The sensors involved in processing tactile, thermal, and proprioceptive (force and movement) information are described, together with details on their spatial densities, sensitivity, and resolution. The wealth of data on the human hand's sensory capacities is not matched by an equivalent database on motor performance. Attempts at quantifying manual dexterity have met with formidable technological difficulties due to the conditions under which many highly trained manual skills are performed. Limitations in technology have affected not only the quantifying of human manual performance but also the development of prosthetic and robotic hands. Most prosthetic hands in use at present are simple grasping devices, and imparting a “natural” sense of touch to these hands remains a challenge. Several dextrous robot hands exist as research tools and even though some of these systems can outperform their human counterparts in the motor domain, they are still very limited as sensory processing systems. It is in this latter area that information from studies of human grasping and processing of object information may make the greatest contribution.
21

Cody, Jason R., Karina A. Roundtree, and Julie A. Adams. "Human-Collective Collaborative Target Selection." ACM Transactions on Human-Robot Interaction 10, no. 2 (May 2021): 1–29. http://dx.doi.org/10.1145/3442679.

Abstract:
Robotic collectives are composed of hundreds or thousands of distributed robots using local sensing and communication that encompass characteristics of biological spatial swarms, colonies, or a combination of both. Interactions between the individual entities can result in emergent collective behaviors. Human operators in future disaster response or military engagement scenarios are likely to deploy semi-autonomous collectives to gather information and execute tasks within a wide area, while reducing the exposure of personnel to danger. This article presents and evaluates two action selection models in an experiment consisting of a single human operator supervising four simulated collectives. The action selection models have two parts: (1) a best-of- n decision-making model that attempts to choose the highest-quality target from a set of n targets and (2) a quorum sensing task sequencing model that enables autonomous target site occupation. An original biologically inspired insect colony decision model is compared to a bias-reducing model that attempts to reduce environmental bias, which can negatively influence collective best-of- n decisions when poorer-quality targets are easier to evaluate than higher-quality targets. The collective decision-making models are compared in both supervised and unsupervised trials. The bias-reducing model without human supervision is slower than the original model but is 57% more accurate for decisions where evaluating the optimal target is more difficult. Human-collective teams using the bias-reducing model require less operator influence and achieve 25% higher accuracy with difficult decisions compared to the teams using the original model.
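The best-of-n plus quorum mechanism can be caricatured in a few lines of Python; the recruitment rule, noise model, and thresholds below are simplified assumptions, not the article's calibrated decision models.

```python
import random
from collections import Counter

def best_of_n(qualities, n_agents=100, quorum=0.6, rounds=5000, noise=0.2):
    """Toy best-of-n decision: agents switch toward peers' targets when a noisy
    quality estimate favors them, and the collective commits at quorum."""
    votes = [random.randrange(len(qualities)) for _ in range(n_agents)]
    for _ in range(rounds):
        i = random.randrange(n_agents)
        peer_target = votes[random.randrange(n_agents)]
        # Noisy evaluation models environmental bias: hard-to-assess,
        # high-quality targets can lose to easy, poorer ones.
        if qualities[peer_target] + random.gauss(0, noise) > qualities[votes[i]]:
            votes[i] = peer_target
        top, support = Counter(votes).most_common(1)[0]
        if support / n_agents >= quorum:
            return top                     # quorum reached: commit to target
    return Counter(votes).most_common(1)[0][0]

print(best_of_n([0.5, 0.9, 0.7]))          # usually selects target 1
```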
22

Muthu Mariappan, H., and V. Gomathi. "Indian Sign Language Recognition through Hybrid ConvNet-LSTM Networks." EMITTER International Journal of Engineering Technology 9, no. 1 (June 16, 2021): 182–203. http://dx.doi.org/10.24003/emitter.v9i1.613.

Abstract:
Dynamic hand gesture recognition is a challenging task of Human-Computer Interaction (HCI) and Computer Vision. The potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying the actions from video sequences requires both spatial and temporal features. The proposed system handles the former by the Convolutional Neural Network (CNN), which is the core of several computer vision solutions and the latter by the Recurrent Neural Network (RNN), which is more efficient in handling the sequences of movements. Thus, the real-time Indian sign language (ISL) recognition system is developed using the hybrid CNN-RNN architecture. The system is trained with the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator to break the hurdles present in the communication between hearing-impaired people and normal people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset. The obtained results outperform the existing approaches using various deep models on different datasets.
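A minimal PyTorch sketch of the hybrid idea, a frame-wise CNN feeding an RNN over time, is shown below; the layer sizes and last-step readout are placeholders, not the CasTalk-ISL configuration.

```python
import torch.nn as nn

class ConvNetLSTM(nn.Module):
    """Frame-wise CNN for spatial features + LSTM for temporal dynamics."""
    def __init__(self, n_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                    # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])             # classify from the last step
```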
23

Zhang, Xinyu, and Xiaoqiang Li. "Dynamic Gesture Recognition Based on MEMP Network." Future Internet 11, no. 4 (April 3, 2019): 91. http://dx.doi.org/10.3390/fi11040091.

Abstract:
In recent years, gesture recognition has been used in many fields, such as games, robotics and sign language recognition. Human computer interaction (HCI) has been significantly improved by the development of gesture recognition, and now gesture recognition in video is an important research direction. Because each kind of neural network structure has its limitation, we proposed a neural network with alternate fusion of 3D CNN and ConvLSTM, which we called the Multiple extraction and Multiple prediction (MEMP) network. The main feature of the MEMP network is to extract and predict the temporal and spatial feature information of gesture video multiple times, which enables us to obtain a high accuracy rate. In the experimental part, three data sets (LSA64, SKIG and Chalearn 2016) are used to verify the performance of network. Our approach achieved high accuracy on those data sets. In the LSA64, the network achieved an identification rate of 99.063%. In SKIG, this network obtained the recognition rates of 97.01% and 99.02% in the RGB part and the rgb-depth part. In Chalearn 2016, the network achieved 74.57% and 78.85% recognition rates in RGB part and rgb-depth part respectively.
24

Jin, Shiyu, Wenzhao Lian, Changhao Wang, Masayoshi Tomizuka, and Stefan Schaal. "Robotic Cable Routing with Spatial Representation." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 5687–94. http://dx.doi.org/10.1109/lra.2022.3158377.

25

Koskinopoulou, Maria, Michail Maniadakis, and Panos Trahanias. "Speed Adaptation in Learning from Demonstration through Latent Space Formulation." Robotica 38, no. 10 (October 17, 2019): 1867–79. http://dx.doi.org/10.1017/s0263574719001449.

Abstract:
Performing actions in a timely manner is an indispensable aspect in everyday human activities. Accordingly, it has to be present in robotic systems if they are going to seamlessly interact with humans. The current work addresses the problem of learning both the spatial and temporal characteristics of human motions from observation. We formulate learning as a mapping between two worlds (the observed and the action ones). This mapping is realized via an abstract intermediate representation termed “Latent Space.” Learned actions can be subsequently invoked in the context of more complex human–robot interaction (HRI) scenarios. Unlike previous learning from demonstration (LfD) methods that cope only with the spatial features of an action, the formulated scheme effectively encompasses spatial and temporal aspects. Learned actions are reproduced under the high-level control of a time-informed task planner. During the implementation of the studied scenarios, temporal and physical constraints may impose speed adaptations in the reproduced actions. The employed latent space representation readily supports such variations, giving rise to novel actions in the temporal domain. Experimental results demonstrate the effectiveness of the proposed scheme in the implementation of HRI scenarios. Finally, a set of well-defined evaluation metrics are introduced to assess the validity of the proposed approach considering the temporal and spatial consistency of the reproduced behaviors.
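The speed adaptations mentioned above can be pictured as time re-parameterization of a learned trajectory; the stand-in below resamples waypoints directly, whereas the paper performs the variation in its latent space.

```python
import numpy as np

def retime(traj, speed):
    """Resample a (T, d) waypoint trajectory to play `speed` times faster
    (speed < 1 slows it down). Purely illustrative of temporal scaling."""
    T, d = traj.shape
    t_new = np.linspace(0.0, T - 1.0, max(2, int(round(T / speed))))
    idx = np.arange(T)
    return np.stack([np.interp(t_new, idx, traj[:, j]) for j in range(d)], axis=1)
```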
26

Dehghani, Mohammad, and S. Ali A. Moosavian. "Compact modeling of spatial continuum robotic arms towards real-time control." Advanced Robotics 28, no. 1 (November 2013): 15–26. http://dx.doi.org/10.1080/01691864.2013.854452.

27

Jaouedi, Neziha, Francisco J. Perales, José Maria Buades, Noureddine Boujnah, and Med Salim Bouhlel. "Prediction of Human Activities Based on a New Structure of Skeleton Features and Deep Learning Model." Sensors 20, no. 17 (September 1, 2020): 4944. http://dx.doi.org/10.3390/s20174944.

Abstract:
The recognition of human activities is usually considered to be a simple procedure. Problems occur in complex scenes involving high speeds. Activity prediction using Artificial Intelligence (AI) by numerical analysis has attracted the attention of several researchers. Human activities are an important challenge in various fields. There are many great applications in this area, including smart homes, assistive robotics, human–computer interactions, and improvements in protection in several areas such as security, transport, education, and medicine through the control of falling or aiding in medication consumption for elderly people. The advanced enhancement and success of deep learning techniques in various computer vision applications encourage the use of these methods in video processing. The human presentation is an important challenge in the analysis of human behavior through activity. A person in a video sequence can be described by their motion, skeleton, and/or spatial characteristics. In this paper, we present a novel approach to human activity recognition from videos using the Recurrent Neural Network (RNN) for activity classification and the Convolutional Neural Network (CNN) with a new structure of the human skeleton to carry out feature presentation. The aims of this work are to improve the human presentation through the collection of different features and the exploitation of the new RNN structure for activities. The performance of the proposed approach is evaluated by the RGB-D sensor dataset CAD-60. The experimental results show the performance of the proposed approach through the average error rate obtained (4.5%).
28

Yang, Shiqiang, Qi Li, Duo He, Jinhua Wang, and Dexin Li. "Global Correlation Enhanced Hand Action Recognition Based on NST-GCN." Electronics 11, no. 16 (August 11, 2022): 2518. http://dx.doi.org/10.3390/electronics11162518.

Abstract:
Hand action recognition is an important part of intelligent monitoring, human–computer interaction, robotics and other fields. Compared with other methods, the hand action recognition method using skeleton information can ignore the error effects caused by complex background and movement speed changes, and the computational cost is relatively small. The spatial-temporal graph convolution networks (ST-GCN) model has excellent performance in the field of skeleton-based action recognition. In order to solve the problem of the root joint and the further joint not being closely connected, resulting in a poor hand-action-recognition effect, this paper firstly uses the dilated convolution to replace the standard convolution in the temporal dimension. This is in order to process the time series features of the hand action video, which increases the receptive field in the temporal dimension and enhances the connection between features. Then, by adding non-physical connections, the connection between the joints of the fingertip and the root of the finger is established, and a new partition strategy is adopted to strengthen the hand correlation of each joint point information. This helps to improve the network’s ability to extract the spatial-temporal features of the hand. The improved model is tested on public datasets and real scenarios. The experimental results show that compared with the original model, the 14-category top-1 and 28-category top-1 evaluation indicators of the dataset have been improved by 4.82% and 6.96%. In the real scene, the recognition effect of the categories with large changes in hand movements is better, and the recognition results of the categories with similar trends of hand movements are poor, so there is still room for improvement.
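The first modification, replacing the standard temporal convolution with a dilated one, is a one-line change in PyTorch; the channel count and kernel size below are assumptions based on common ST-GCN configurations.

```python
import torch.nn as nn

# ST-GCN feature maps are (N, C, T, V): batch, channels, frames, joints.
# A dilated temporal kernel covers twice the frame span of the standard one
# at the same parameter count, enlarging the temporal receptive field.
standard_tcn = nn.Conv2d(64, 64, kernel_size=(9, 1), padding=(4, 0))
dilated_tcn = nn.Conv2d(64, 64, kernel_size=(9, 1), padding=(8, 0), dilation=(2, 1))
```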
29

Wu, Zewen, and Shahram Payandeh. "Toward Design of a Drip-Stand Patient Follower Robot." Journal of Robotics 2020 (March 9, 2020): 1–16. http://dx.doi.org/10.1155/2020/9080642.

Abstract:
A person following robot is an application of service robotics that primarily focuses on human-robot interaction, for example, in security and health care. This paper explores some of the design and development challenges of a patient follower robot. Our motivation stemmed from common mobility challenges associated with patients holding on and pulling the medical drip stand. Unlike other designs for person following robots, the proposed design objectives need to preserve as much as patient privacy and operational challenges in the hospital environment. We placed a single camera closer to the ground, which can result in a narrower field of view to preserve patient privacy. Through a unique design of artificial markers placed on various hospital clothing, we have shown how the visual tracking algorithm can determine the spatial location of the patient with respect to the robot. The robot control algorithm is implemented in three parts: (a) patient detection; (b) distance estimation; and (c) trajectory controller. For patient detection, the proposed algorithm utilizes two complementary tools for target detection, namely, template matching and colour histogram comparison. We applied a pinhole camera model for the estimation of distance from the robot to the patient. We proposed a novel movement trajectory planner to maintain the dynamic tipping stability of the robot by adjusting the peak acceleration. The paper further demonstrates the practicality of the proposed design through several experimental case studies.
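The distance-estimation step (b) follows from the pinhole model by similar triangles; the sketch below uses hypothetical marker dimensions.

```python
def estimate_distance(focal_px, marker_height_m, marker_height_px):
    """Pinhole-model range: a marker of known size that spans fewer pixels
    is proportionally farther away (distance = f * H_real / h_image)."""
    return focal_px * marker_height_m / marker_height_px

# A 0.10 m marker imaged at 40 px by an 800 px focal-length camera -> 2.0 m.
assert abs(estimate_distance(800, 0.10, 40) - 2.0) < 1e-9
```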
30

Almeida, Luis, Paulo Menezes, and Jorge Dias. "Interface Transparency Issues in Teleoperation." Applied Sciences 10, no. 18 (September 8, 2020): 6232. http://dx.doi.org/10.3390/app10186232.

Abstract:
Transferring skills and expertise to remote places, without being present, is a new challenge for our digitally interconnected society. People can experience and perform actions in distant places through a robotic agent wearing immersive interfaces to feel physically there. However, technological contingencies can affect human perception, compromising skill-based performances. Considering the results from studies on human factors, a set of recommendations for the construction of immersive teleoperation systems is provided, followed by an example of the evaluation methodology. We developed a testbed to study perceptual issues that affect task performance while users manipulated the environment either through traditional or immersive interfaces. The analysis of its effect on perception, navigation, and manipulation relies on performances measures and subjective answers. The goal is to mitigate the effect of factors such as system latency, field of view, frame of reference, or frame rate to achieve the sense of telepresence. By decoupling the flows of an immersive teleoperation system, we aim to understand how vision and interaction fidelity affects spatial cognition. Results show that misalignments between the frame of reference for vision and motor-action or the use of tools affecting the sense of body position or movement have a higher effect on mental workload and spatial cognition.
31

Su, Yun-Peng, Xiao-Qi Chen, Tony Zhou, Christopher Pretty, and Geoffrey Chase. "Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System." Applied Sciences 12, no. 9 (May 8, 2022): 4740. http://dx.doi.org/10.3390/app12094740.

Abstract:
This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and enables spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments. A user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D mapping of motion and visualization scheme reduced the aggregate task completion time by 48% compared to the 2D Baseline module and 29%, compared to the MR SpaceMouse module. The perceived workload decreased by 32% and 22%, compared to the 2D Baseline and MR SpaceMouse approaches.
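Velocity-centric motion mapping of the kind described can be sketched as a scale-and-clamp on the tracked hand velocity; the gain and limit values here are assumptions, not the paper's parameters.

```python
import numpy as np

def map_hand_to_tcp_velocity(hand_vel, gain=1.5, v_max=0.25):
    """Scale operator hand velocity (m/s, tracked in the MR subspace) into a
    robot TCP velocity command, clamped to a safe maximum speed."""
    v = gain * np.asarray(hand_vel, dtype=float)
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)
```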
32

Okuno, Hiroshi G., and Kazuhiro Nakadai. "Special Issue on Robot Audition Technologies." Journal of Robotics and Mechatronics 29, no. 1 (February 20, 2017): 15. http://dx.doi.org/10.20965/jrm.2017.p0015.

Abstract:
Robot audition, the ability of a robot to listen to several things at once with its own “ears,” is crucial to the improvement of interactions and symbiosis between humans and robots. Since robot audition was originally proposed and has been pioneered by Japanese research groups, this special issue on robot audition technologies of the Journal of Robotics and Mechatronics covers a wide collection of advanced topics studied mainly in Japan. Specifically, two consecutive JSPS Grants-in-Aid for Scientific Research (S) on robot audition (PI: Hiroshi G. Okuno) from 2007 to 2017, the JST Japan-France Research Cooperative Program on binaural listening for humanoids (PI: Hiroshi G. Okuno and Patrick Danès) from 2009 to 2013, and the ImPACT Tough Robotics Challenge (PM: Prof. Satoshi Tadokoro) on extreme audition for search and rescue robots since 2015 have contributed to the promotion of robot audition research, and most of the papers in this issue are the outcome of these projects. Robot audition was surveyed in the special issue on robot audition in the Journal of the Robotics Society of Japan, Vol.28, No.1 (2011) and in our IEEE ICASSP-2015 paper. This issue covers the most recent topics in robot audition, except for human-robot interaction, which has been covered by many papers appearing in Advanced Robotics as well as other journals and international conferences, including IEEE IROS. This issue consists of twenty-three papers accepted through peer review. They are classified into four categories: signal processing, music and pet robots, search and rescue robots, and monitoring animal acoustics in natural habitats. In signal processing for robot audition, Nakadai, Okuno, et al. report on the HARK open-source software for robot audition, Takeda, et al. develop noise-robust MUSIC-based sound source localization (SSL), and Yalta, et al. use deep learning for SSL. Odo, et al. develop active SSL by moving artificial pinnae, and Youssef, et al. propose binaural SSL for an immobile or mobile talker. Suzuki, Otsuka, et al. evaluate the influence of six impulse-response-measuring signals on MUSIC-based SSL, Sekiguchi, et al. give an optimal allocation of distributed microphone arrays for sound source separation, and Tanabe, et al. develop 3D SSL using a microphone array and LiDAR. Nakadai and Koiwa present audio-visual automatic speech recognition, and Nakadai, Tezuka, et al. suppress ego-noise, that is, noise generated by the robot itself. In music and pet robots, Ohkita, et al. propose audio-visual beat tracking for a robot to dance with a human dancer, and Tomo, et al. develop a robot that operates a wayang puppet, an Indonesian world cultural heritage, by recognizing emotion in Gamelan music. Suzuki, Takahashi, et al. develop a pet robot that approaches a sound source. In search and rescue robots, Hoshiba, et al. implement real-time SSL with a microphone array installed on a multicopter UAV, and Ishiki, et al. design a microphone array for multicopters. Ohata, et al. detect a sound source with a multicopter microphone array, and Sugiyama, et al. identify detected acoustic events through a combination of signal processing and deep learning. Bando, et al. enhance the human voice online and offline for a hose-shaped rescue robot with a microphone array. In monitoring animal acoustics in natural habitats, Suzuki, Matsubayashi, et al. design and implement HARKBird, Matsubayashi, et al. report on the experience of monitoring birds with HARKBird, and Kojima, et al. use a spatial-cue-based probabilistic model to analyze the songs of birds singing in their natural habitat. Aihara, et al. analyze a chorus of frogs with dozens of units of the sound-to-light conversion device Firefly, whose design and analysis are reported on by Mizumoto, et al. The editors and authors hope that this special issue will promote the further evolution of robot audition technologies in a diversity of applications.
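As a concrete illustration of the MUSIC-based sound source localization that several of these papers build on, here is a minimal Python sketch of the MUSIC pseudo-spectrum for a single narrowband source; the array geometry, signal model, and all names are illustrative assumptions, not HARK code.

import numpy as np

def music_spectrum(snapshots, steering, n_sources):
    # snapshots: (n_mics, n_frames) complex STFT samples at one frequency bin.
    # steering: (n_angles, n_mics) candidate steering vectors a(theta).
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # spatial covariance
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    noise = eigvecs[:, :-n_sources]                 # noise subspace (smallest eigenvalues)
    proj = steering.conj() @ noise                  # a(theta)^H E_n for every candidate angle
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=1)  # peaks mark source directions

# Toy example: one source at +30 degrees on a 4-mic half-wavelength linear array.
mics = np.arange(4)
angles = np.deg2rad(np.arange(-90, 91))
steering = np.exp(-1j * np.pi * np.outer(np.sin(angles), mics))
signal = np.random.randn(1, 200) + 1j * np.random.randn(1, 200)
x = steering[120][:, None] * signal                 # index 120 corresponds to +30 deg
x += 0.1 * (np.random.randn(4, 200) + 1j * np.random.randn(4, 200))
print(np.rad2deg(angles[np.argmax(music_spectrum(x, steering, 1))]))  # ~30.0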
APA, Harvard, Vancouver, ISO, and other styles
33

Wong, Kok Wai, Tamás Gedeon, and Chun Che Fung. "Special Issue on Advances in Intelligent Data Processing." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 3 (March 20, 2007): 259–60. http://dx.doi.org/10.20965/jaciii.2007.p0259.

Full text
Abstract:
Technological advancement using intelligent techniques has provided solutions to many applications in diverse engineering disciplines. In application areas such as web mining, image processing, medicine, and robotics, just one intelligent data processing technique may be inadequate for handling a task, and a combination or hybrid of intelligent data processing techniques becomes necessary. The sharp increase in activities in the development of innovative intelligent data processing technologies has also attracted the interest of many researchers in applying intelligent data processing techniques to other application domains. In this special issue, we present 12 research papers focusing on different aspects of intelligent data processing and its applications. We start with a paper entitled "An Activity Monitor Design Based on Wavelet Analysis and Wireless Sensor Networks," which focuses on using wavelet analysis and wireless sensor networks for monitoring the human physical condition. The second paper, "An Approach in Designing Hierarchy of Fuzzy Behaviors for Mobile Robot Navigation," presents a hierarchical approach using fuzzy theory to assist in the task of mobile robot navigation. It also discusses the design of hierarchical behavior of mobile robots using sensors. The third paper, "Toward Natural Communication: Human-Robot Gestural Interaction Using Pointing," also works with robots, focusing more on the interaction between users and robots, in which the robot recognizes pointing by a human user through intelligent data processing. The fourth paper, "Embodied Conversational Agents for H5N1 Pandemic Crisis," examines the use of intelligent software bots as an interaction tool for crisis communication. The work is based on a novel Automated Knowledge Extraction Agent (AKEA). There is much interest in using intelligent data processing techniques for image processing and analysis, as shown in the next few papers. The fifth paper, "A Feature Vector Approach for Inter-Query Learning for Content-Based Image Retrieval," presents a relevance-feedback-based technique for content-based image retrieval. It extends the relevance feedback approach to capture the inter-query relationship between current and previous queries. The sixth paper, "Abstract Image Generation Based on Local Similarity Pattern," also falls in the area of image retrieval, using local similarity patterns to generate abstract images from a given set of images. Along the same line of similarity measures for image retrieval, the seventh paper, "Cross-Resolution Image Similarity Modeling," uses probabilistic and fuzzy theory to formulate cross-resolution image similarity modeling. The eighth paper, "Bayesian Spatial Autoregressive for Reducing Blurring Effect in Image," presents a Bayesian spatial autoregressive technique developed by Geweke and LeSage. The ninth paper, "Logistic GMDH-Type Neural Network and its Application to Identification of X-Ray Film Characteristic Curve," presents a class of neural networks for X-ray film processing and compares results with some conventional techniques. As digital entertainment and games grow increasingly popular, the tenth paper, "Classification of Online Game Players Using Action Transition Probability and Kullback Leibler Entropy," looks into the use of intelligent data processing for classifying online game players. The eleventh paper, "Parallel Learning Model and Topological Measurement for Self-Organizing Maps," presents the concept of a SOM parallel learning model that appears both robust and efficient. The twelfth paper, "Optimal Size Fuzzy Models," delineates concepts on how to make fuzzy systems more efficient. As guest editors for this issue, we thank the authors for their hard work. We also thank the reviewers for their assistance in the review process. All full papers submitted to this special issue have been peer-reviewed by at least two international reviewers in the area.
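To illustrate the idea behind the tenth paper, the following minimal Python sketch classifies a player by comparing an observed action-transition distribution against reference profiles using the Kullback-Leibler divergence; the profiles and labels are invented for illustration and are not the paper's data.

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL divergence between two discrete action-transition distributions.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Classify a player by the closest reference transition profile (toy data).
profiles = {"fighter": [0.7, 0.2, 0.1], "explorer": [0.2, 0.3, 0.5]}
observed = [0.6, 0.3, 0.1]
print(min(profiles, key=lambda name: kl_divergence(observed, profiles[name])))  # fighter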
APA, Harvard, Vancouver, ISO, and other styles
34

Luo, Zhiqing, Mingxuan Yan, Wei Wang, and Qian Zhang. "Non-intrusive Anomaly Detection of Industrial Robot Operations by Exploiting Nonlinear Effect." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 4 (December 21, 2022): 1–27. http://dx.doi.org/10.1145/3569477.

Full text
Abstract:
With the development of the Internet of Robotic Things concept, low-cost radio technologies open up many opportunities to facilitate monitoring systems for industrial robots, while the openness of the wireless medium exposes robots to replay and man-in-the-middle attackers, who send pre-recorded movement data to mislead the system. Recent advances advocate the use of high-resolution sensors to monitor robot operations, which, however, require invasive retrofits to the robots. To overcome this predicament, we present RobotScatter, a non-intrusive system that exploits the nonlinear effect of RF circuits to fuse the propagation of backscatter tags attached to the robot to defend against active attacks. Specifically, the backscatter propagation shaped by the tags depends significantly on the robot's movement operations, and it can be captured via the nonlinearity at the receiver to uniquely determine the tag's identity and the spatial movement trajectory. RobotScatter then profiles the robot movements to verify whether the received movement information matches the backscatter signatures, and thus detects the threat. We implement RobotScatter on two common robotic platforms, Universal Robot and iRobot Create, with over 1,500 operation cycles. The experimental results show that RobotScatter detects up to 94% of anomalies against small movement deviations of 10 mm/s in velocity and 2.6 cm in distance.
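The verification step the abstract describes, matching reported movement against the backscatter-derived signature, can be pictured with the following hedged Python sketch; the trajectory representation, sampling assumptions, and thresholds (echoing the reported 10 mm/s and 2.6 cm deviations) are illustrative, not RobotScatter's actual pipeline.

import numpy as np

def is_anomalous(reported, recovered, dt=1.0, v_tol=0.010, d_tol=0.026):
    # reported, recovered: (T, 3) end-effector positions in metres, sampled
    # every dt seconds; 'recovered' is the trajectory implied by the
    # backscatter signature. Thresholds mirror the deviations probed
    # in the paper (10 mm/s in velocity, 2.6 cm in distance).
    reported = np.asarray(reported, dtype=float)
    recovered = np.asarray(recovered, dtype=float)
    dist_err = np.linalg.norm(reported - recovered, axis=1).max()
    vel_err = np.linalg.norm(
        np.diff(reported, axis=0) - np.diff(recovered, axis=0), axis=1
    ).max() / dt
    return dist_err > d_tol or vel_err > v_tol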
APA, Harvard, Vancouver, ISO, and other styles
35

Sergeev, Sergey, Yuri Bubeev, Vitaly Usov, Mikhail Mikhaylyuk, Maxim Knyazkov, Alexey Polyakov, Anna Motienko, and Alexandr Khomyakov. "Virtual environments for modeling the interaction of operators with UAVs in closed spaces in potentially dangerous situations." Robotics and Technical Cybernetics 10, no. 2 (June 2022): 85–92. http://dx.doi.org/10.31776/rtcj.10201.

Full text
Abstract:
Effective interaction between humans and robotic devices is intended to provide a timely response to potentially dangerous situations arising in complex environments. One example is the use of Unmanned Aerial Vehicles (UAVs) by human operators to inspect enclosed spaces. These units allow the search and identification of target objects, helping a human operator identify signs of Potentially Dangerous Situations (PDSs). At the same time, controlling emerging situations in a closed (and often cluttered) space requires advanced means of spatial orientation based on artificial intelligence, computer vision, 3D visualization, local positioning and navigation, etc. Despite the progress made in autonomous UAV control, active human participation in detecting signs of PDSs is an important condition for maintaining the security of the internal environment of controlled premises. Human decision-making in situations of high uncertainty and incomplete data largely depends on means to improve the operator's situational awareness (external conditions of activity) and professional competencies (internal conditions of activity). To form them, it is necessary to develop a methodology and tools for building UAV application scenarios that give operators the necessary user experience on simulation models in a virtual environment of activity. The paper investigates the development of virtual environment systems (VES) for implementing UAV application scenarios in enclosed spaces, focusing on multicopters that allow both autonomous and manual control. As a result of this research, prototypes of UAVs for the inspection of enclosed spaces have been identified; the prospects of quadcopter use under various PDS scenarios have been shown; the composition of promising technologies ensuring UAV use in enclosed environments has been described; and a method for designing and 3D-visualizing UAV application scenarios using the original VES (named VirSim) has been proposed. The advantage of the proposed approach to modeling the use of UAVs in PDSs is the multivariate scenarios of operator activity, which contribute to the accumulation of user experience in human interaction with flying robots.
APA, Harvard, Vancouver, ISO, and other styles
36

Buyruk, Yusuf, and Gülen Çağdaş. "Interactive Parametric Design and Robotic Fabrication within Mixed Reality Environment." Applied Sciences 12, no. 24 (December 13, 2022): 12797. http://dx.doi.org/10.3390/app122412797.

Full text
Abstract:
In this study, we propose a method in which parametric design and robotic fabrication are combined into one unified framework and integrated within a mixed reality environment, where designers can interact with design and fabrication alternatives and manage this process in collaboration with other designers. To achieve this goal, the digital twin of both the design and robotic fabrication steps was created within a mixed reality environment. The proposed method was tested on a design product, which was defined with the shape-grammar method using parametric modeling tools. In this framework, designers can interact with both design and robotic fabrication parameters, and subsequent steps are generated instantly. Robotic fabrication can continue uninterrupted with human–robot collaboration. This study contributes to improving design and fabrication possibilities such as mass customization, and shortens the process from design to production. The user experience and augmented spatial feedback provided by mixed reality are richer than interaction with a computer screen. Since the whole process from parametric design to robotic fabrication can be controlled parametrically with hand gestures, the perception of reality is richer. The digital twin of parametric design and robotic fabrication is superimposed as holographic content on top of real-world images. Designers can interact with both the design and fabrication processes physically and virtually, and can collaborate with other designers.
APA, Harvard, Vancouver, ISO, and other styles
37

St-Onge, David, Florent Levillain, Elisabetta Zibetti, and Giovanni Beltrame. "Collective expression: how robotic swarms convey information with group motion." Paladyn, Journal of Behavioral Robotics 10, no. 1 (December 31, 2019): 418–35. http://dx.doi.org/10.1515/pjbr-2019-0033.

Full text
Abstract:
When faced with the need to implement a decentralized behavior for a group of collaborating robots, strategies inspired by swarm intelligence often avoid considering the human operator, granting the swarm full autonomy. However, field missions require at least sharing the output of the swarm with the operator. Unfortunately, little is known about users' perception of group behavior and dynamics, and there is no clear optimal interaction modality for swarms. In this paper, we focus on the movement of the swarm to convey information to a user: we believe that the interpretation of artificial states based on group motion can lead to promising natural interaction modalities. We implement a grammar of decentralized control algorithms to explore their expressivity. We define the expressivity of a movement as a metric to measure how natural, readable, or easily understandable it may appear. We then correlate expressivity with the control parameters for the distributed behavior of the swarm. A first user study confirms the relationship between inter-robot distance, temporal and spatial synchronicity, and the perceived expressivity of the robotic system. We follow up with a small group of users tasked with the design of expressive motion sequences to convey internal states using our grammar of algorithms. We comment on their design choices and assess the interpretation performance with a larger group of users. We show that some of the internal states were perceived as designed and discuss the parameters influencing the performance.
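Two of the control-related quantities the study correlates with perceived expressivity, inter-robot distance and motion synchronicity, can be computed from recorded trajectories roughly as in the Python sketch below; the exact metrics used in the paper are not reproduced here, so this formulation is an assumption.

import numpy as np

def swarm_motion_features(positions):
    # positions: (T, N, 2) planar trajectories of N robots over T time steps.
    T, N, _ = positions.shape
    diffs = positions[:, :, None, :] - positions[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)            # (T, N, N) pairwise distances
    iu = np.triu_indices(N, k=1)
    mean_dist = dists[:, iu[0], iu[1]].mean()         # mean inter-robot distance
    vel = np.diff(positions, axis=0)                  # per-step displacements
    unit = vel / (np.linalg.norm(vel, axis=-1, keepdims=True) + 1e-9)
    sync = np.linalg.norm(unit.mean(axis=1), axis=-1).mean()  # 1 = fully aligned headings
    return mean_dist, sync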
APA, Harvard, Vancouver, ISO, and other styles
38

Tipaldi, Gian Diego, and Kai Arras. "Planning Problems for Social Robots." Proceedings of the International Conference on Automated Planning and Scheduling 21 (March 22, 2011): 339–42. http://dx.doi.org/10.1609/icaps.v21i1.13481.

Full text
Abstract:
As robots enter environments that they share with people, human-aware planning and interaction become key tasks to be addressed. To do so, robots need to reason about the places and times when and where humans are engaged in which activity, and plan their actions accordingly. In this paper, we first address this issue by learning a nonhomogeneous spatial Poisson process whose rate function encodes the occurrence probability of human activities in space and time. We then present two planning problems for human–robot interaction in social environments. The first one is the maximum encounter probability planning problem, where a robot aims to find the path along which the probability of encountering a person is maximized. We are interested in two versions of this problem, with deadlines or with a certainty quota. The second one is the minimum interference coverage problem, where a robot performs a coverage task in a socially compatible way by reducing the hindrance or annoyance caused to people. An example is a noisy vacuum robot that has to cover the whole apartment, having learned that at lunch time the kitchen is a bad place to clean. Formally, the problems are time-dependent variants of known planning problems: MDPs and the prize-collecting TSP for the first problem, and the asymmetric TSP for the second. The challenge is that the cost functions of the arcs and nodes vary with time, and that execution time is more important than optimality, given the real-time constraints in robotic systems. We present experimental results using variants of known planners and formulate the problems as benchmarks for the community.
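For a nonhomogeneous Poisson process with rate function λ(x, t), the probability of at least one encounter along a space-time path is 1 − exp(−∫ λ dt), which is the quantity the first planning problem maximizes. The Python sketch below evaluates it for a discretized path; the rate function, time step, and all names are illustrative assumptions, not the paper's implementation.

import math

def encounter_probability(path, rate, dt=1.0):
    # path: sequence of (x, y) waypoints visited at fixed dt-second steps.
    # rate(x, y, t): learned Poisson rate of human activity in events/second.
    integral = sum(rate(x, y, k * dt) * dt for k, (x, y) in enumerate(path))
    return 1.0 - math.exp(-integral)

# Toy rate: the kitchen (near the origin) is busy around lunch time (t = 43200 s).
rate = lambda x, y, t: 0.01 * math.exp(-(x * x + y * y)) * math.exp(-(((t - 43200) / 3600) ** 2))
# A planner would pick the candidate path maximizing this probability, e.g.:
# best = max(candidate_paths, key=lambda p: encounter_probability(p, rate))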
APA, Harvard, Vancouver, ISO, and other styles
39

Ananthanarayanan, S. P., A. A. Goldenberg, and J. Mylopoulos. "A qualitative theoretical framework for ‘common-sense’ based multiple contact robotic manipulation." Robotica 12, no. 2 (March 1994): 175–86. http://dx.doi.org/10.1017/s0263574700016751.

Full text
Abstract:
This paper presents a qualitative theoretical formulation for the synthesis and analysis of multiple-contact dexterous manipulation of an object using a robot hand. The motivation for a qualitative theory is to build a formalisation of ‘human-like’ common-sense reasoning in robotic manipulation. Using this formalisation, a robot hand can perform finger-tip manipulative movements by analysing the physical laws that govern the robot hand, the object, and their interaction. Traditionally, such analyses have been framed in quantitative terms, leading to mathematical systems that become intractable very quickly. Also, quantitative synthesis and analysis often demand an accurate specification of the parameters in the universe of discourse, which is almost impossible to provide. The qualitative approach inherently avoids both of these problems. The qualitative theory is presented in three developmental stages. A qualitative framework of spatial information in the context of dexterous manipulation is provided. Qualitative models of an object configuration, and of the transformations in it that occur during a manipulation process, are developed. Finally, the development of a ‘quasi-static’ qualitative framework of a dexterous manipulation process that performs the desired object transformation is presented.
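The flavor of qualitative reasoning the paper formalises can be conveyed by sign algebra, where magnitudes are dropped and only '+', '0', and '-' are kept. The Python sketch below shows qualitative addition and its inherent ambiguity; it is a generic illustration of the technique, not the authors' formalism.

def q_add(a, b):
    # Qualitative addition over the signs '+', '0', '-': exact magnitudes
    # are discarded, so adding opposite signs yields the ambiguous value '?'.
    if a == '0':
        return b
    if b == '0' or a == b:
        return a
    return '?'

# Net qualitative force on a grasped object from two fingertip contacts:
print(q_add('+', '-'))  # '?': equilibrium cannot be decided without magnitudes
print(q_add('+', '+'))  # '+': both contacts push the same way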
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Huanbo, Katherine J. Kuchenbecker, and Georg Martius. "A soft thumb-sized vision-based sensor with accurate all-round force perception." Nature Machine Intelligence 4, no. 2 (February 2022): 135–45. http://dx.doi.org/10.1038/s42256-021-00439-3.

Full text
Abstract:
Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer vision techniques; however, their physical design and the information they provide do not yet meet the requirements of real applications. We present a robust, soft, low-cost, vision-based, thumb-sized three-dimensional haptic sensor named Insight, which continually provides a directional force-distribution map over its entire conical sensing surface. Constructed around an internal monocular camera, the sensor has only a single layer of elastomer over-moulded on a stiff frame to guarantee sensitivity, robustness and soft contact. Furthermore, Insight uniquely combines photometric stereo and structured light using a collimator to detect the three-dimensional deformation of its easily replaceable flexible outer shell. The force information is inferred by a deep neural network that maps images to the spatial distribution of three-dimensional contact force (normal and shear). Insight has an overall spatial resolution of 0.4 mm, a force magnitude accuracy of around 0.03 N and a force direction accuracy of around five degrees over a range of 0.03–2 N for numerous distinct contacts with varying contact area. The presented hardware and software design concepts can be transferred to a wide variety of robot parts.
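The learned mapping the abstract describes, from a camera image of the deformed shell to a spatial distribution of normal and shear forces, can be pictured with a toy convolutional network. The PyTorch sketch below is an assumed stand-in for illustration, with invented layer sizes; it is not the Insight architecture.

import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    # Maps a camera frame to a per-location force map with three channels
    # (one normal and two shear components), mimicking the input/output
    # interface described in the abstract.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=1),
        )

    def forward(self, img):               # img: (B, 3, H, W)
        return self.net(img)              # (B, 3, H, W) force distribution

forces = ForceMapNet()(torch.randn(1, 3, 64, 64))
print(forces.shape)  # torch.Size([1, 3, 64, 64])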
APA, Harvard, Vancouver, ISO, and other styles
41

Wiethoff, Alexander, Marius Hoggenmueller, Beat Rossmy, Linda Hirsch, Luke Hespanhol, and Martin Tomitsch. "A Media Architecture Approach for Designing the Next Generation of Urban Interfaces." Interaction Design and Architecture(s), no. 48 (June 10, 2021): 9–32. http://dx.doi.org/10.55612/s-5002-048-001.

Full text
Abstract:
The augmentation of the built and urban environment with digital media has evolved and matured over recent years. Cities are seeing a rapid rise of various technologies, a trend also accelerated by global crises. Consequently, new urban interfaces are emerging that integrate next-generation technologies, such as sustainable interface materials and urban robotic systems. However, their development is primarily driven by technological concerns, leaving behind social, aesthetic, and spatial considerations. By analyzing our own media architecture research projects and real-world applications from the past two decades, we offer a structural approach for developing these new urban interfaces. The individual cases provide early insights and challenges related to prototyping and augmenting contexts with novel input and output modalities. These result in common, preliminarily observed patterns in the process of integrating next-generation technologies into urban environments, in response to continuously evolving social needs.
APA, Harvard, Vancouver, ISO, and other styles
42

Murphy, R. R. "Human–Robot Interaction in Rescue Robotics." IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 34, no. 2 (May 2004): 138–53. http://dx.doi.org/10.1109/tsmcc.2004.826267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hussain, Shahid, Prashant K. Jamwal, and Paulette Van Vliet. "Design synthesis and optimization of a 4-SPS intrinsically compliant parallel wrist rehabilitation robotic orthosis." Journal of Computational Design and Engineering 8, no. 6 (November 11, 2021): 1562–75. http://dx.doi.org/10.1093/jcde/qwab061.

Full text
Abstract:
Neuroplasticity allows the human nervous system to adapt and relearn motor control following stroke. Rehabilitation therapy, which enhances neuroplasticity, can be made more effective if assisted by robotic tools. In this paper, a novel 4-SPS parallel robot has been developed to provide recovery of wrist movements post-stroke. The novel mechanism presented here was inspired by the forearm anatomy and can provide the rotational degrees of freedom required for all wrist movements. The robot design has been discussed in detail along with the necessary constructional, kinematic, and static analyses. The spatial workspace of the robot is estimated considering various dimensional and application-specific constraints besides checking for singular configurations. The wrist robot has been further evaluated using important performance indices such as condition number, actuator forces, and stiffness. The pneumatic artificial muscles exhibit varying stiffness, and therefore, workspace points are reached with different overall stiffness of the robot. It is essential to assess robot workspace points that can be reached with positive forces in actuators while maintaining a positive definite overall stiffness matrix. After the above analysis, design optimization has been carried out using an evolutionary algorithm whereby three critical criteria are optimized simultaneously for optimal wrist robot design.
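The feasibility test described in the abstract, requiring positive actuator forces, a positive definite overall stiffness matrix, and a well-conditioned configuration, might be checked at each workspace point along the lines of this Python sketch; the function name, inputs, and condition-number bound are illustrative assumptions, not the authors' code.

import numpy as np

def workspace_point_feasible(jacobian, stiffness, actuator_forces, cond_max=1e3):
    # A workspace point is kept if the actuator forces are all positive, the
    # overall stiffness matrix is positive definite, and the Jacobian is
    # not close to a singular configuration.
    positive_forces = np.all(np.asarray(actuator_forces) > 0)
    positive_definite = np.all(np.linalg.eigvalsh(stiffness) > 0)
    well_conditioned = np.linalg.cond(jacobian) < cond_max
    return positive_forces and positive_definite and well_conditioned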
APA, Harvard, Vancouver, ISO, and other styles
44

Abdelaal, Alaa Eldin, Prateek Mathur, and Septimiu E. Salcudean. "Robotics In Vivo: A Perspective on Human–Robot Interaction in Surgical Robotics." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (May 3, 2020): 221–42. http://dx.doi.org/10.1146/annurev-control-091219-013437.

Full text
Abstract:
This article reviews recent work on surgical robots that have been used or tested in vivo, focusing on aspects related to human–robot interaction. We present the general design requirements that should be considered when developing such robots, including the clinical requirements and the technologies needed to satisfy them. We also discuss the human aspects related to the design of these robots, considering the challenges facing surgeons when using robots in the operating room, and the safety issues of such systems. We then survey recent work in seven different surgical settings: urology and gynecology, orthopedic surgery, cardiac surgery, head and neck surgery, neurosurgery, radiotherapy, and bronchoscopy. We conclude with the open problems and recommendations on how to move forward in this research area.
APA, Harvard, Vancouver, ISO, and other styles
45

Nakauchi, Yasushi. "Special Issue on Human Robot Interaction." Journal of Robotics and Mechatronics 14, no. 5 (October 20, 2002): 431. http://dx.doi.org/10.20965/jrm.2002.p0431.

Full text
Abstract:
Recent advances in robotics are disseminating robots into the social living environment as humanoids, pets, and caregivers. Novel human-robot interaction techniques and interfaces must be developed, however, to ensure that such robots interact as expected in daily life and work. Unlike conventional personal computers, such robots may assume a variety of configurations, such as industrial, wheel-based, ambulatory, remotely operated, autonomous, and wearable. They may also implement different communication modalities, including voice, video, haptics, and gestures. All of these aspects require that research on human-robot interaction become interdisciplinary, combining research from such fields as robotics, ergonomics, computer science, and psychology. In the field of computer science, new directions in human-computer interaction are emerging beyond the graphical user interface (GUI). These include wearable, ubiquitous, and real-world computing. Such advances are thereby bridging the gap between robotics and computer science. The open-ended problems we potentially face include the following: What is the most desirable type of interaction between human beings and robots? What sort of technology will enable these interactions? How will human beings accept robots in their daily life and work? We are certain that readers of this special issue will find many of the answers and become open to future directions concerning these problems. It will be a great pleasure to the editors if readers find useful information herein.
APA, Harvard, Vancouver, ISO, and other styles
46

Kamesheva, S. B. "Interface development trends: social robotics and human-machine interaction." REPORTS ADYGE (CIRCASSIAN) INTERNATIONAL ACADEMY OF SCIENCES 20, no. 4 (2020): 25–30. http://dx.doi.org/10.47928/1726-9946-2020-20-4-25-30.

Full text
Abstract:
This article discusses the development of new technologies in the field of social robotics and human-machine interaction interfaces. A comparative analysis of the availability of these technologies in Russia and worldwide is presented. The consequences of the development and integration of social robotics into human life are considered.
APA, Harvard, Vancouver, ISO, and other styles
47

Kostjukov, V. A., M. Y. Medvedev, and V. Kh Pshikhopov. "Algorithms for Path Planning in a Group of Mobile Robots in an Environment with Obstacles with a Given Template." Mekhatronika, Avtomatizatsiya, Upravlenie 24, no. 1 (January 12, 2023): 33–45. http://dx.doi.org/10.17587/mau.24.33-45.

Full text
Abstract:
A method is proposed for solving the problem of planning the movement of a group of ground-based robotic platforms (UGRs) with the requirement to maintain a given formation in the presence of stationary obstacles and sources of disturbances. The task of calculating the trajectory of the leading UGR is highlighted, coupled with the use of a displacement planner and subsequent smoothing of the resulting trajectory according to the method considered in the first part of this work. The trajectories of the follower elements of the group are determined by constructing offset spatial curves along which these elements should move, taking into account a given configuration or the requirement to preserve certain average kinematic parameters of the elements along their trajectories. To solve the problem of the group evading the influence of sources of disturbances, the method considered in the authors' previous works is proposed. It is based on calculating the probabilities that the elements of the group successfully traverse their trajectories. These probabilities can be found after estimating the parameters of the characteristic probability functions of the sources, which describe the nature of their impact on moving objects over small time intervals. In this article, this method is modified by additionally optimizing the length of the resulting spatial trajectory for each UGR, taking into account a given degree of permissible deviation from the original curve. A technique has been developed that finds target trajectories of the leading and follower UGRs of the group whose probability of successful traversal exceeds a specified target value. The methodology is generalized to the case when the optimization criterion is the probability of successful completion by only part of the UGR group. Simulation results confirm the effectiveness of the proposed method for planning the trajectories of robots forming a group in a field of repeller sources.
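One simple way to realize the offset spatial curves for the follower platforms is to displace the leader's smoothed path along its unit normals, as in the Python sketch below; this is a hedged illustration of the geometric idea, not the authors' planner, and all names are assumptions.

import numpy as np

def offset_path(leader_xy, lateral_offset):
    # Displace the leader's path along its unit normals to obtain a
    # follower trajectory at a fixed lateral distance.
    p = np.asarray(leader_xy, dtype=float)            # (T, 2) waypoints
    t = np.gradient(p, axis=0)                        # path tangents
    t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-9
    n = np.stack([-t[:, 1], t[:, 0]], axis=1)         # left-hand normals
    return p + lateral_offset * n

# Two followers flanking the leader at +/- 1.5 m:
leader = np.column_stack([np.linspace(0, 10, 50), np.sin(np.linspace(0, 3, 50))])
left, right = offset_path(leader, 1.5), offset_path(leader, -1.5)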
APA, Harvard, Vancouver, ISO, and other styles
48

Gross, Mark D., and Keith Evan Green. "Architectural robotics, inevitably." Interactions 19, no. 1 (January 2012): 28–33. http://dx.doi.org/10.1145/2065327.2065335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Tyler, Neil. "Human Robot Interactions." New Electronics 51, no. 22 (December 10, 2019): 12–14. http://dx.doi.org/10.12968/s0047-9624(22)61505-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Aspragathos, Nikos, Vassilis Moulianitis, and Panagiotis Koustoumpardis. "Special Issue on Human–Robot Interaction (HRI)." Robotica 38, no. 10 (October 2020): 1715–16. http://dx.doi.org/10.1017/s0263574720000946.

Full text
Abstract:
Human–robot interaction (HRI) is one of the most rapidly growing research fields in robotics and promising for the future of robotics technology. Although numerous significant research results in HRI have been presented in recent years, challenges remain in several critical topics, which can be summarized as: (i) collision and safety, (ii) virtual guides, (iii) cooperative manipulation, (iv) teleoperation and haptic interfaces, and (v) learning by observation or demonstration. In physical HRI research, the complementarity of human and robot capabilities is carefully considered to advance their cooperation in a safe manner. New advanced control systems should be developed so that the robot acquires the ability to adapt easily to human intentions and to the given task. Possible applications requiring co-manipulation include the cooperative transportation of bulky and heavy objects, manufacturing processes such as assembly, and surgery.
APA, Harvard, Vancouver, ISO, and other styles