Journal articles on the topic 'Motion-Capturing, human motion'

Consult the top 50 journal articles for your research on the topic 'Motion-Capturing, human motion.'


1

Cheng, Zhiqing, Anthony Ligouri, Ryan Fogle, and Timothy Webb. "Capturing Human Motion in Natural Environments." Procedia Manufacturing 3 (2015): 3828–35. http://dx.doi.org/10.1016/j.promfg.2015.07.886.

2

Hsu, Shih-Chung, Jun-Yang Huang, Wei-Chia Kao, and Chung-Lin Huang. "Human body motion parameters capturing using kinect." Machine Vision and Applications 26, no. 7-8 (September 21, 2015): 919–32. http://dx.doi.org/10.1007/s00138-015-0710-1.

3

Adelsberger, Rolf. "Capturing human motion one step at a time." XRDS: Crossroads, The ACM Magazine for Students 20, no. 2 (December 2013): 38–42. http://dx.doi.org/10.1145/2538692.

4

Park, Sang Il, and Jessica K. Hodgins. "Capturing and animating skin deformation in human motion." ACM Transactions on Graphics 25, no. 3 (July 2006): 881–89. http://dx.doi.org/10.1145/1141911.1141970.

5

Abson, Karl, and Ian Palmer. "Motion capture: capturing interaction between human and animal." Visual Computer 31, no. 3 (March 22, 2014): 341–53. http://dx.doi.org/10.1007/s00371-014-0929-2.

6

Lim, Chee Kian, Zhiqiang Luo, I.-Ming Chen, and Song Huat Yeo. "Wearable wireless sensing system for capturing human arm motion." Sensors and Actuators A: Physical 166, no. 1 (March 2011): 125–32. http://dx.doi.org/10.1016/j.sna.2010.10.015.

7

Zhou, Dong, Zhi Qi Guo, Mei Hui Wang, and Chuan Lv. "Human Factors Assessment Based on the Technology of Human Motion Capturing." Applied Mechanics and Materials 44-47 (December 2010): 532–36. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.532.

Abstract:
We aim to combine human motion capture technology with virtual reality to carry out human factors assessment. The distinguishing feature of this method is that it yields both reliable data on the maintenance worker and a quantitative analytic result based on the virtual environment. The paper considers human motion capture technology, ergonomics evaluation, and interface technology together, and develops an overall technical program for human factors evaluation based on motion capture. Its innovations include translating the captured motion data into the virtual environment, building the virtual human model, and driving virtual human simulation from data captured at the working site; replay of the captured data within the virtual environment has also been achieved. Using the data captured at the working site, we quantitatively analyze the worker's postures, fatigue, and applied forces and torques during the maintenance process, and we verify the feasibility of this technology through an example. The method provides a new, operational technique for human factors assessment in the maintenance of aviation equipment.
8

Wang, Song Shan, and Yan Qing Qi. "A Virtual Human's Driving Method Based on Motion Capture Data." Advanced Materials Research 711 (June 2013): 500–505. http://dx.doi.org/10.4028/www.scientific.net/amr.711.500.

Abstract:
This article presents a method for driving a virtual human in the Jack software with motion capture data. First, Jack's skeleton model is simplified to match the skeleton of the captured BVH data. Second, an Euler angle rotation equation is set up to map joint angles from BVH to Jack. Finally, the method is implemented in a program, and an example shows that it improves Jack's human motion simulation using the captured human data.
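The joint-angle mapping step described in the abstract above can be sketched roughly as follows. This is a minimal illustration only: the ZXY channel order for BVH, the XYZ convention for the target skeleton, and all function names are assumptions, not details from the paper.

```python
import numpy as np

def bvh_zxy_to_matrix(z, x, y):
    """Rotation matrix for intrinsic Z-X-Y Euler angles (radians),
    a channel order commonly found in BVH files (assumed here)."""
    cz, sz = np.cos(z), np.sin(z)
    cx, sx = np.cos(x), np.sin(x)
    cy, sy = np.cos(y), np.sin(y)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Rz @ Rx @ Ry

def matrix_to_xyz(R):
    """Extract intrinsic X-Y-Z Euler angles for a target skeleton
    assumed to expect that order (valid away from gimbal lock)."""
    y = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    x = np.arctan2(-R[1, 2], R[2, 2])
    z = np.arctan2(-R[0, 1], R[0, 0])
    return x, y, z

def xyz_to_matrix(x, y, z):
    """Rebuild the rotation from intrinsic X-Y-Z angles (for checking)."""
    cx, sx = np.cos(x), np.sin(x)
    cy, sy = np.cos(y), np.sin(y)
    cz, sz = np.cos(z), np.sin(z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz
```

Away from gimbal lock the round trip recovers the same rotation, which is the essence of re-targeting BVH joint angles onto a skeleton that expects a different Euler convention.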
9

Su, Hai Long, and Da Wei Zhang. "Study on Error Compensation of Human Motion Analysis System." Applied Mechanics and Materials 48-49 (February 2011): 1149–53. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.1149.

Abstract:
A human motion capture system based on a new error compensation technique was developed and used to assess human motion. Building on three-dimensional reconstruction and the essential factors that influence measurement accuracy, a measurement error theory was established and the human motion analysis system was modified and improved. The experimental data indicate that the modified and improved system measures and captures human motion more precisely than the original system.
10

Alexiadis, Dimitrios S., Anargyros Chatzitofis, Nikolaos Zioulis, Olga Zoidi, Georgios Louizis, Dimitrios Zarpalas, and Petros Daras. "An Integrated Platform for Live 3D Human Reconstruction and Motion Capturing." IEEE Transactions on Circuits and Systems for Video Technology 27, no. 4 (April 2017): 798–813. http://dx.doi.org/10.1109/tcsvt.2016.2576922.

11

Rosell, Jan, Raúl Suárez, Néstor García, and Muhayy Ud Din. "Planning Grasping Motions for Humanoid Robots." International Journal of Humanoid Robotics 16, no. 06 (December 2019): 1950041. http://dx.doi.org/10.1142/s0219843619500415.

Abstract:
This paper addresses the problem of obtaining the required motions for a humanoid robot to perform grasp actions trying to mimic the coordinated hand–arm movements humans do. The first step is the data acquisition and analysis, which consists in capturing human movements while grasping several everyday objects (covering four possible grasp types), mapping them to the robot and computing the hand motion synergies for the pre-grasp and grasp phases (per grasp type). Then, the grasp and motion synthesis step is done, which consists in generating potential grasps for a given object using the four family types, and planning the motions using a bi-directional multi-goal sampling-based planner, which efficiently guides the motion planning following the synergies in a reduced search space, resulting in paths with human-like appearance. The approach has been tested in simulation, thoroughly compared with other state-of-the-art planning algorithms obtaining better results, and also implemented in a real robot.
12

XU, WEIWEI, ZHIGENG PAN, and MINGMIN ZHANG. "FOOTPRINT SAMPLING-BASED MOTION EDITING." International Journal of Image and Graphics 03, no. 02 (April 2003): 311–24. http://dx.doi.org/10.1142/s0219467803001020.

Abstract:
In this paper, we present a motion editing algorithm for human biped locomotion captured by a motion capture device. Our algorithm adopts footprints to describe the space-time constraints that must be satisfied during biped locomotion; the footprints also serve as an interface through which the user can control the space-time constraints directly. A real-time Inverse Kinematics (IK) solver is adapted to compute the configuration of the human body, and a motion displacement mapping is then constructed using hierarchical B-splines. To facilitate the IK solver, we propose a sampling-based scheme for generating the root trajectory, with Hermite interpolation employed to generate the whole root trajectory; this scheme provides a speedup to root trajectory generation. The performance of our algorithm is further enhanced by the real-time IK solver, which directly computes the displacement angles as the solution.
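The cubic Hermite step used for the root trajectory can be illustrated with the standard basis form — a generic textbook sketch, not the paper's exact formulation:

```python
def cubic_hermite(p0, p1, m0, m1, t):
    """Cubic Hermite segment from p0 to p1 with end tangents m0, m1,
    evaluated at parameter t in [0, 1] using the four basis polynomials."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0    # h00: starts at p0
            + (t3 - 2 * t2 + t) * m0      # h10: start tangent
            + (-2 * t3 + 3 * t2) * p1     # h01: ends at p1
            + (t3 - t2) * m1)             # h11: end tangent
```

Chaining such segments between sampled root positions, with tangents chosen for C1 continuity, yields a smooth whole-trajectory interpolant.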
13

Panaite, Arun Fabian, Monica Leba, Marius Leonard Olar, Remus Constantin Sibisanu, and Lilla Pellegrini. "Human Arm Motion Capture using Gyroscopic Sensors." MATEC Web of Conferences 343 (2021): 08007. http://dx.doi.org/10.1051/matecconf/202134308007.

Abstract:
By using the most rudimentary microcontroller chips, which receive data from sensors and transmit it to a computer system through a virtual serial port, the motion of many objects, bodies, and joints can be captured. Capturing the motion and reproducing it live is not the only use for the data: recording and studying the motion data can reduce a lot of work in a wide range of domains. Using the simplest methods to capture the data also makes it widely accessible for learning and editing, and allows systems that need very little processing power, granting data access even to less capable computers. We propose using two MPU-6050 MEMS sensors and an Arduino UNO microcontroller connected to a computer for data acquisition, to capture the motion of a human arm and reproduce it in a projected environment. Experiments conducted by other researchers and developers have used a larger number of sensors with much more complex data acquisition and recording systems, whereas our research reduces the number of sensors to just two. One of the high-impact innovations of this system in particular is that we virtually hook the end of one sensor to the tip of the other, creating a virtual motion chain.
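On the host side, ingesting such a two-sensor stream might look like the sketch below. The 12-value CSV packet layout and the function name are purely hypothetical, since the paper does not specify its wire format:

```python
def parse_imu_line(line):
    """Parse one serial line from the Arduino into per-sensor readings.
    Assumes a hypothetical CSV layout: accel x,y,z then gyro x,y,z for
    sensor 1, followed by the same six values for sensor 2."""
    values = [float(v) for v in line.strip().split(",")]
    if len(values) != 12:
        raise ValueError("expected 12 comma-separated values")
    s1, s2 = values[:6], values[6:]
    return ({"accel": s1[:3], "gyro": s1[3:]},
            {"accel": s2[:3], "gyro": s2[3:]})
```

In practice the line would come from the virtual serial port (e.g. via a serial library's `readline`); the parser itself stays the same.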
14

Shi, Guang Tian, and Shuai Li. "Study on the Application of UWB Positioning Technology in Multiplayer Mechanical Motion Capture System." Applied Mechanics and Materials 416-417 (September 2013): 1341–45. http://dx.doi.org/10.4028/www.scientific.net/amm.416-417.1341.

Abstract:
Because a mechanical motion capture system can only capture the motion data of the human body and cannot provide positioning in three-dimensional space, it is suitable for capturing only one person's motion data. This paper introduces UWB positioning technology into a multiplayer mechanical motion capture system: the system obtains each performer's motion data through mechanical motion capture and each performer's coordinate data through UWB positioning, and then, by integrating and computing over these two kinds of data, acquires the complete motion data of every performer.
15

Aminian, Kamiar, and Bijan Najafi. "Capturing human motion using body-fixed sensors: outdoor measurement and clinical applications." Computer Animation and Virtual Worlds 15, no. 2 (April 5, 2004): 79–94. http://dx.doi.org/10.1002/cav.2.

16

KIM, JUNG-YUP, and YOUNG-SEOG KIM. "HUMAN-LIKE GAIT GENERATION FOR BIPED ANDROID ROBOT USING MOTION CAPTURE AND ZMP MEASUREMENT SYSTEM." International Journal of Humanoid Robotics 07, no. 04 (December 2010): 511–34. http://dx.doi.org/10.1142/s0219843610002155.

Abstract:
This article proposes a novel strategy to generate both a human-like walking pattern and a human-like zero moment point (ZMP) trajectory for a biped android robot. In general, the motion-capture technique has been widely utilized to obtain a walking pattern that is kinematically similar to the walking of a human. However, in addition to kinematic considerations, a suitable ZMP shaping technique is necessary to apply the human gait derived by motion capturing to biped robots more effectively. In previous research by the authors, a walking pattern generation strategy was developed considering the kinematics using motion capturing and Fourier fitting. However, it was found that there were differences between the calculated ZMP trajectory of the earlier research and the measured ZMP trajectory directly derived from the sensor in this research. Therefore, the differences and their factors are analyzed and a new strategy is proposed that effectively reduces the differences between them. Finally, the proposed strategy is shown to be effective for generating human-like walking pattern and ZMP trajectory for biped android robots through stick figure simulations.
17

Lee, Jihong, and Insoo Ha. "Real-Time Motion Capture for a Human Body using Accelerometers." Robotica 19, no. 6 (September 2001): 601–10. http://dx.doi.org/10.1017/s0263574701003319.

Abstract:
In this paper we propose a set of techniques for real-time motion capture of a human body. The proposed motion capture system is based on low-cost accelerometers and is capable of identifying the body configuration by extracting gravity-related terms from the sensor data. One sensor unit is composed of 3 accelerometers arranged orthogonally to each other and can identify the 2 rotation angles of a joint with 2 degrees of freedom. A geometric fusion technique is applied to cope with the uncertainty of sensor data. A practical calibration technique is also proposed to handle errors in aligning the sensing axes to the coordinate axes. For the case where motion acceleration is not negligible compared with gravitational acceleration, a compensation technique to extract the gravitational component from the sensor data is proposed. Experimental results are included both for the individual techniques and for human motion capture with graphics.
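The gravity-based angle extraction described above can be sketched for the static case, where the accelerometer reads only the gravity direction. The axis conventions and function name here are assumptions for illustration:

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Two rotation angles from one static triaxial accelerometer reading,
    using only the gravity direction. Valid when motion acceleration is
    negligible compared with gravity (the uncompensated case)."""
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the y axis
    roll = math.atan2(ay, az)                    # rotation about the x axis
    return pitch, roll
```

When motion acceleration is significant, a compensation step (as the paper proposes) must first remove the non-gravitational component before these formulas apply.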
18

Fu, Qiang, Xingui Zhang, Jinxiu Xu, and Haimin Zhang. "Capture of 3D Human Motion Pose in Virtual Reality Based on Video Recognition." Complexity 2020 (November 20, 2020): 1–17. http://dx.doi.org/10.1155/2020/8857748.

Abstract:
Motion pose capture technology can effectively solve the difficulty of defining character motion in 3D animation production and greatly reduce the workload of character motion control, thereby improving the efficiency of animation development and the fidelity of character motion. Motion gesture capture technology is widely used in virtual reality systems, virtual training grounds, and real-time tracking of the motion trajectories of general objects. This paper proposes an attitude estimation algorithm suited to embedded platforms. The previous centralized Kalman filter is divided into two-step Kalman filtering: according to the different characteristics of the sensors, their measurements are processed separately to isolate the cross-influence between sensors. An adaptive adjustment method based on fuzzy logic is proposed, in which the acceleration, angular velocity, and geomagnetic field strength of the environment serve as the fuzzy-logic inputs used to judge the motion state of the carrier and then adjust the covariance matrix of the filter; the adaptive adjustment of the sensor is thereby converted into recognition of the motion state. To study human motion posture capture, a verification experiment was designed around an existing laboratory robotic arm, and the experiment shows that the studied capture method performs well. Capture experiments on human body motion gestures show that the obtained pose angle information restores the human body motion well. A visual model of human motion posture capture was established, and comparison with the real situation showed that the simulation reproduced the motion process well. For human motion recognition, a two-class model was designed and tested on daily human behaviors. Experiments show that two-class human motion gesture capture and recognition achieves good results, with SVC performing excellently on the two-class recognition task: with all optimization algorithms, the accuracy rate exceeds 90%, and the final recognition accuracy also exceeds 90%. The time required for human motion gesture capture and recognition is less than 2 s.
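The idea of adapting the filter's covariance to the sensed motion state can be sketched with a scalar Kalman update. This is a toy stand-in for the paper's two-step, fuzzy-logic design; the simple noise-scaling rule is an assumption:

```python
class AdaptiveKalman1D:
    """Scalar Kalman filter whose measurement noise grows with an
    externally supplied motion level (crude stand-in for the paper's
    fuzzy-logic covariance adjustment)."""

    def __init__(self, q=1e-3, r=1e-1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and base measurement noise

    def update(self, z, motion_level=0.0):
        # Trust the measurement less when the carrier is moving hard.
        r = self.r * (1.0 + motion_level)
        self.p += self.q                    # predict: variance inflates
        k = self.p / (self.p + r)           # Kalman gain
        self.x += k * (z - self.x)          # correct toward measurement
        self.p *= (1.0 - k)
        return self.x
```

With `motion_level` derived from accelerometer/gyro/magnetometer activity, the same mechanism generalizes to the vector case by scaling the measurement covariance matrix.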
19

Dewangga, Sandy Akbar, Handayani Tjandrasa, and Darlis Herumurti. "Robot Motion Control Using the Emotiv EPOC EEG System." Bulletin of Electrical Engineering and Informatics 7, no. 2 (June 1, 2018): 279–85. http://dx.doi.org/10.11591/eei.v7i2.678.

Abstract:
Brain-computer interfaces have been explored for years with the intent of using human thoughts to control mechanical systems. By capturing the transmission of signals directly from the human brain, the electroencephalogram (EEG), human thoughts can be turned into motion commands for a robot. This paper presents a prototype for an EEG-based brain-actuated robot control system using mental commands. In this study, Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) methods were combined to establish the best model. A dataset containing features of EEG signals was obtained from the subject non-invasively using an Emotiv EPOC headset. The best model was then used by the brain-computer interface (BCI) to classify the EEG signals into robot motion commands to control the robot directly. The classification gave an average accuracy of 69.06%.
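An LDA-plus-SVM combination of this general kind can be sketched with scikit-learn on synthetic data. This is a generic pipeline on a stand-in dataset, not the authors' exact model, features, or Emotiv data:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for EEG feature vectors (14 features loosely echoing
# the 14-channel Emotiv EPOC; the real dataset is not reproduced here).
X, y = make_classification(n_samples=400, n_features=14, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LDA as supervised dimensionality reduction, then an SVM classifier.
model = make_pipeline(StandardScaler(),
                      LinearDiscriminantAnalysis(n_components=2),
                      SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Note that LDA can project to at most `n_classes - 1` components, which is why `n_components=2` is the ceiling for a three-class command set.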
20

Mahieu, P., P. J. De Roo, G. T. Gomes, E. Audenaert, L. De Wilde, and R. Verdonk. "Motion capturing devices for assessment of upper limb kinematics: a comparative study." Computer Methods in Biomechanics and Biomedical Engineering 10, sup1 (January 2007): 143–44. http://dx.doi.org/10.1080/10255840701479388.

21

Liu, Chen, Anna Wang, Chunguang Bu, Wenhui Wang, and Haijing Sun. "Human Motion Tracking with Less Constraint of Initial Posture from a Single RGB-D Sensor." Sensors 21, no. 9 (April 26, 2021): 3029. http://dx.doi.org/10.3390/s21093029.

Abstract:
High-quality and complete 4D reconstruction of human motion is of great significance for immersive VR and even human operation. However, it faces inevitable self-scanning constraints, and tracking under monocular settings also has strict restrictions. In this paper, we propose a human motion capture system that combines human priors with performance capture using only a single RGB-D sensor. To break the self-scanning constraint, we generate a complete mesh from the front-view input alone to initialize the geometric capture. Most previous methods initialize their systems in a strict way in order to construct a correct warping field; to maintain high fidelity while making the system easier to use, we update the model while capturing motion. Additionally, we blend in human priors to improve the reliability of model warping. Extensive experiments demonstrate that our method can be used more comfortably while maintaining credible geometric warping and remaining free of self-scanning constraints.
22

Hülsken, Frank, Christian Eckes, Roland Kuck, Jörg Unterberg, and Sophie Jörg. "Modeling and Animating Virtual Humans for Real-Time Applications." International Journal of Virtual Reality 6, no. 4 (January 1, 2007): 11–20. http://dx.doi.org/10.20870/ijvr.2007.6.4.2704.

Abstract:
We report on the workflow for the creation of realistic virtual anthropomorphic characters. 3D-models of human heads have been reconstructed from real people by following a structured light approach to 3D-reconstruction. We describe how these high-resolution models have been simplified and articulated with blend shape and mesh skinning techniques to ensure real-time animation. The full-body models have been created manually based on photographs. We present a system for capturing whole body motions, including the fingers, based on an optical motion capture system with 6 DOF rigid bodies and cybergloves. The motion capture data was processed in one system, mapped to a virtual character and visualized in real-time. We developed tools and methods for quick post processing. To demonstrate the viability of our system, we captured a library consisting of more than 90 gestures.
23

Deng, Didan, Zhaokang Chen, Yuqian Zhou, and Bertram Shi. "MIMAMO Net: Integrating Micro- and Macro-Motion for Video Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2621–28. http://dx.doi.org/10.1609/aaai.v34i03.5646.

Abstract:
Spatial-temporal feature learning is of vital importance for video emotion recognition. Previous deep network structures often focused on macro-motion which extends over long time scales, e.g., on the order of seconds. We believe integrating structures capturing information about both micro- and macro-motion will benefit emotion prediction, because human perceive both micro- and macro-expressions. In this paper, we propose to combine micro- and macro-motion features to improve video emotion recognition with a two-stream recurrent network, named MIMAMO (Micro-Macro-Motion) Net. Specifically, smaller and shorter micro-motions are analyzed by a two-stream network, while larger and more sustained macro-motions can be well captured by a subsequent recurrent network. Assigning specific interpretations to the roles of different parts of the network enables us to make choice of parameters based on prior knowledge: choices that turn out to be optimal. One of the important innovations in our model is the use of interframe phase differences rather than optical flow as input to the temporal stream. Compared with the optical flow, phase differences require less computation and are more robust to illumination changes. Our proposed network achieves state of the art performance on two video emotion datasets, the OMG emotion dataset and the Aff-Wild dataset. The most significant gains are for arousal prediction, for which motion information is intuitively more informative. Source code is available at https://github.com/wtomin/MIMAMO-Net.
24

Sziebig, Gábor, Péter Zanaty, Péter Korondi, and Bjørn Solvang. "Cog Framework - 3D Visualization for Mobile Robot Teleoperation." Advanced Materials Research 222 (April 2011): 357–61. http://dx.doi.org/10.4028/www.scientific.net/amr.222.357.

Abstract:
A multi-layer mobile robot controller unit has been created and tested successfully with different types of mobile robot agents. The paper presents a modular, extensible system built on top of modern open-source libraries. The system handles a motion capture suit and adapts it much like a traditional peripheral. Robust posture recognition has been introduced on top of the motion suit adapter and is used to instruct a mobile robot agent, while immersive stereographic feedback is provided to the human operator.
25

Yokota, Hiroki, Munekazu Naito, Naoki Mizuno, and Shigemichi Ohshima. "Framework for visual-feedback training based on a modified self-organizing map to imitate complex motion." Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology 234, no. 1 (September 6, 2019): 49–58. http://dx.doi.org/10.1177/1754337119872405.

Abstract:
The goal of this research was to develop a visual-feedback system, based on motion sensing and computational technologies, to help athletes and patients imitate desired motor skills. To accomplish this objective, the authors used a self-organizing map to visualize high-dimensional, time-series motion data. The cyclic motion of one expert and five non-experts was captured as they pedaled a bicycle ergometer. A self-organizing map algorithm was used to display the corresponding circular motion trajectories on a two-dimensional motor skills map. The non-experts modified their motion to make their real-time motion trajectory approach that of the expert, thereby training themselves to imitate the expert motion. The root mean square error, which represents the difference between the non-expert motion and the expert motion, was significantly reduced upon using the proposed visual-feedback system. This indicates that the non-expert subjects successfully approximated the expert motion by repeated comparison of their trajectories on the motor skills map with that of the expert. The results demonstrate that the self-organizing map algorithm provides a unique way to visualize human movement and greatly facilitates the task of imitating a desired motion. By capturing the appropriate movements for display in the visual-feedback system, the proposed framework may be adopted for sports training or clinical practice.
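A single online update of a self-organizing map, the kind of step underlying such a motor-skills map, can be sketched as follows. The grid size, learning rate, and neighborhood width are illustrative assumptions, not the authors' modified algorithm:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One online SOM update: find the best-matching unit (BMU) for input
    x, then pull every node toward x, weighted by a Gaussian neighborhood
    kernel over grid distance to the BMU.
    weights: (rows, cols, dim) array, modified in place; x: (dim,) vector."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))        # neighborhood kernel
    weights += lr * h[..., None] * (x - weights)   # pull nodes toward x
    return bmu
```

Feeding high-dimensional motion frames through such updates and then plotting each frame's BMU yields the kind of 2-D trajectory that expert and non-expert motions trace on the motor skills map.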
26

McGloin, Rory, and Marina Krcmar. "The Impact of Controller Naturalness on Spatial Presence, Gamer Enjoyment, and Perceived Realism in a Tennis Simulation Video Game." Presence: Teleoperators and Virtual Environments 20, no. 4 (August 1, 2011): 309–24. http://dx.doi.org/10.1162/pres_a_00053.

Abstract:
The introduction and popularity of the Nintendo Wii home console has brought attention to the natural mapping motion capturing controller. Using a sample that identified sports as their most frequently played video games, a mental models approach was used to test the impact that perceived controller naturalness (traditional controller vs. natural mapping motion capturing controller) had on perceptions of spatial presence, realism, and enjoyment. The results showed that perceived video game realism is a predictor of spatial presence and enjoyment. Furthermore, the results supported predictions that controller naturalness would influence perceived video game realism of graphics and sound. Future research should investigate whether or not these controllers lead to greater presence and enjoyment in different genres of games (e.g., first-person shooters). In addition, future research should consider whether or not these controllers have the ability to prime violent mental models.
27

Mori, Taisei, Yohei Ogino, Akihiro Matsuda, and Yumiko Funabashi. "Evaluation of 3-Axial Knee Joint Torques Produced by Compression Sports Tights in Running Motion." Proceedings 49, no. 1 (June 15, 2020): 69. http://dx.doi.org/10.3390/proceedings2020049069.

Abstract:
In this paper, the 3-axial knee joint torques produced by compression sports tights were evaluated through numerical simulations using 3-dimensional computer graphics of a human model. The running motions of the human model were represented in the 3-dimensional computer graphics and were determined by motion capture of human subjects. The strain distribution on the surface of the 3-dimensional computer-graphics human model was applied as the boundary condition of the numerical simulations. An anisotropic hyperelastic model accounting for the stress softening of fabric materials was implemented to reproduce the mechanical characteristics of the compression sports tights. Based on the strain-time relationships, the knee joint torques in 3-dimensional coordinates produced by the compression sports tights were calculated. As a result, the three types of knee joint torque generated by the compression sports tights in running motion were obtained: the maximum values of the flexion/extension, varus/valgus, and internal/external knee joint torques were 2.52, 0.59, and 0.31 Nm, respectively. The effect of compression sports tights on the knee joint was thus investigated.
28

Stankiewicz, Lukas, Carsten Thomas, Jochen Deuse, and Bernd Kuhlenkötter. "Application of Customizable Robot Assistance Systems to Compensate Age-Related Restrictions of the Musculoskeletal System for Assembly Workplaces." Applied Mechanics and Materials 840 (June 2016): 82–90. http://dx.doi.org/10.4028/www.scientific.net/amm.840.82.

Abstract:
Due to demographic change, hybrid work systems are increasingly gaining importance. Robot-based assistance systems in particular show potential to respond individually to an employee's performance parameters. Existing technologies offer possibilities to capture individual performance parameters, which can be transferred into a digital environment. Identified impairments, e.g. of the musculoskeletal system, can be used to design an individual work environment with human-robot collaboration that fits the employee's needs and guarantees a low risk of physical harm due to work-related strain. Following the employee's capabilities, the simulation reveals stressful activities that can be transferred to the robot. Thus, the work system offers the opportunity to respond individually to the employee and the given tasks by creating a work situation that suits the employee's preconditions. This paper presents an approach for capturing individual physical performance parameters, in the form of movement restrictions, with a markerless motion capture system, and for transmitting the motion data into a digital human model. It is shown how the simulation can be used to design a needs-based workplace through the integration of a robot-based assistance system.
29

Rahmati, Yalda, Alireza Talebpour, Archak Mittal, and James Fishelson. "Game Theory-Based Framework for Modeling Human–Vehicle Interactions on the Road." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 9 (July 17, 2020): 701–13. http://dx.doi.org/10.1177/0361198120931513.

Abstract:
New application domains have faded the barriers between humans and robots, introducing a new set of complexities to robotic systems. The major impediment is the uncertainties associated with human decision making, which makes it challenging to predict human behavior. A realistic model of human behavior is thus vital to capture humans’ interactive behavior with their surroundings and provide robots with reliable estimates on what is most likely to happen. Focusing on operations of connected and automated vehicles (CAVs) in areas with a high presence of human actors (i.e., pedestrians), this study creates an interactive decision-making framework to predict pedestrians’ trajectories when walking in a shared environment with vehicles and other pedestrians. It develops a game theoretical structure to approximate the movement and directional components of pedestrian motion using the theory of Nash equilibria in non-cooperative games. It also introduces a novel payoff structure to address the inherent uncertainties in human behavior. Ground truth pedestrian trajectories are then used to calibrate the game parameters and evaluate the model’s performance in approximating the motion decisions of human agents in interaction with interfering vehicles and pedestrians. The main contribution of the study is to develop an interactive human–vehicle decision-making framework toward realizing human–vehicle coexistence by capturing the effect of pedestrian–vehicle and pedestrian–pedestrian interactions on choice of walking strategies. The derived knowledge could be used in CAV navigation algorithms to provide the vehicle with more accurate predictions of pedestrian behavior, and in turn, improve CAV motion planning in human-populated areas.
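The game-theoretic building block behind such a framework, finding Nash equilibria of a small two-player game, can be sketched for the pure-strategy case. This is a toy bimatrix example, not the paper's pedestrian-vehicle payoff structure:

```python
def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game.
    A[i][j]: row player's payoff; B[i][j]: column player's payoff.
    A cell is an equilibrium when neither player gains by deviating alone."""
    n, m = len(A), len(A[0])
    eq = []
    for i in range(n):
        for j in range(m):
            row_best = A[i][j] == max(A[k][j] for k in range(n))
            col_best = B[i][j] == max(B[i][k] for k in range(m))
            if row_best and col_best:
                eq.append((i, j))
    return eq
```

In a pedestrian-motion setting, the strategies would be discretized movement/direction choices and the payoffs would encode collision risk and progress toward the goal; the equilibrium search principle is the same.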
APA, Harvard, Vancouver, ISO, and other styles
30

Rajesh Kannan, Megalingam, Menon Deepansh, Ajithkumar Nitin, and Saboo Nihil. "Implementation of Gesture Control in Robotic Arm Using Kinect Module." Applied Mechanics and Materials 786 (August 2015): 378–82. http://dx.doi.org/10.4028/www.scientific.net/amm.786.378.

Full text
Abstract:
This research work is targeted at building and analyzing a robotic arm which mimics the motion of the human arm of the user. The proposed system monitors the motion of the user’s arm using a Kinect. Using the “Kinect Skeletal Image” project of the Kinect SDK, a skeletal image of the arm is obtained, which consists of 3 joints and the links connecting them. 3-D coordinate geometry techniques are used to compute the angles between the links, which correspond to the angles made by the different segments of the human arm. In this work we present the capture of human hand gestures by the Kinect and their analysis with suitable algorithms to identify the joints and angles. The Arduino-based microcontroller used for processing the Kinect data is also presented.
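The 3-D coordinate-geometry step the abstract describes, computing the angle between two skeletal links from joint coordinates, can be sketched generically as follows (a minimal illustration; the joint names and coordinates are hypothetical, not the paper's data):

```python
import numpy as np

def joint_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow, formed by the upper-arm link
    (elbow -> shoulder) and the forearm link (elbow -> wrist)."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)
    fore = np.asarray(wrist, float) - np.asarray(elbow, float)
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A fully extended arm: the three joints are collinear.
print(joint_angle([0, 0, 0], [1, 0, 0], [2, 0, 0]))  # 180.0
# A right-angle bend at the elbow.
print(joint_angle([0, 0, 0], [1, 0, 0], [1, 1, 0]))  # 90.0
```

The same dot-product construction applies to any pair of links in the skeletal image.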
APA, Harvard, Vancouver, ISO, and other styles
31

Fan, Kangqi, Meiling Cai, Haiyan Liu, and Yiwei Zhang. "Capturing energy from ultra-low frequency vibrations and human motion through a monostable electromagnetic energy harvester." Energy 169 (February 2019): 356–68. http://dx.doi.org/10.1016/j.energy.2018.12.053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kamali, Kaveh, Ali Akbar Akbari, Christian Desrosiers, Alireza Akbarzadeh, Martin J. D. Otis, and Johannes C. Ayena. "Low-Rank and Sparse Recovery of Human Gait Data." Sensors 20, no. 16 (August 13, 2020): 4525. http://dx.doi.org/10.3390/s20164525.

Full text
Abstract:
Due to occlusion or detached markers, information can often be lost while capturing human motion with optical tracking systems. Based on three natural properties of human gait movement, this study presents two different approaches to recover corrupted motion data. These properties are used to define a reconstruction model combining low-rank matrix completion of the measured data with a group-sparsity prior on the marker trajectories mapped in the frequency domain. Unlike most existing approaches, the proposed methodology is fully unsupervised and does not need training data or kinematic information of the user. We evaluated our methods on four different gait datasets with various gap lengths and compared their performance with a state-of-the-art approach using principal component analysis (PCA). Our results showed that the proposed methods recover missing data more precisely, with a reduction of at least 2 mm in mean reconstruction error compared to the literature method. When only a small number of marker trajectories is available, our findings showed a reduction of more than 14 mm in mean reconstruction error compared to the literature approach.
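The low-rank completion idea at the core of the abstract can be illustrated with a minimal hard-impute iteration: missing entries are repeatedly replaced by a truncated-SVD reconstruction. This is a generic sketch only, not the paper's model, which additionally uses a group-sparsity prior in the frequency domain; the toy trajectory matrix below is made up:

```python
import numpy as np

def hard_impute(X, mask, rank=1, n_iter=500):
    """Tiny low-rank matrix completion sketch: entries where mask is
    False are treated as missing and filled by iterating a rank-r SVD."""
    filled = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Keep observed entries fixed; update only the missing ones.
        filled = np.where(mask, X, low_rank)
    return filled

# A rank-1 "trajectory" matrix with one occluded sample.
X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
mask = np.ones_like(X, dtype=bool)
mask[2, 2] = False          # pretend this marker sample was lost
recovered = hard_impute(X, mask, rank=1)
```

For this well-conditioned toy case the missing entry converges to the true value (18.0); real gait data needs the richer priors the paper describes.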
APA, Harvard, Vancouver, ISO, and other styles
33

Van Nimmen, Katrien, Guoping Zhao, André Seyfarth, and Peter Van den Broeck. "A Robust Methodology for the Reconstruction of the Vertical Pedestrian-Induced Load from the Registered Body Motion." Vibration 1, no. 2 (November 7, 2018): 250–68. http://dx.doi.org/10.3390/vibration1020018.

Full text
Abstract:
This paper proposes a methodology to reconstruct the vertical ground reaction forces (GRFs) from the registered body motion that is reasonably robust against measurement noise. The vertical GRFs are reconstructed from the experimentally identified time-variant pacing rate and a generalised single-step load model available in the literature. The proposed methodology only requires accurately capturing the body motion within the frequency range 1–10 Hz and does not rely on the exact magnitude of the registered signal. The methodology can therefore also be applied when low-cost sensors are used and to minimize the impact of soft-tissue artefacts. In addition, the proposed procedure can be applied regardless of the position of the sensor on the human body, as long as the recorded body motion allows for identifying the time of a nominally identical event in successive walking cycles. The methodology is illustrated by a numerical example and applied to an experimental dataset where the GRFs and the body motion were registered simultaneously. The results show that the proposed methodology allows for arriving at a good estimate of the vertical GRFs. When the impact of soft-tissue artefacts is low, a comparable estimate can be obtained using Newton’s second law of motion.
APA, Harvard, Vancouver, ISO, and other styles
34

Davarzani, Samaneh, David Saucier, Preston Peranich, Will Carroll, Alana Turner, Erin Parker, Carver Middleton, et al. "Closing the Wearable Gap—Part VI: Human Gait Recognition Using Deep Learning Methodologies." Electronics 9, no. 5 (May 12, 2020): 796. http://dx.doi.org/10.3390/electronics9050796.

Full text
Abstract:
A novel wearable solution using soft robotic sensors (SRS) has been investigated to model foot-ankle kinematics during gait cycles. The capacitance of SRS related to foot-ankle basic movements was quantified during the gait movements of 20 participants on a flat surface as well as a cross-sloped surface. In order to evaluate the power of SRS in modeling foot-ankle kinematics, three-dimensional (3D) motion capture data was also collected for analyzing gait movement. Three different approaches were employed to quantify the relationship between the SRS and the 3D motion capture system, including multivariable linear regression, an artificial neural network (ANN), and a time-series long short-term memory (LSTM) network. Models were compared based on the root mean squared error (RMSE) of the prediction of the joint angle of the foot in the sagittal and frontal plane, collected from the motion capture system. There was not a significant difference between the error rates of the three different models. The ANN resulted in an average RMSE of 3.63, being slightly more successful in comparison to the average RMSE values of 3.94 and 3.98 resulting from multivariable linear regression and LSTM, respectively. The low error rate of the models revealed the high performance of SRS in capturing foot-ankle kinematics during the human gait cycle.
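The RMSE criterion used to compare the regression, ANN and LSTM models in the abstract is straightforward to reproduce (the joint-angle traces below are hypothetical, not the study's data):

```python
import numpy as np

def rmse(predicted, measured):
    """Root mean squared error between model-predicted joint angles and
    motion-capture ground truth (both in degrees)."""
    p = np.asarray(predicted, float)
    m = np.asarray(measured, float)
    return float(np.sqrt(np.mean((p - m) ** 2)))

# Hypothetical sagittal-plane ankle angles (degrees) over four samples.
truth = [10.0, 12.5, 15.0, 13.0]
model = [11.0, 12.0, 14.0, 13.5]
print(rmse(model, truth))  # about 0.79
```

Averaging this value over participants and trials gives per-model scores like the 3.63 / 3.94 / 3.98 figures reported above.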
APA, Harvard, Vancouver, ISO, and other styles
35

Hellsten, Thomas, Jonny Karlsson, Muhammed Shamsuzzaman, and Göran Pulkkis. "The Potential of Computer Vision-Based Marker-Less Human Motion Analysis for Rehabilitation." Rehabilitation Process and Outcome 10 (January 2021): 117957272110223. http://dx.doi.org/10.1177/11795727211022330.

Full text
Abstract:
Background: Several factors, including the aging population and the recent corona pandemic, have increased the need for cost effective, easy-to-use and reliable telerehabilitation services. Computer vision-based marker-less human pose estimation is a promising variant of telerehabilitation and is currently an intensive research topic. It has attracted significant interest for detailed motion analysis, as it does not need arrangement of external fiducials while capturing motion data from images. This is promising for rehabilitation applications, as they enable analysis and supervision of clients’ exercises and reduce clients’ need for visiting physiotherapists in person. However, development of a marker-less motion analysis system with precise accuracy for joint identification, joint angle measurements and advanced motion analysis is an open challenge. Objectives: The main objective of this paper is to provide a critical overview of recent computer vision-based marker-less human pose estimation systems and their applicability for rehabilitation application. An overview of some existing marker-less rehabilitation applications is also provided. Methods: This paper presents a critical review of recent computer vision-based marker-less human pose estimation systems with focus on their provided joint localization accuracy in comparison to physiotherapy requirements and ease of use. The accuracy, in terms of the capability to measure the knee angle, is analysed using simulation. Results: Current pose estimation systems use 2D, 3D, multiple and single view-based techniques. The most promising techniques from a physiotherapy point of view are 3D marker-less pose estimation based on a single view as these can perform advanced motion analysis of the human body while only requiring a single camera and a computing device. Preliminary simulations reveal that some proposed systems already provide a sufficient accuracy for 2D joint angle estimations. 
Conclusions: Even though test results of different applications for some proposed techniques are promising, more rigorous testing is required to validate their accuracy before they can be widely adopted in advanced rehabilitation applications.
APA, Harvard, Vancouver, ISO, and other styles
36

Gia Hoang, Phan, H. Asif, and Domenico Campolo. "Preparation for Capturing Human Skills during Tooling Tasks Using Redundant Markers and Instrumented Tool." Applied Mechanics and Materials 842 (June 2016): 293–302. http://dx.doi.org/10.4028/www.scientific.net/amm.842.293.

Full text
Abstract:
In recent years, robots have found extensive applications in automating repetitive, defined, position-dependent tasks such as painting and material handling. However, continuous-contact tasks (such as finishing, deburring and grinding) that require both position and force control are still carried out manually by skilled labor, mainly because it is difficult to program experienced operators' skills into a robotic setup without clear knowledge of the underlying model used by the operators. In this paper we present a preparation for capturing a human operator’s dynamics using an instrumented hand-held tool and a motion capture setup. We first present the design of the instrumented tool and then a method for reliably capturing kinematics using redundant markers, removing the effects of marker occlusions and of gravity acting on the tool's mass. The kinematic information is used for deriving the forces/torques on the tool end effector.
APA, Harvard, Vancouver, ISO, and other styles
37

Lai, Guang Jin. "Study on Enhancing Terminal Identification in Track and Field Using Digital X-Ray Photography Image." Advanced Materials Research 989-994 (July 2014): 3851–55. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.3851.

Full text
Abstract:
Digital X-ray photography, under computer control, uses a one-dimensional or 2D X-ray detector to convert the captured image directly into digital signals for processing with image processing technology, enabling image analysis. We introduce X-ray photography into terminal identification in track and field and use a clustering algorithm to improve the computer image clustering algorithm. By capturing the digital signals of the human head, arms and legs, the method enhances terminal recognition in track and field. Finally, we use MATLAB to calculate the captured image values from the X-ray photography. The calculations show that motion capture and recognition from X-ray images are clearly enhanced. This provides a theoretical basis for research on motion capture technology in track and field.
APA, Harvard, Vancouver, ISO, and other styles
38

Ali, Sharifnezhad, Abdollahzadekan Mina, Shafieian Mehdi, and Sahafnejad-Mohammadi Iman. "C3D data based on 2-dimensional images from video camera." Annals of Biomedical Science and Engineering 5, no. 1 (January 13, 2021): 001–5. http://dx.doi.org/10.29328/journal.abse.1001010.

Full text
Abstract:
The human three-dimensional (3D) musculoskeletal model is based on motion analysis methods and can be obtained with particular motion capture systems that export 3D data in the coordinate 3D (C3D) format. Dedicated cameras and specific software are essential for analyzing the data. This equipment is quite expensive, and using it is time-consuming. Because of these problems, this research intends to use ordinary video cameras and open-source systems to get 3D data and create the C3D format. By capturing movements with two video cameras, marker coordinates are obtainable using Skill-Spector. MATLAB functions were used to create C3D data from the 3D coordinates of the body points. The subject was captured simultaneously with both the Cortex system and the two video cameras during each validation test. The mean correlation coefficient of the datasets is 0.7. Given this detailed comparison, the method can be used as an alternative for motion analysis. The C3D data collection presented in this research is more accessible and cost-efficient than other systems; only two cameras have been used.
APA, Harvard, Vancouver, ISO, and other styles
39

Guo, Haitao, and Yunsick Sung. "Movement Estimation Using Soft Sensors Based on Bi-LSTM and Two-Layer LSTM for Human Motion Capture." Sensors 20, no. 6 (March 24, 2020): 1801. http://dx.doi.org/10.3390/s20061801.

Full text
Abstract:
The importance of estimating human movement has increased in the field of human motion capture. HTC VIVE is a popular device that provides a convenient way of capturing human motions using several sensors. Recently, the motion of only users’ hands has been captured, thereby greatly reducing the range of motion captured. This paper proposes a framework to estimate single-arm orientations using soft sensors, mainly by combining a bidirectional long short-term memory (Bi-LSTM) and a two-layer LSTM. Positions of the two hands are measured using an HTC VIVE set, and the orientations of a single arm, including its corresponding upper arm and forearm, are estimated using the proposed framework based on the estimated positions of the two hands. Given that the proposed framework is meant for a single arm, if the orientations of two arms are required, the estimation is performed twice. To obtain the ground truth of the orientations of single-arm movements, two Myo gesture-control sensory armbands are employed on the single arm: one for the upper arm and the other for the forearm. The proposed framework analyzes the contextual features of consecutive sensory arm movements, which provides an efficient way to improve the accuracy of arm movement estimation. In comparison with the ground truth, the proposed method estimated the arm movements using a dynamic time warping distance, which was on average 73.90% less than that of a conventional Bayesian framework. The distinct feature of our proposed framework is that the number of sensors attached to end-users is reduced. Additionally, with the use of our framework, the arm orientations can be estimated with any soft sensor, and good accuracy of the estimations can be ensured. Another contribution is the suggestion of the combination of the Bi-LSTM and two-layer LSTM.
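The dynamic time warping (DTW) distance used in the abstract to compare estimated and ground-truth movements can be sketched in its classic form (the paper's exact variant and cost function are not specified here; this is the standard recurrence on 1-D sequences):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    using absolute difference as the per-step cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    # d[i][j] = cost of the best alignment of a[:i] with b[:j].
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Identical sequences align perfectly.
print(dtw_distance([1, 2, 3], [1, 2, 3]))     # 0.0
# A time-stretched copy still aligns at zero cost.
print(dtw_distance([1, 2, 3], [1, 1, 2, 3]))  # 0.0
```

Lower DTW distance to the Myo ground truth means better orientation estimates, which is how the 73.90% average reduction is read.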
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Lei, Gary Feng, Chee Wee Leong, Jilliam Joe, Christopher Kitchen, and Chong Min Lee. "Designing An Automated Assessment of Public Speaking Skills Using Multimodal Cues." Journal of Learning Analytics 3, no. 2 (September 17, 2016): 261–81. http://dx.doi.org/10.18608/jla.2016.32.13.

Full text
Abstract:
Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of 17 speakers with 4 speaking tasks was collected using audio, video, and 3D motion capturing devices. A scoring model based on basic features in the speech content, speech delivery, and hand, body, and head movements significantly predicts human rating, suggesting the feasibility of using multimodal technologies in the assessment of public speaking skills.
APA, Harvard, Vancouver, ISO, and other styles
41

Bleser, Gabriele, Bertram Taetz, Markus Miezal, Corinna A. Christmann, Daniel Steffen, and Katja Regenspurger. "Development of an Inertial Motion Capture System for Clinical Application." i-com 16, no. 2 (August 28, 2017): 113–29. http://dx.doi.org/10.1515/icom-2017-0010.

Full text
Abstract:
The ability to capture human motion based on wearable sensors has a wide range of applications, e.g., in healthcare, sports, well-being, and workflow analysis. This article focuses on the development of an online-capable system for accurately capturing joint kinematics based on inertial measurement units (IMUs) and its clinical application, with a focus on locomotion analysis for rehabilitation. The article approaches the topic from the technology and application perspectives and fuses both points of view. It presents, in a self-contained way, previous results from three studies as well as new results concerning the technological development of the system. It also correlates these with new results from qualitative expert interviews with medical practitioners and movement scientists. The interviews were conducted for the purpose of identifying relevant application scenarios and requirements for the technology used. As a result, the potentials of the system for the different identified application scenarios are discussed and necessary next steps are deduced from this analysis.
APA, Harvard, Vancouver, ISO, and other styles
42

Kim, Sung-Min, T. Jesse Lim, Josemaria Paterno, Jon Park, and Daniel H. Kim. "Biomechanical comparison: stability of lateral-approach anterior lumbar interbody fusion and lateral fixation compared with anterior-approach anterior lumbar interbody fusion and posterior fixation in the lower lumbar spine." Journal of Neurosurgery: Spine 2, no. 1 (January 2005): 62–68. http://dx.doi.org/10.3171/spi.2005.2.1.0062.

Full text
Abstract:
Object. The stability of lateral lumbar interbody graft-augmented fusion and supplementary lateral plate fixation in human cadavers has not been determined. The purpose of this study was to investigate the immediate biomechanical stabilities of the following: 1) femoral ring allograft (FRA)-augmented anterior lumbar interbody fusion (ALIF) after left lateral discectomy combined with additional lateral MACS HMA plate and screw fixation; and 2) ALIF combined with posterior transpedicular fixation after anterior discectomy. Methods. Sixteen human lumbosacral spines were loaded with six modes of motion. The intervertebral motion was measured using a video-based motion-capturing system. The range of motion (ROM) and the neutral zone (NZ) in each loading mode were compared with a maximum of 7.5 Nm. The ROM values for both stand-alone ALIF approaches were similar to those of the intact spine, whereas NZ measurements were higher in most loading modes. No significant intergroup differences were found. The ROM and NZ values for lateral fixation in all modes were significantly lower than those of the intact spine, except when NZ was measured in lateral bending. All ROM and NZ values for transpedicular fixation were significantly lower than those for stand-alone anterior ALIF. Transpedicular fixation conferred better stabilization than lateral fixation in flexion, extension, and lateral bending modes. Conclusions. Neither approach to stand-alone FRA-augmented ALIF provided sufficient stabilization, but supplementary instrumentation conferred significant stabilization. The MACS HMA plate and screw fixation system, although inferior to posterior transpedicular fixation, provided adequate stability compared with the intact spine and can serve as a sound alternative to supplementary spinal stabilization.
APA, Harvard, Vancouver, ISO, and other styles
43

Feichtenhofer, Christoph, Axel Pinz, Richard P. Wildes, and Andrew Zisserman. "Deep Insights into Convolutional Networks for Video Recognition." International Journal of Computer Vision 128, no. 2 (October 29, 2019): 420–37. http://dx.doi.org/10.1007/s11263-019-01225-w.

Full text
Abstract:
As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing the internal representation of models that have been trained to recognize actions in video. We visualize multiple two-stream architectures to show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
APA, Harvard, Vancouver, ISO, and other styles
44

Han Keat, Lee, and Chuah Chai Wen. "Smart Indoor Home Surveillance Monitoring System Using Raspberry Pi." JOIV : International Journal on Informatics Visualization 2, no. 4-2 (September 10, 2018): 299. http://dx.doi.org/10.30630/joiv.2.4-2.172.

Full text
Abstract:
Internet of Things (IoT) devices are internet computing devices connected to everyday objects that can receive and transmit data intelligently. They allow humans to interact with and control everyday objects wirelessly, providing more convenience in their lifestyle. The Raspberry Pi is a small, lightweight and cheap single-board computer that can fit in a human's palm. Security plays a big role in a home: people want to prevent intruders from entering their home, to avoid loss of privacy and assets. Closed-circuit television (CCTV) is one of the devices used to monitor a secured area for intruders. Using traditional CCTV to monitor a secured area has three limitations: it requires a huge volume of storage to store all the videos regardless of whether there are intruders, it does not notify users immediately when motion is detected, and users must regularly check the recorded videos to identify any intruders. Therefore, a smart surveillance monitoring system is proposed to solve this problem by detecting intruders and capturing an image of the intruder. Notifications are also sent to the user immediately when motion is detected. This smart surveillance monitoring system stores only the images of intruders that triggered the motion sensor, so it uses significantly less storage space. With the Raspberry Pi connected to a passive infrared (PIR) motion sensor, a webcam, and an internet connection, the whole device can be configured to carry out the surveillance tasks. The objectives of this project are to design, implement and test the surveillance system using the Raspberry Pi. The proposed surveillance system provides the user with a live video feed. Whenever motion is detected by the PIR motion sensor, the web camera captures an image of the intruder and alerts the users (owners) through Short Message Service (SMS) and email notifications. The methodology used to develop this system is the object-oriented analysis and design (OOAD) model.
APA, Harvard, Vancouver, ISO, and other styles
45

Phutane, Uday, Anna-Maria Liphardt, Johanna Bräunig, Johann Penner, Michael Klebl, Koray Tascilar, Martin Vossiek, Arnd Kleyer, Georg Schett, and Sigrid Leyendecker. "Evaluation of Optical and Radar Based Motion Capturing Technologies for Characterizing Hand Movement in Rheumatoid Arthritis—A Pilot Study." Sensors 21, no. 4 (February 9, 2021): 1208. http://dx.doi.org/10.3390/s21041208.

Full text
Abstract:
In light of the state-of-the-art treatment options for patients with rheumatoid arthritis (RA), a detailed and early quantification and detection of impaired hand function is desirable to allow personalized treatment regimens and amend currently used subjective patient-reported outcome measures. This is the motivation to apply and adapt modern measurement technologies to quantify, assess and analyze human hand movement using a marker-based optoelectronic measurement system (OMS), which has been widely used to measure human motion. We complement these recordings with data from markerless (Doppler radar) sensors, and data from both sensor technologies are integrated with clinical outcomes of hand function. The technologies are leveraged to identify hand movement characteristics in RA-affected patients in comparison to healthy control subjects while performing functional tests, such as the Moberg-Picking-Up Test. The results discuss the experimental framework and the limiting factors imposed by the use of marker-based measurements on hand function. The comparison of simple finger motion data, collected by the OMS, to data recorded by a simple continuous-wave radar suggests that radar is a promising option for the objective assessment of hand function. Overall, the broad scope of integrating two measurement technologies with traditional clinical tests shows promising potential for developing new pathways in the understanding of the role of functional outcomes for the RA pathology.
APA, Harvard, Vancouver, ISO, and other styles
46

Gardner, Glendon M., Michelle Conerty, James Castracane, and Steven M. Parnes. "Electronic Speckle Pattern Interferometry of the Vibrating Larynx." Annals of Otology, Rhinology & Laryngology 104, no. 1 (January 1995): 5–12. http://dx.doi.org/10.1177/000348949510400102.

Full text
Abstract:
Laser holography is a technique that creates a three-dimensional image of a static object. This technique can be applied to the analysis of vibrating structures. Electronic speckle pattern interferometry uses a laser for illumination of the vibrating object and solid state detectors and digital hardware technology for capturing and processing the image in real time. This was performed on a human cadaver larynx and is the first time an interferogram of vibrating vocal cords has ever been obtained. Dark and bright interference fringes are seen that represent the vibratory motion of the vocal folds. These are presented in still photos as well as real-time on videotape. This method can provide advantages over current techniques of laryngeal study: it is sensitive to motion in the vertical dimension, and the digital data can be quantitatively analyzed. Application of this technique to study the larynx should eventually be a valuable clinical tool and provide quantitative research data.
APA, Harvard, Vancouver, ISO, and other styles
47

Reining, Christopher, Friedrich Niemann, Fernando Moya Rueda, Gernot A. Fink, and Michael ten Hompel. "Human Activity Recognition for Production and Logistics—A Systematic Literature Review." Information 10, no. 8 (July 24, 2019): 245. http://dx.doi.org/10.3390/info10080245.

Full text
Abstract:
This contribution provides a systematic literature review of Human Activity Recognition for Production and Logistics. An initial list of 1243 publications that complies with predefined Inclusion Criteria was surveyed by three reviewers. Fifty-two publications that comply with the Content Criteria were analysed regarding the observed activities, sensor attachment, utilised datasets, sensor technology and the applied methods of HAR. This review is focused on applications that use marker-based Motion Capturing or Inertial Measurement Units. The analysed methods can be deployed in industrial application of Production and Logistics or transferred from related domains into this field. The findings provide an overview of the specifications of state-of-the-art HAR approaches, statistical pattern recognition and deep architectures and they outline a future road map for further research from a practitioner’s perspective.
APA, Harvard, Vancouver, ISO, and other styles
48

Palm, Rainer, and Boyko Iliev. "Programming-by-Demonstration and Adaptation of Robot Skills by Fuzzy Time Modeling." International Journal of Humanoid Robotics 11, no. 01 (March 2014): 1450009. http://dx.doi.org/10.1142/s0219843614500091.

Full text
Abstract:
Robot skills are motion or grasping primitives from which a complicated robot task consists. Skills can be directly learned and recognized by a technique named programming-by-demonstration. A human operator demonstrates a set of reference skills where his motions are recorded by a data-capturing system and modeled via fuzzy clustering and a Takagi–Sugeno modeling technique. The skill models use time instants as input and operator actions as outputs. In the recognition phase, the robot identifies the skill shown by the operator in a novel test demonstration. Finally, using the corresponding reference skill model the robot executes the recognized skill. Skill models can be updated online where drastic differences between learned and real world conditions are eliminated using the Broyden update formula. This method was extended for fuzzy models especially for time cluster models.
APA, Harvard, Vancouver, ISO, and other styles
49

MELITA, RAHMI AGUS, SUSETYO BAGAS BHASKORO, and RUMINTO SUBEKTI. "Pengendalian Kamera berdasarkan Deteksi Posisi Manusia Bergerak Jatuh berbasis Multi Sensor Accelerometer dan Gyroscope." ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 6, no. 2 (July 9, 2018): 259. http://dx.doi.org/10.26760/elkomika.v6i2.259.

Full text
Abstract:
This research presents the development of a portable multi-sensor surveillance system with intrusion alert notification. The system notifies the user or caregiver by email immediately when an abnormal activity is detected, such as a falling motion (of an elderly person or a child). The research uses multiple sensors, namely an accelerometer and a gyroscope, and adds a camera sensor to make the information more accurate. The evaluation is divided into two categories: the first is human fall detection, and the second is image capture. For fall detection, the results are 88% accuracy, 88% recall, 88% specificity, and 93% precision. For image capture, the accuracy is 86%, with a precision of 51% for moving the camera toward the object. Keywords: falling motion, camera, internet of things, accelerometer, gyroscope, fuzzy logic.
APA, Harvard, Vancouver, ISO, and other styles
50

Weng, Zhengkui, Zhipeng Jin, Shuangxi Chen, Quanquan Shen, Xiangyang Ren, and Wuzhao Li. "Attention-Based Temporal Encoding Network with Background-Independent Motion Mask for Action Recognition." Computational Intelligence and Neuroscience 2021 (March 29, 2021): 1–16. http://dx.doi.org/10.1155/2021/8890808.

Full text
Abstract:
Convolutional neural networks (CNNs) have been leaping forward in recent years. However, high dimensionality, rich human dynamic characteristics, and various kinds of background interference make it difficult for traditional CNNs to capture complicated motion data in videos. A novel framework named the attention-based temporal encoding network (ATEN) with a background-independent motion mask (BIMM) is proposed here to achieve video action recognition. First, we introduce a motion segmentation approach based on a boundary prior, associated with the minimal geodesic distance within an undirected weighted graph. Then, we propose a dynamic contrast segmentation strategy for segmenting moving objects in complicated environments. Subsequently, we build the BIMM to enhance the moving object by suppressing the irrelevant background in each frame. Furthermore, we design a long-range attention mechanism within ATEN that effectively remedies the long-term dependency of sophisticated non-periodic actions by focusing automatically on the semantically vital frames rather than processing all sampled frames equally. The attention mechanism thereby suppresses temporal redundancy and highlights the discriminative frames. Lastly, the framework is assessed on the HMDB51 and UCF101 datasets. The experimental results show that our ATEN with BIMM attains 94.5% and 70.6% accuracy, respectively, outperforming a number of existing methods on both datasets.
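The long-range attention idea, weighting semantically vital frames more heavily than uniformly sampled ones, can be sketched as a softmax pooling over per-frame features (a minimal illustration, not the ATEN architecture; the feature and weight values below are made up):

```python
import numpy as np

def temporal_attention_pool(frames, w):
    """Soft attention over sampled frames: score each frame feature with
    a scoring vector w, softmax the scores over time, and return the
    weighted sum, so discriminative frames dominate the video descriptor."""
    scores = frames @ w                      # one score per frame, shape (T,)
    weights = np.exp(scores - scores.max())  # subtract max for stability
    weights /= weights.sum()                 # softmax over the T frames
    return weights @ frames                  # pooled descriptor, shape (D,)

frames = np.array([[0.1, 0.0],   # T = 3 frames, D = 2 features each
                   [0.9, 0.2],   # the "discriminative" frame
                   [0.1, 0.1]])
w = np.array([5.0, 0.0])         # scoring vector favoring feature 0
pooled = temporal_attention_pool(frames, w)
```

The pooled descriptor is pulled strongly toward the second frame, which is the redundancy-suppression effect the abstract attributes to the attention mechanism.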
APA, Harvard, Vancouver, ISO, and other styles
