Journal articles on the topic "Human Motion Tracking, Markerless Motion Capture"

To see other types of publications on this topic, follow the link: Human Motion Tracking, Markerless Motion Capture.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Human Motion Tracking, Markerless Motion Capture".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the online abstract, if such details are available in the publication's metadata.

Browse journal articles in many disciplines and organize your bibliography correctly.

1

SNIDARO, LAURO, GIAN LUCA FORESTI, and LUCA CHITTARO. "TRACKING HUMAN MOTION FROM MONOCULAR SEQUENCES." International Journal of Image and Graphics 08, no. 03 (July 2008): 455–71. http://dx.doi.org/10.1142/s0219467808003180.

Full text of the source
Abstract:
In recent years, analysis of human motion has become an increasingly relevant research topic, with applications as diverse as animation, virtual reality, security, and advanced human-machine interfaces. In particular, motion capture systems are well known nowadays since they are used in the movie industry. These systems require expensive multi-camera setups or markers to be worn by the user. This paper describes an attempt to provide a markerless, low-cost, real-time solution for home users. We propose a novel approach for robust detection and tracking of the user's body joints that exploits different algorithms as different sources of information and fuses their estimates with particle filters. This system may be employed for real-time animation of VRML or X3D avatars using an off-the-shelf digital camera and a standard PC.
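The particle-filter fusion of several estimators described in this abstract can be sketched in one dimension. This is a minimal illustration, not the authors' implementation: the motion model, noise levels, and measurement values below are all assumptions.

```python
import random
import math

def particle_filter_step(particles, measurements, sigmas, motion_std=2.0):
    """One predict-update-resample cycle fusing several noisy estimates.

    particles   : 1D joint positions (pixels)
    measurements: one position estimate per detector (e.g. colour cue, edge cue)
    sigmas      : assumed noise std-dev of each detector
    """
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight each particle by the likelihood of every measurement.
    weights = []
    for p in particles:
        w = 1.0
        for z, s in zip(measurements, sigmas):
            w *= math.exp(-0.5 * ((p - z) / s) ** 2)
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to the weights (multinomial resampling).
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
n = 500
particles = [random.uniform(0, 200) for _ in range(n)]
# Two detectors report the same joint's x-coordinate with different reliability.
for _ in range(10):
    particles = particle_filter_step(particles,
                                     measurements=[100.0, 104.0],
                                     sigmas=[5.0, 10.0])
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # settles near the precision-weighted fusion of 100 and 104
```

The posterior mean lands near 100.8, the precision-weighted average of the two detectors, which is the point of fusing cues rather than trusting any single one.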
APA, Harvard, Vancouver, ISO, and other styles
2

Tay, Chuan Zhi, King Hann Lim, and Jonathan Then Sien Phang. "Markerless gait estimation and tracking for postural assessment." Multimedia Tools and Applications 81, no. 9 (February 21, 2022): 12777–94. http://dx.doi.org/10.1007/s11042-022-12026-8.

Full text of the source
Abstract:
Postural assessment is crucial in sports screening systems to reduce the risk of severe injury. Capturing an athlete's posture with computer vision attracts huge attention in the sports community because it is markerless and interferes little with physical training. In this paper, a novel markerless gait estimation and tracking algorithm is proposed to locate human key-points in spatial-temporal sequences for gait analysis. First, human pose estimation is performed with the OpenPose network to detect 14 core key-points on the human body. The body-joint coordinates are normalized by the neck-to-pelvis distance to obtain camera-invariant key-points. These key-points are then used to generate spatial-temporal sequences, which are fed into a Long Short-Term Memory network for gait recognition. An indexed person is tracked for quick local pose estimation and postural analysis. The proposed algorithm automates the capture of human joints for postural assessment and analysis of human motion. Implemented on an Intel Up Squared board, the system achieves up to 9 frames per second with 95% gait-recognition accuracy.
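The neck-to-pelvis normalization mentioned in this abstract can be sketched in a few lines. The joint indices and coordinates below are illustrative assumptions, not the paper's exact convention:

```python
def normalize_keypoints(keypoints, neck_idx, pelvis_idx):
    """Scale 2D key-points by the neck-to-pelvis distance so that the
    representation is invariant to the camera's distance from the subject."""
    nx, ny = keypoints[neck_idx]
    px, py = keypoints[pelvis_idx]
    scale = ((nx - px) ** 2 + (ny - py) ** 2) ** 0.5
    if scale == 0:
        raise ValueError("neck and pelvis coincide; cannot normalize")
    # Express every joint relative to the pelvis, in neck-to-pelvis units.
    return [((x - px) / scale, (y - py) / scale) for x, y in keypoints]

# A toy 3-point skeleton: one limb joint, the neck, and the pelvis.
pts = [(120, 40), (100, 50), (100, 150)]
norm = normalize_keypoints(pts, neck_idx=1, pelvis_idx=2)
print(norm[1])  # the neck lands one unit from the pelvis: (0.0, -1.0)
```

Because every coordinate is divided by the same body-relative length, the same pose produces the same normalized key-points whether the subject is near or far from the camera.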
APA, Harvard, Vancouver, ISO, and other styles
3

SABOUNE, JAMAL, and FRANÇOIS CHARPILLET. "MARKERLESS HUMAN MOTION TRACKING FROM A SINGLE CAMERA USING INTERVAL PARTICLE FILTERING." International Journal on Artificial Intelligence Tools 16, no. 04 (August 2007): 593–609. http://dx.doi.org/10.1142/s021821300700345x.

Full text of the source
Abstract:
In this paper we present a new approach to markerless human motion capture from conventional camera feeds. The aim of our study is to recover the 3D positions of key points of the body that can serve for gait analysis. Our approach is based on foreground extraction, an articulated body model, and particle filters. To remain generic and simple, no restrictive dynamic modeling was used. A new modified particle-filtering algorithm, which we call Interval Particle Filtering, was introduced to search the model configuration space efficiently: it reorganizes the search space in an optimal deterministic way and proved effective in tracking natural human movement. Results for human motion capture from a single camera are presented and compared to results obtained from a marker-based system. The system tracked motion successfully even with partial occlusions and even outdoors.
APA, Harvard, Vancouver, ISO, and other styles
4

Gionfrida, Letizia, Wan M. R. Rusli, Anil A. Bharath, and Angela E. Kedgley. "Validation of two-dimensional video-based inference of finger kinematics with pose estimation." PLOS ONE 17, no. 11 (November 3, 2022): e0276799. http://dx.doi.org/10.1371/journal.pone.0276799.

Full text of the source
Abstract:
Accurate capture of finger movements for biomechanical assessments has typically been achieved within laboratory environments through the use of physical markers attached to a participant's hands. However, such requirements can narrow the broader adoption of movement tracking for kinematic assessment outside these laboratory settings, such as in the home. Thus, there is a need for markerless hand motion capture techniques that are easy to use and accurate enough to evaluate the complex movements of the human hand. Several recent studies have validated lower-limb kinematics obtained with a marker-free technique, OpenPose. This investigation examines the accuracy of OpenPose, when applied to images from single RGB cameras, against a 'gold standard' marker-based optical motion capture system that is commonly used for hand kinematics estimation. Participants completed four single-handed activities with right and left hands, including hand abduction and adduction, radial walking, metacarpophalangeal (MCP) joint flexion, and thumb opposition. The accuracy of finger kinematics was assessed using the root mean square error. Mean total active flexion was compared using the Bland-Altman approach and the coefficient of determination of linear regression. Results showed good agreement for the abduction-adduction and thumb opposition activities. Lower agreement between the two methods was observed for the radial walking (mean difference between the methods of 5.03°) and MCP flexion (mean difference of 6.82°) activities, due to occlusion. This investigation demonstrated that OpenPose, applied to videos captured with monocular cameras, can be used for markerless finger tracking with an error below 11° and on the order of that which is accepted clinically.
APA, Harvard, Vancouver, ISO, and other styles
5

Connie, Tee, Timilehin B. Aderinola, Thian Song Ong, Michael Kah Ong Goh, Bayu Erfianto, and Bedy Purnama. "Pose-Based Gait Analysis for Diagnosis of Parkinson’s Disease." Algorithms 15, no. 12 (December 12, 2022): 474. http://dx.doi.org/10.3390/a15120474.

Full text of the source
Abstract:
Parkinson’s disease (PD) is a neurodegenerative disorder that is more common in elderly people and affects motor control, flexibility, and how easily patients adapt to their walking environments. PD is progressive in nature, and if undetected and untreated, the symptoms grow worse over time. Fortunately, PD can be detected early using gait features since the loss of motor control results in gait impairment. In general, techniques for capturing gait can be categorized as computer-vision-based or sensor-based. Sensor-based techniques are mostly used in clinical gait analysis and are regarded as the gold standard for PD detection. The main limitation of using sensor-based gait capture is the associated high cost and the technical expertise required for setup. In addition, the subjects’ consciousness of worn sensors and being actively monitored may further impact their motor function. Recent advances in computer vision have enabled the tracking of body parts in videos in a markerless motion capture scenario via human pose estimation (HPE). Although markerless motion capture has been studied in comparison with gold-standard motion-capture techniques, it is yet to be evaluated in the prediction of neurological conditions such as PD. Hence, in this study, we extract PD-discriminative gait features from raw videos of subjects and demonstrate the potential of markerless motion capture for PD prediction. First, we perform HPE on the subjects using AlphaPose. Then, we extract and analyse eight features, from which five features are systematically selected, achieving up to 93% accuracy, 96% precision, and 92% recall in arbitrary views.
APA, Harvard, Vancouver, ISO, and other styles
6

Guidolin, Mattia, Emanuele Menegatti, and Monica Reggiani. "UNIPD-BPE: Synchronized RGB-D and Inertial Data for Multimodal Body Pose Estimation and Tracking." Data 7, no. 6 (June 9, 2022): 79. http://dx.doi.org/10.3390/data7060079.

Full text of the source
Abstract:
The ability to estimate human motion without requiring any external on-body sensor or marker is of paramount importance in a variety of fields, ranging from human–robot interaction, Industry 4.0, surveillance, and telerehabilitation. The recent development of portable, low-cost RGB-D cameras pushed forward the accuracy of markerless motion capture systems. However, despite the widespread use of such sensors, a dataset including complex scenes with multiple interacting people, recorded with a calibrated network of RGB-D cameras and an external system for assessing the pose estimation accuracy, is still missing. This paper presents the University of Padova Body Pose Estimation dataset (UNIPD-BPE), an extensive dataset for multi-sensor body pose estimation containing both single-person and multi-person sequences with up to 4 interacting people. A network with 5 Microsoft Azure Kinect RGB-D cameras is exploited to record synchronized high-definition RGB and depth data of the scene from multiple viewpoints, as well as to estimate the subjects’ poses using the Azure Kinect Body Tracking SDK. Simultaneously, full-body Xsens MVN Awinda inertial suits allow obtaining accurate poses and anatomical joint angles, while also providing raw data from the 17 IMUs required by each suit. This dataset aims to push forward the development and validation of multi-camera markerless body pose estimation and tracking algorithms, as well as multimodal approaches focused on merging visual and inertial data.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Dianyong, Zhenjiang Miao, Shengyong Chen, and Lili Wan. "Optimization and Soft Constraints for Human Shape and Pose Estimation Based on a 3D Morphable Model." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/715808.

Full text of the source
Abstract:
We propose an approach to multiview markerless motion capture based on a 3D morphable human model learned from a database of registered 3D body scans in different shapes and poses. Pose variation of the body shape is implemented through a defined underlying skeleton. At the initialization step, we adapt the 3D morphable model to the multi-view images by changing its shape and pose parameters. For the tracking step, we combine local and global algorithms to perform pose estimation and surface tracking, and we add human pose prior information as a soft constraint on the energy of a particle. When the local algorithm produces an error, we can correct it using fewer particles and iterations. We demonstrate the improvements with estimation results from a multi-view image sequence.
APA, Harvard, Vancouver, ISO, and other styles
8

Baclig, Maria Martine, Noah Ergezinger, Qipei Mei, Mustafa Gül, Samer Adeeb, and Lindsey Westover. "A Deep Learning and Computer Vision Based Multi-Player Tracker for Squash." Applied Sciences 10, no. 24 (December 9, 2020): 8793. http://dx.doi.org/10.3390/app10248793.

Full text of the source
Abstract:
Sports pose a unique challenge for high-speed, unobtrusive, uninterrupted motion tracking due to the speed of movement and player occlusion, especially in the fast and competitive sport of squash. The objective of this study is to use video tracking techniques to quantify kinematics in elite-level squash. With the increasing availability and quality of elite tournament matches filmed for entertainment purposes, a new methodology of multi-player tracking for squash that only requires broadcast video as an input is proposed. This paper introduces and evaluates a markerless motion capture technique using an autonomous deep-learning-based human pose estimation algorithm and computer vision to detect and identify players. Inverse perspective mapping is utilized to convert pixel coordinates to court coordinates, and the distance traveled, court position, 'T' dominance, and average speeds of elite squash players are determined. The method was validated against results from a previous study using manual tracking, where the proposed method (filtered coordinates) displayed an average absolute percent error, relative to the manual approach, of 3.73% in total distance traveled, and 3.52% and 1.26% in average speeds <9 m/s with and without speeds <1 m/s, respectively. The method has proven effective in collecting kinematic data on elite squash players in a timely manner with no special camera setup and limited manual intervention.
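Inverse perspective mapping of the kind used above amounts to applying a 3x3 homography to pixel coordinates. A minimal sketch with a made-up homography matrix (in practice the matrix would be estimated from the court corners visible in the broadcast frame, e.g. with OpenCV's findHomography):

```python
def apply_homography(H, x, y):
    """Map a pixel coordinate to court coordinates (metres) with a 3x3
    homography, i.e. inverse perspective mapping."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    cx = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    cy = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return cx, cy

# A hypothetical homography for illustration only.
H = [[0.01, 0.0,   0.0],
     [0.0,  0.02, -1.0],
     [0.0,  0.001, 1.0]]

# A player's feet detected at pixel (320, 500) map to court coordinates:
cx, cy = apply_homography(H, 320, 500)
print(round(cx, 2), round(cy, 2))
```

Once positions are in court coordinates, metrics such as distance traveled and 'T' dominance follow directly from frame-to-frame differences of the mapped points.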
APA, Harvard, Vancouver, ISO, and other styles
9

Pueo, Basilio, and Jose Manuel Jimenez-Olmedo. "Application of motion capture technology for sport performance analysis (El uso de la tecnología de captura de movimiento para el análisis del rendimiento deportivo)." Retos, no. 32 (March 14, 2017): 241–47. http://dx.doi.org/10.47197/retos.v0i32.56072.

Full text of the source
Abstract:
In sport performance, motion capture aims at tracking and recording athletes' human motion in real time to analyze physical condition, athletic performance, technical expertise, and injury mechanisms, prevention, and rehabilitation. The aim of this paper is to systematically review the latest developments in motion capture systems for the analysis of sport performance. To that end, selected keywords were searched in studies published in the last four years in the electronic databases ISI Web of Knowledge, Scopus, PubMed, and SPORTDiscus, which resulted in 892 potential records. After duplicate removal and screening of the remaining records, 81 journal papers were retained for inclusion in this review, distributed as 53 records for optical systems, 15 records for non-optical systems, and 13 records for markerless systems. The resulting records were screened to distribute them into the following analysis categories: biomechanical motion analysis, validation of new systems, and performance enhancement. Although optical systems are regarded as the gold standard for accuracy, the cost of equipment and the time needed to capture and postprocess data have led researchers to test other technologies. First, non-optical systems rely on attaching sensors to body parts to send their spatial information to a computer wirelessly by means of different technologies, such as electromagnetic and inertial (accelerometry) sensing. Finally, markerless systems are adequate for free, unobtrusive motion analysis since no attachment is carried by athletes. However, more sensors and sophisticated signal processing must be used to reach the expected level of accuracy.
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, L., B. Wu, and Y. Zhao. "A REAL-TIME PHOTOGRAMMETRIC SYSTEM FOR MONITORING HUMAN MOVEMENT DYNAMICS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 561–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-561-2020.

Full text of the source
Abstract:
Abstract. The human body posture is rich with dynamic information that can be captured by algorithms, and many applications rely on this type of data (e.g., action recognition, people re-identification, human-computer interaction, industrial robotics). The recent development of smart cameras and affordable red-green-blue-depth (RGB-D) sensors has enabled cost-efficient estimation and tracking of human body posture. However, the reliability of single sensors is often insufficient due to occlusion problems, field-of-view limitations, and the limited measurement distances of RGB-D sensors. Furthermore, a large-scale real-time response is often required in certain applications, such as physical rehabilitation, where human actions must be detected and monitored over time, or in industries where human motion is monitored to maintain a predictable movement flow in a shared workspace. Large-scale markerless motion-capture systems have therefore received extensive research attention in recent years. In this paper, we propose a real-time photogrammetric system that incorporates multithreading and a graphics processing unit (GPU)-accelerated solution for extracting 3D human body dynamics in real time. The system includes a stereo camera with preliminary calibration, from which left-view and right-view frames are loaded. A dense image-matching algorithm is then combined with GPU acceleration to generate a real-time disparity map, which is further extended to a 3D map array obtained by photogrammetric processing based on the camera orientation parameters. The 3D body features are acquired from 2D body skeletons extracted with regional multi-person pose estimation (RMPE) and the corresponding 3D coordinates of each joint in the 3D map array. These 3D body features are then extracted and visualised in real time by multithreading, from which human movement dynamics (e.g., moving speed, knee pressure angle) are derived.
The results reveal that the processing rate (pose frame rate) can reach 20 fps (frames per second) or above in our experiments (using two NVIDIA 2080Ti GPUs and two 12-core CPUs), depending on the GPU exploited by the detector, and the monitoring distance can reach 15 m with a geometric accuracy better than 1% of the distance. This real-time photogrammetric system is an effective solution for monitoring 3D human body dynamics. It uses low-cost RGB stereo cameras controlled by consumer GPU-enabled computers, and no other specialised hardware is required. The system has great potential for applications such as motion tracking, 3D body information extraction, and human dynamics monitoring.
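The moving-speed metric mentioned above follows directly from a 3D joint trajectory and the frame rate. A small illustration with invented pelvis positions:

```python
import math

def moving_speed(track, fps):
    """Average speed (m/s) of a 3D joint trajectory sampled at `fps` frames/s."""
    # Sum Euclidean distances between consecutive 3D positions.
    dist = sum(math.dist(p, q) for p, q in zip(track, track[1:]))
    duration = (len(track) - 1) / fps
    return dist / duration

# Pelvis positions (metres) over 5 frames at 20 fps: 0.05 m per frame.
track = [(0.00, 0.0, 5.0), (0.05, 0.0, 5.0), (0.10, 0.0, 5.0),
         (0.15, 0.0, 5.0), (0.20, 0.0, 5.0)]
print(moving_speed(track, fps=20))  # 0.05 m every 1/20 s -> 1.0 m/s
```

In practice the trajectory would come from the 3D map array described in the abstract, and a short smoothing window would typically be applied before differencing to suppress per-frame jitter.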
APA, Harvard, Vancouver, ISO, and other styles
11

Manni, Francesca, Fons van der Sommen, Svitlana Zinger, Caifeng Shan, Ronald Holthuizen, Marco Lai, Gustav Buström, et al. "Hyperspectral Imaging for Skin Feature Detection: Advances in Markerless Tracking for Spine Surgery." Applied Sciences 10, no. 12 (June 12, 2020): 4078. http://dx.doi.org/10.3390/app10124078.

Full text of the source
Abstract:
In spinal surgery, surgical navigation is an essential tool for safe intervention, including the placement of pedicle screws without injury to nerves and blood vessels. Commercially available systems typically rely on the tracking of a dynamic reference frame attached to the spine of the patient. However, the reference frame can be dislodged or obscured during the surgical procedure, resulting in loss of navigation. Hyperspectral imaging (HSI) captures a large number of spectral information bands across the electromagnetic spectrum, providing image information unseen by the human eye. We aim to exploit HSI to detect skin features in a novel methodology to track patient position in navigated spinal surgery. In our approach, we adopt two local feature detection methods, namely a conventional handcrafted local feature and a deep learning-based feature detection method, which are compared to estimate the feature displacement between different frames due to motion. To demonstrate the ability of the system in tracking skin features, we acquire hyperspectral images of the skin of 17 healthy volunteers. Deep-learned skin features are detected and localized with an average error of only 0.25 mm, outperforming the handcrafted local features with respect to the ground truth based on the use of optical markers.
APA, Harvard, Vancouver, ISO, and other styles
12

Cano, Alberto, Enrique Yeguas-Bolivar, Rafael Muñoz-Salinas, Rafael Medina-Carnicer, and Sebastián Ventura. "Parallelization strategies for markerless human motion capture." Journal of Real-Time Image Processing 14, no. 2 (November 12, 2014): 453–67. http://dx.doi.org/10.1007/s11554-014-0467-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Scott, Bradley, Martin Seyres, Fraser Philp, Edward K. Chadwick, and Dimitra Blana. "Healthcare applications of single camera markerless motion capture: a scoping review." PeerJ 10 (May 26, 2022): e13517. http://dx.doi.org/10.7717/peerj.13517.

Full text of the source
Abstract:
Background Single camera markerless motion capture has the potential to facilitate at home movement assessment due to the ease of setup, portability, and affordable cost of the technology. However, it is not clear what the current healthcare applications of single camera markerless motion capture are and what information is being collected that may be used to inform clinical decision making. This review aims to map the available literature to highlight potential use cases and identify the limitations of the technology for clinicians and researchers interested in the collection of movement data. Survey Methodology Studies were collected up to 14 January 2022 using Pubmed, CINAHL and SPORTDiscus using a systematic search. Data recorded included the description of the markerless system, clinical outcome measures, and biomechanical data mapped to the International Classification of Functioning, Disability and Health Framework (ICF). Studies were grouped by patient population. Results A total of 50 studies were included for data collection. Use cases for single camera markerless motion capture technology were identified for Neurological Injury in Children and Adults; Hereditary/Genetic Neuromuscular Disorders; Frailty; and Orthopaedic or Musculoskeletal groups. Single camera markerless systems were found to perform well in studies involving single plane measurements, such as in the analysis of infant general movements or spatiotemporal parameters of gait, when evaluated against 3D marker-based systems and a variety of clinical outcome measures. However, they were less capable than marker-based systems in studies requiring the tracking of detailed 3D kinematics or fine movements such as finger tracking. 
Conclusions Single camera markerless motion capture offers great potential for extending the scope of movement analysis outside of laboratory settings in a practical way, but currently suffers from a lack of accuracy where detailed 3D kinematics are required for clinical decision making. Future work should therefore focus on improving tracking accuracy of movements that are out of plane relative to the camera orientation or affected by occlusion, such as supination and pronation of the forearm.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Dianyong, Zhenjiang Miao, and Shengyong Chen. "Human Model Adaptation for Multiview Markerless Motion Capture." Mathematical Problems in Engineering 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/564214.

Full text of the source
Abstract:
We present an approach to automatic modeling of individual human bodies using complex shape and pose information, addressing the need to generate human shape and pose models for markerless motion capture. For multi-view markerless motion capture, three-dimensional morphable models are learned from an existing database of registered body scans in different shapes and poses. We estimate the body skeleton and pose parameters from the visual hull mesh reconstructed from multiple human silhouettes. Pose variation of body shapes is implemented by the defined underlying skeleton. The shape parameters are estimated by fitting the morphable model to the silhouettes, relying on extracted silhouettes only. An error function is defined to measure how well the human model fits the input data and is minimized to obtain a good estimate. Experiments on several data sets show the robustness of the method, in which the body shape and the initial pose are obtained automatically.
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Miaopeng, Zimeng Zhou, and Xinguo Liu. "Cross Refinement Techniques for Markerless Human Motion Capture." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 1 (April 2, 2020): 1–18. http://dx.doi.org/10.1145/3372207.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Sophie L., Gene Civillico, Wesley Niswander, and Kimberly L. Kontson. "Comparison of Motion Analysis Systems in Tracking Upper Body Movement of Myoelectric Bypass Prosthesis Users." Sensors 22, no. 8 (April 12, 2022): 2953. http://dx.doi.org/10.3390/s22082953.

Full text of the source
Abstract:
Current literature lacks a comparative analysis of different motion capture systems for tracking upper limb (UL) movement as individuals perform standard tasks. To better understand the performance of various motion capture systems in quantifying UL movement in the prosthesis user population, this study compares joint angles derived from three systems that vary in cost and motion capture mechanisms: a marker-based system (Vicon), an inertial measurement unit system (Xsens), and a markerless system (Kinect). Ten healthy participants (5F/5M; 29.6 ± 7.1 years) were trained with a TouchBionic i-Limb Ultra myoelectric terminal device mounted on a bypass prosthetic device. Participants were simultaneously recorded with all systems as they performed standardized tasks. Root mean square error and bias values for degrees of freedom in the right elbow, shoulder, neck, and torso were calculated. The IMU system yielded more accurate kinematics for shoulder, neck, and torso angles while the markerless system performed better for the elbow angles. By evaluating the ability of each system to capture kinematic changes of simulated upper limb prosthesis users during a variety of standardized tasks, this study provides insight into the advantages and limitations of using different motion capture technologies for upper limb functional assessment.
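Joint angles of the kind compared in this study are commonly computed from three tracked landmark positions, whichever system produced them. A minimal sketch (the landmark coordinates are invented for illustration, not taken from the paper):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c,
    e.g. the elbow angle from shoulder, elbow, and wrist positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def bias(est, ref):
    """Mean signed error: positive means the system overestimates the angle."""
    return sum(e - r for e, r in zip(est, ref)) / len(est)

# A right-angled arm: upper arm vertical, forearm horizontal (metres).
shoulder = (0.0, 1.4, 0.0)
elbow    = (0.0, 1.1, 0.0)
wrist    = (0.3, 1.1, 0.0)
angle = joint_angle(shoulder, elbow, wrist)
print(round(angle, 1))  # 90.0

# Bias of a hypothetical markerless series against a reference series.
print(bias([91.0, 88.0, 93.0], [90.0, 90.0, 90.0]))
```

Computing the same angle definition from each system's landmarks, then summarizing with RMSE and bias as the study does, isolates tracking error from differences in angle conventions.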
APA, Harvard, Vancouver, ISO, and other styles
17

TAKAHASHI, Kazuhiko, Yusuke NAGASAWA, and Masafumi HASHIMOTO. "Markerless Human Motion Capture from Voxel Reconstruction with Simple Human Model." Journal of Advanced Mechanical Design, Systems, and Manufacturing 2, no. 6 (2008): 985–97. http://dx.doi.org/10.1299/jamdsm.2.985.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Ferryanto, F., Andi Isra Mahyuddin, and Motomu Nakashima. "Markerless Optical Motion Capture System for Asymmetrical Swimming Stroke." Journal of Engineering and Technological Sciences 54, no. 5 (September 5, 2022): 220503. http://dx.doi.org/10.5614/j.eng.technol.sci.2022.54.5.3.

Full text of the source
Abstract:
This work presents the development of a markerless optical motion capture system for the front-crawl swimming stroke. The system uses only one underwater camera to record swimming motion in the sagittal plane. The participant in this experiment was a swimmer who is active in the university's swimming club. The recorded images were segmented to obtain silhouettes of the participant using a Gaussian mixture model. One of the swimming images was employed to generate a human body model consisting of 15 segments. The silhouette and model of the participant were subjected to an image-matching process, with the shape of each body segment used as the matching feature. The model was transformed to estimate the pose of the participant. The intraclass correlation coefficient between the results of the developed system and the references was evaluated. In general, all body segments except the head and trunk had a correlation coefficient higher than 0.95. Dynamics analysis with SWUM was then conducted based on the joint angles acquired in the present work. The simulation implied that the developed system is suitable for the daily training of athletes and coaches due to its simplicity and accuracy.
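The intraclass correlation coefficient used above can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) with invented angle data; the paper does not state which ICC form it used, so the choice of form, and the data, are assumptions:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `data` is a list of per-subject rows, one column per measurement system."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    # Two-way ANOVA sums of squares.
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical joint angles (degrees) for four strokes, measured by the
# markerless system and a reference method.
angles = [(10, 11), (20, 21), (30, 29), (40, 41)]
print(round(icc2_1(angles), 3))  # 0.997 -> close agreement
```

Values above roughly 0.95, as reported for most body segments in this study, indicate the two methods can be used interchangeably for those segments.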
APA, Harvard, Vancouver, ISO, and other styles
19

Jung, Jibum, et al. "Use of Human Motion Data to Train Wearable Robots." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 11, 2021): 807–11. http://dx.doi.org/10.17762/turcomat.v12i6.2100.

Full text of the source
Abstract:
Development of wearable robots is accelerating. Walking robots mimic human behavior and must operate without accidents, so human motion data are needed to train them. We developed a system for extracting human motion data and displaying them graphically. We extracted motion data using a Perception Neuron motion capture system and used the Unity engine for the simulation. Several experiments were performed to demonstrate the accuracy of the extracted motion data. Of the various methods used to collect human motion data, markerless motion capture is highly inaccurate, while optical motion capture is very expensive, requiring several high-resolution cameras and a large number of markers; motion capture using a magnetic field sensor is subject to environmental interference. Therefore, we used an inertial motion capture system. Each movement sequence involved four and was repeated 10 times. The data were stored and standardized. The motions of three individuals were compared to those of a reference person; the similarity exceeded 90% in all cases. Our rehabilitation robot accurately simulated human movements: individually tailored wearable robots could be designed based on our data. Safe and stable robot operation can be verified in advance via simulation. Walking stability can be increased using walking robots trained via machine-learning algorithms.
APA, Harvard, Vancouver, ISO, and other styles
20

Yang, Sylvia X. M., Martin S. Christiansen, Peter K. Larsen, Tine Alkjær, Thomas B. Moeslund, Erik B. Simonsen, and Niels Lynnerup. "Markerless motion capture systems for tracking of persons in forensic biomechanics: an overview." Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2, no. 1 (October 29, 2013): 46–65. http://dx.doi.org/10.1080/21681163.2013.834800.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
21

Yeguas-Bolivar, Enrique, Rafael Muñoz-Salinas, Rafael Medina-Carnicer, and Angel Carmona-Poyato. "Comparing evolutionary algorithms and particle filters for Markerless Human Motion Capture." Applied Soft Computing 17 (April 2014): 153–66. http://dx.doi.org/10.1016/j.asoc.2014.01.007.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
22

Reimer, Lara Marie, Maximilian Kapsecker, Takashi Fukushima, and Stephan M. Jonas. "Evaluating 3D Human Motion Capture on Mobile Devices." Applied Sciences 12, no. 10 (May 10, 2022): 4806. http://dx.doi.org/10.3390/app12104806.

Full text of the source
Abstract:
Computer-vision-based frameworks enable markerless human motion capture on consumer-grade devices in real-time. They open up new possibilities for application, such as in the health and medical sector. So far, research on mobile solutions has focused on 2-dimensional motion capture frameworks. 2D motion analysis is limited by the viewing angle of the positioned camera. New frameworks enable 3-dimensional human motion capture and can be supported by additional smartphone sensors such as LiDAR. 3D motion capture promises to overcome the limitations of 2D frameworks by considering all three movement planes independently of the camera angle. In this study, we performed a laboratory experiment with ten subjects, comparing the joint angles in eight different body-weight exercises tracked by Apple ARKit, a mobile 3D motion capture framework, against a gold-standard system for motion capture: the Vicon system. The 3D motion capture framework showed a weighted Mean Absolute Error of 18.80° ± 12.12° (ranging from 3.75° ± 0.99° to 47.06° ± 5.11° per tracked joint angle and exercise) and a Mean Spearman Rank Correlation Coefficient of 0.76 for the whole data set. The data set shows a high variance of these two metrics across the observed angles and performed exercises. The observed accuracy is influenced by the visibility of the joints and the observed motion. While the 3D motion capture framework is a promising technology that could enable several use cases in the entertainment, health, and medical areas, its limitations should be considered for each potential application area.
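The two agreement metrics used in this study, mean absolute error and the Spearman rank correlation, follow standard definitions and can be computed without a statistics package; a minimal sketch (with average ranks for ties) is:

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def mean_absolute_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5
```

Applied per joint angle and exercise, the first gives the error magnitude in degrees and the second the monotone agreement between the two systems' angle curves.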
APA, Harvard, Vancouver, ISO and other styles
23

Zou, Beiji, Shu Chen, Cao Shi, and Umugwaneza Marie Providence. "Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking." Pattern Recognition 42, no. 7 (July 2009): 1559–71. http://dx.doi.org/10.1016/j.patcog.2008.12.024.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
24

Shingade, Ashish, and Archana Ghotkar. "Animation of 3D Human Model Using Markerless Motion Capture Applied To Sports." International Journal of Computer Graphics & Animation 4, no. 1 (January 31, 2014): 27–39. http://dx.doi.org/10.5121/ijcga.2014.4103.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
25

Molet, Tom, Ronan Boulic, and Daniel Thalmann. "Human Motion Capture Driven by Orientation Measurements." Presence: Teleoperators and Virtual Environments 8, no. 2 (April 1999): 187–203. http://dx.doi.org/10.1162/105474699566161.

Full text of the source
Abstract:
Motion-capture techniques are rarely based on orientation measurements for two main reasons: (1) optical motion-capture systems are designed for tracking object positions rather than their orientations (which can be deduced from several trackers), (2) known animation techniques, like inverse kinematics or geometric algorithms, require position targets constantly, but orientation inputs only occasionally. We propose a complete human motion-capture technique based essentially on orientation measurements. The position measurement is used only for recovering the global position of the performer. This method allows fast tracking of human gestures for interactive applications as well as high-rate recording. Several motion-capture optimizations, including the multijoint technique, improve the posture realism. This work is well suited for magnetic-based systems, which (in our environment) rely more on orientation registration than on position measurements that necessitate difficult system calibration.
APA, Harvard, Vancouver, ISO and other styles
26

Saini, Sanjay, Dayang Rohaya Bt Awang Rambli, Suziah Bt Sulaiman, M. Nordin B. Zakaria, and Siti Rohkmah. "Markerless Multi-view Human Motion Tracking Using Manifold Model Learning by Charting." Procedia Engineering 41 (2012): 664–70. http://dx.doi.org/10.1016/j.proeng.2012.07.227.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
27

Saini, Sanjay, Nordin Zakaria, Dayang Rohaya Awang Rambli, and Suziah Sulaiman. "Markerless Human Motion Tracking Using Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization." PLOS ONE 10, no. 5 (May 15, 2015): e0127833. http://dx.doi.org/10.1371/journal.pone.0127833.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
28

Moro, Matteo, Giorgia Marchesi, Filip Hesse, Francesca Odone, and Maura Casadio. "Markerless vs. Marker-Based Gait Analysis: A Proof of Concept Study." Sensors 22, no. 5 (March 4, 2022): 2011. http://dx.doi.org/10.3390/s22052011.

Full text of the source
Abstract:
The analysis of human gait is an important tool in medicine and rehabilitation to evaluate the effects and the progression of neurological diseases resulting in neuromotor disorders. In these fields, the gold-standard techniques for gait analysis rely on motion capture systems and markers. However, these systems present drawbacks: they are expensive, time consuming, and they can affect the naturalness of the motion. For these reasons, in the last few years, considerable effort has been spent to study and implement markerless systems based on videography for gait analysis. Unfortunately, only few studies quantitatively compare the differences between markerless and marker-based systems in 3D settings. This work presented a new RGB video-based markerless system leveraging computer vision and deep learning to perform 3D gait analysis. These results were compared with those obtained by a marker-based motion capture system. To this end, we acquired simultaneously with the two systems a multimodal dataset of 16 people repeatedly walking in an indoor environment. With the two methods we obtained similar spatio-temporal parameters. The joint angles were comparable, except for a slight underestimation of the maximum flexion of the ankle and knee angles. Taken together, these results highlight the possibility of adopting markerless techniques for gait analysis.
APA, Harvard, Vancouver, ISO and other styles
29

Schönauer, Christian, and Hannes Kaufmann. "Wide Area Motion Tracking Using Consumer Hardware." International Journal of Virtual Reality 12, no. 1 (January 1, 2013): 57–65. http://dx.doi.org/10.20870/ijvr.2013.12.1.2858.

Full text of the source
Abstract:
In this paper we present a wide area tracking system based on consumer hardware and available motion capture modules and middleware. We use multiple depth cameras for human pose tracking in order to increase the captured space. Commercially available cameras can capture human movements in a non-intrusive way, while associated software modules produce pose information for a simplified skeleton model. We calibrate the cameras relative to each other to seamlessly combine their tracking data. Our design allows an arbitrary number of sensors to be integrated and used in parallel over a local area network. This enables us to capture human movements in a large, arbitrarily shaped area. In addition, we can improve motion capture data in regions where the fields of view of multiple cameras overlap, by mutually completing partly occluded poses. In various examples we demonstrate how human pose data are merged in order to cover a wide area and how these data can easily be used for character animation in a virtual environment.
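Once all cameras are calibrated into a shared world frame, the merging of overlapping skeletons can be sketched as confidence-weighted averaging per joint; the data layout and weighting below are assumptions for illustration, not the system's actual middleware API:

```python
def merge_skeletons(poses):
    """poses: list of {joint_name: (x, y, z, confidence)} dicts, one per
    camera, already expressed in the shared world frame. Returns a single
    skeleton with confidence-weighted joint positions; joints seen by only
    one camera fill in occlusions in the others."""
    merged = {}
    joints = set().union(*(p.keys() for p in poses))
    for j in joints:
        obs = [p[j] for p in poses if j in p and p[j][3] > 0]
        if not obs:
            continue  # joint occluded in every camera: leave it out
        w = sum(o[3] for o in obs)
        merged[j] = tuple(sum(o[i] * o[3] for o in obs) / w for i in range(3))
    return merged
```

A joint tracked by one camera only is passed through unchanged, which is how one sensor can complete a pose partly occluded in another.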
APA, Harvard, Vancouver, ISO and other styles
30

Wan, Chengkai, Baozong Yuan, and Zhenjiang Miao. "Markerless human body motion capture using Markov random field and dynamic graph cuts." Visual Computer 24, no. 5 (January 8, 2008): 373–80. http://dx.doi.org/10.1007/s00371-007-0195-7.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
31

Röhling, Hanna Marie, Patrik Althoff, Radina Arsenova, Daniel Drebinger, Norman Gigengack, Anna Chorschew, Daniel Kroneberg, et al. "Proposal for Post Hoc Quality Control in Instrumented Motion Analysis Using Markerless Motion Capture: Development and Usability Study." JMIR Human Factors 9, no. 2 (April 1, 2022): e26825. http://dx.doi.org/10.2196/26825.

Full text of the source
Abstract:
Background: Instrumented assessment of motor symptoms has emerged as a promising extension to the clinical assessment of several movement disorders. The use of mobile and inexpensive technologies such as some markerless motion capture technologies is especially promising for large-scale application but has not transitioned into clinical routine to date. A crucial step on this path is to implement standardized, clinically applicable tools that identify and control for quality concerns. Objective: The main goal of this study comprises the development of a systematic quality control (QC) procedure for data collected with markerless motion capture technology and its experimental implementation to identify specific quality concerns and thereby rate the usability of recordings. Methods: We developed a post hoc QC pipeline that was evaluated using a large set of short motor task recordings of healthy controls (2010 recordings from 162 subjects) and people with multiple sclerosis (2682 recordings from 187 subjects). For each of these recordings, 2 raters independently applied the pipeline. They provided overall usability decisions and identified technical and performance-related quality concerns, which yielded respective proportions of their occurrence as a main result. Results: The approach developed here has proven user-friendly and applicable on a large scale. Raters’ decisions on recording usability were concordant in 71.5%-92.3% of cases, depending on the motor task. Furthermore, 39.6%-85.1% of recordings were concordantly rated as being of satisfactory quality, whereas in 5.0%-26.3%, both raters agreed to discard the recording. Conclusions: We present a QC pipeline that seems feasible and useful for instant quality screening in the clinical setting. Results confirm the need for QC despite using standard test setups, testing protocols, and operator training for the employed system and, by extension, for other task-based motor assessment technologies. Results of the QC process can be used to clean existing data sets, optimize quality assurance measures, and foster the development of automated QC approaches, thereby improving the overall reliability of kinematic data sets.
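The reported rater concordance is a raw agreement proportion; a common chance-corrected companion metric (not computed in the paper, shown here for illustration) is Cohen's kappa:

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical labels.
    Assumes at least two categories occur, so expected agreement < 1."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    p_expected = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa discounts the agreement two raters would reach by chance given their label frequencies, which matters when most recordings are rated usable.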
APA, Harvard, Vancouver, ISO and other styles
32

Ferryanto, F., Andi Isra Mahyuddin, and Motomu Nakashima. "DEVELOPMENT OF A MARKERLESS OPTICAL MOTION CAPTURE SYSTEM BY AN ACTION SPORTS CAMERA FOR RUNNING MOTION." ASEAN Engineering Journal 12, no. 2 (June 1, 2022): 37–44. http://dx.doi.org/10.11113/aej.v12.16760.

Full text of the source
Abstract:
A marker-based optical motion capture system is often used to obtain the kinematic parameters of a running analysis. However, the attached markers could affect the participant's movement, and the system is costly because of the specialized cameras. Due to these drawbacks, the present research aimed to develop an affordable markerless optical motion capture system for running motion. The proposed system used an action sports camera to acquire running images of the participant. The images were segmented to get the silhouette of the participant. Then, a human body model was generated to provide a priori information for tracking the participant's segment positions. The subsequent procedure was image registration to estimate the pose of the participant's silhouette. The transformation parameters were estimated by particle swarm optimization. The optimization output, in the form of the rotation angle of each body segment, was then employed to distinguish the right and left lower limbs. To validate the results of the optimization, manual matching was conducted to obtain the actual rotation angles of all body segments. The correlation coefficient between the rotation angle from image registration and the actual rotation angle was then evaluated. The lowest correlation coefficient was 0.977, for the left foot, implying that the accuracy of the developed system is acceptable. Furthermore, the results of the kinematic analysis agree well with the literature. Therefore, the developed system not only yields acceptable running parameters but is also affordable, since it uses an action sports camera, and easy to use.
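The particle swarm optimization step can be sketched generically. In the paper the objective is a silhouette-overlap cost over pose transformation parameters; the usage example below minimizes a toy quadratic instead, and all hyperparameters (`w`, `c1`, `c2`, swarm size) are illustrative defaults:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(position) over a box [lo, hi]^dim with a basic
    global-best particle swarm. Returns (best_position, best_cost)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting, `cost` would render the transformed body model and score its mismatch against the observed silhouette.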
APA, Harvard, Vancouver, ISO and other styles
33

Bittner, Marian, Wei-Tse Yang, Xucong Zhang, Ajay Seth, Jan van Gemert, and Frans C. T. van der Helm. "Towards Single Camera Human 3D-Kinematics." Sensors 23, no. 1 (December 28, 2022): 341. http://dx.doi.org/10.3390/s23010341.

Full text of the source
Abstract:
Markerless estimation of 3D kinematics has great potential for clinically diagnosing and monitoring movement disorders without referrals to expensive motion capture labs; however, current approaches are limited by performing multiple decoupled steps to estimate the kinematics of a person from videos. Most current techniques work in a multi-step approach by first detecting the pose of the body and then fitting a musculoskeletal model to the data for accurate kinematic estimation. Errors in the training data of the pose detection algorithms, model scaling, as well as the requirement of multiple cameras limit the use of these techniques in a clinical setting. Our goal is to pave the way toward fast, easily applicable and accurate 3D kinematic estimation. To this end, we propose a novel approach for direct 3D human kinematic estimation (D3KE) from videos using deep neural networks. Our experiments demonstrate that the proposed end-to-end training is robust and outperforms 2D and 3D markerless motion capture based kinematic estimation pipelines in terms of joint angle error by a large margin (35%, from 5.44 to 3.54 degrees). We show that D3KE is superior to the multi-step approach and can run at video framerate speeds. This technology shows the potential for clinical analysis from mobile devices in the future.
APA, Harvard, Vancouver, ISO and other styles
34

OIDA, Takashi, Junichiro HORI, Kazuhiko TAKAHASHI, and Masafumi HASHIMOTO. "Markerless Human Motion Capture Using Multiple Images of 3D Articulated Human CG Model(Mechanical Systems)." Transactions of the Japan Society of Mechanical Engineers Series C 76, no. 772 (2010): 3422–29. http://dx.doi.org/10.1299/kikaic.76.3422.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
35

Yoon, Soocheol, Ya-Shian Li-Baboud, Ann Virts, Roger Bostelman, and Mili Shah. "Feasibility of using depth cameras for evaluating human - exoskeleton interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1892–96. http://dx.doi.org/10.1177/1071181322661190.

Full text of the source
Abstract:
With the increased use of exoskeletons in a variety of fields such as industry, military, and health care, there is a need for measurement standards to understand the effects of exoskeletons on human motion. Optical tracking systems (OTS) provide high accuracy human motion tracking, but are expensive, require markers, and constrain the tests to a specified area where the cameras can provide sufficient coverage. This study describes the feasibility of using lower cost, portable, markerless depth camera systems for measuring human and exoskeleton 3-dimensional (3D) joint positions and angles. A human performing a variety of industrial tasks while wearing three different exoskeletons was tracked by both an OTS with modified skeletal models and a depth camera body tracking system. A comparison of the acquired data was then used to facilitate discussions regarding the potential use of depth cameras for exoskeleton evaluation.
APA, Harvard, Vancouver, ISO and other styles
36

Chen, Pengzhan, Ye Kuang, and Jie Li. "Human Motion Capture Algorithm Based on Inertial Sensors." Journal of Sensors 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/4343797.

Full text of the source
Abstract:
On the basis of inertial navigation, we conducted a comprehensive analysis of the principles of human body kinematics. In terms of two characteristic parameters, displacement and movement angle, we calculated the attitude of each node during the human motion capture process by combining complementary and Kalman filters. We then evaluated the performance of the proposed attitude strategy on several validation platforms. Results show that the proposed strategy achieves higher accuracy in real-time tracking of human motion than the traditional strategy.
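A minimal one-axis version of the complementary-filter idea (fusing integrated gyro rate with accelerometer tilt) can be written as follows; the gain `alpha` and the 2D gravity model are simplifying assumptions, not the paper's full complementary-plus-Kalman design:

```python
import math

def complementary_pitch(gyro_rates, accels, dt, alpha=0.98):
    """Fuse a gyro pitch rate (rad/s) with an accelerometer tilt estimate
    (ax, az in units of g) using a first-order complementary filter:
    high-pass the integrated gyro, low-pass the accelerometer angle."""
    pitch = math.atan2(accels[0][0], accels[0][1])  # initialize from gravity
    out = [pitch]
    for rate, (ax, az) in zip(gyro_rates[1:], accels[1:]):
        acc_pitch = math.atan2(ax, az)              # noisy but drift-free
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * acc_pitch
        out.append(pitch)
    return out
```

The gyro term tracks fast motion; the small accelerometer term continuously pulls the estimate back toward gravity, so a constant gyro bias produces a bounded offset instead of unbounded drift.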
APA, Harvard, Vancouver, ISO and other styles
37

Marin, Frédéric. "Human and Animal Motion Tracking Using Inertial Sensors." Sensors 20, no. 21 (October 26, 2020): 6074. http://dx.doi.org/10.3390/s20216074.

Full text of the source
Abstract:
Motion is key to health and wellbeing, something we are particularly aware of in times of lockdowns and restrictions on movement. Considering the motion of humans and animals as a biomarker of the performance of the neuro-musculoskeletal system, its analysis covers a large array of research fields, such as sports, equine science and clinical applications, but also innovative methods and workplace analysis. In this Special Issue of Sensors, we focused on human and animal motion-tracking using inertial sensors. Ten research and two review papers, mainly on human movement, but also on the locomotion of the horse, were selected. The selection of articles in this Special Issue aims to display current innovative approaches exploring hardware and software solutions deriving from inertial sensors related to motion capture and analysis. The selected sample shows that the versatility and pervasiveness of inertial sensors has great potential for the years to come, as, for now, limitations and room for improvement still remain.
APA, Harvard, Vancouver, ISO and other styles
38

Qian, Huizu, Benbin Chen, Xuke Xia, Shengzhong Deng, and Yuxiang Wang. "D-H Parameter Method-based Wearable Motion Tracking." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012027. http://dx.doi.org/10.1088/1742-6596/2216/1/012027.

Full text of the source
Abstract:
Motion capture is a key technology for robots to accurately understand pedestrian intentions in scenarios of human-machine integration. Because of their limited spatial range and easy occlusion by obstacles, traditional optical motion capture systems often lose their detection targets. This paper proposes a wearable motion tracking method based on the D-H parameter method. By binding multiple wireless inertial sensor units, composed of accelerometers, magnetic flux sensors, and gyroscopes, to the moving parts of the user’s body, accurate and robust tracking of moving targets is achieved. The method uses the known pose information of the root node to find the pose state of each level in the reference coordinate system, and establishes a human joint rotation model and a bone position state model. The results show that the proposed motion tracking method reduces the model by 9 degrees of freedom compared with the traditional forward kinematics method, and algorithm efficiency is increased by about 20%, so the posture characteristics of the human body can be obtained accurately. The D-H parameter method is thus well suited to wearable human motion tracking.
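The D-H parameter method composes one homogeneous transform per link; a sketch of the standard formulation (parameter order `theta, d, a, alpha` assumed) chained from the root outward is:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 matrix."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the link transforms from the root (e.g. pelvis) outward and
    return the end-segment pose in the root's reference frame."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_params:
        T = matmul(T, dh_transform(theta, d, a, alpha))
    return T
```

For a planar two-link limb with unit segment lengths and the first joint flexed 90°, the end point lands two units along the rotated axis, which the test below checks.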
APA, Harvard, Vancouver, ISO and other styles
39

Stenum, Jan, Cristina Rossi, and Ryan T. Roemmich. "Two-dimensional video-based analysis of human gait using pose estimation." PLOS Computational Biology 17, no. 4 (April 23, 2021): e1008935. http://dx.doi.org/10.1371/journal.pcbi.1008935.

Full text of the source
Abstract:
Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack evaluation of video-based approaches using a dataset of human gait for a wide range of gait parameters on a stride-by-stride basis and a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m/s. Mean absolute errors of sagittal-plane hip, knee and ankle angles between motion capture and OpenPose were 4.0°, 5.6° and 7.4°, respectively. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
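The spatiotemporal parameters evaluated here can be derived from heel-strike events and heel positions; the sketch below uses an assumed data layout (not the paper's released workflow) and the common convention that a step spans one foot's strike to the other foot's next strike:

```python
def spatiotemporal_params(left_strikes, right_strikes, heel_x, fps):
    """left_strikes / right_strikes: frame indices of heel strikes.
    heel_x: {'L': [...], 'R': [...]} horizontal heel positions per frame (m).
    Returns per-step step times (s) and step lengths (m), where step length
    is the distance between the two heels at the leading foot's strike."""
    events = sorted([(f, "L") for f in left_strikes] +
                    [(f, "R") for f in right_strikes])
    times, lengths = [], []
    for (f0, s0), (f1, s1) in zip(events, events[1:]):
        if s0 == s1:        # consecutive strikes of one foot form a stride
            continue
        times.append((f1 - f0) / fps)
        lengths.append(abs(heel_x[s1][f1] - heel_x[s0][f1]))
    return times, lengths
```

Running this on keypoint trajectories from pose estimation and on marker trajectories, then differencing step-by-step, reproduces the kind of per-step error analysis reported above.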
APA, Harvard, Vancouver, ISO and other styles
40

Johnson, Caleb D., Jereme Outerleys, and Irene S. Davis. "Agreement Between Sagittal Foot and Tibia Angles During Running Derived From an Open-Source Markerless Motion Capture Platform and Manual Digitization." Journal of Applied Biomechanics 38, no. 2 (April 1, 2022): 111–16. http://dx.doi.org/10.1123/jab.2021-0323.

Full text of the source
Abstract:
Several open-source platforms for markerless motion capture offer the ability to track 2-dimensional (2D) kinematics using simple digital video cameras. We sought to establish the performance of one of these platforms, DeepLabCut. Eighty-four runners who had sagittal plane videos recorded of their left lower leg were included in the study. Data from 50 participants were used to train a deep neural network for 2D pose estimation of the foot and tibia segments. The trained model was used to process novel videos from 34 participants for continuous 2D coordinate data. Overall network accuracy was assessed using the train/test errors. Foot and tibia angles were calculated for 7 strides using manual digitization and markerless methods. Agreement was assessed with mean absolute differences and intraclass correlation coefficients. Bland–Altman plots and paired t tests were used to assess systematic bias. The train/test errors for the trained network were 2.87/7.79 pixels, respectively (0.5/1.2 cm). Compared to manual digitization, the markerless method was found to systematically overestimate foot angles and underestimate tibial angles (P < .01, d = 0.06–0.26). However, excellent agreement was found between the segment calculation methods, with mean differences ≤1° and intraclass correlation coefficients ≥0.90. Overall, these results demonstrate that open-source, markerless methods are a promising new tool for analyzing human motion.
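A sagittal 2D segment angle of the kind compared in this study reduces to an `atan2` over two keypoints; the vertical-reference convention below is an assumption for illustration, not necessarily the study's definition:

```python
import math

def segment_angle(proximal, distal):
    """Sagittal-plane angle of a body segment relative to vertical, from two
    2D keypoints (x, y) in image coordinates (y increases downward).
    Returns degrees; 0 means the segment hangs straight down."""
    dx = distal[0] - proximal[0]
    dy = distal[1] - proximal[1]
    return math.degrees(math.atan2(dx, dy))
```

Computing this per frame from the knee and ankle keypoints gives the tibia angle trace that is then compared against manual digitization.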
APA, Harvard, Vancouver, ISO and other styles
41

Phan, Gia-Hoang, Clint Hansen, Paolo Tommasino, Asif Hussain, Domenico Formica, and Domenico Campolo. "A Complementary Filter Design on SE(3) to Identify Micro-Motions during 3D Motion Tracking." Sensors 20, no. 20 (October 16, 2020): 5864. http://dx.doi.org/10.3390/s20205864.

Full text of the source
Abstract:
In 3D motion capture, multiple methods have been developed in order to optimize the quality of the captured data. While certain technologies, such as inertial measurement units (IMU), are mostly suitable for 3D orientation estimation at relatively high frequencies, other technologies, such as marker-based motion capture, are more suitable for 3D position estimation in a lower frequency range. In this work, we introduce a complementary filter that augments 3D motion capture data with high-frequency acceleration signals from an IMU. While the local optimization reduces the error of the motion tracking, the additional accelerations can help to detect micro-motions that are useful when dealing with high-frequency human motions or robotic applications. The combination with high-frequency accelerometers improves the accuracy of the data and helps to overcome limitations of motion capture when micro-motions are not traceable with a 3D motion tracking system. In our experimental evaluation, we demonstrate the improvements in the motion capture results during translational, rotational, and combined movements.
APA, Harvard, Vancouver, ISO and other styles
42

Li, Jia, ChengKai Wan, DianYong Zhang, ZhenJiang Miao, and BaoZong Yuan. "Markerless human motion capture by Markov random field and dynamic graph cuts with color constraints." Science in China Series F: Information Sciences 52, no. 2 (January 23, 2009): 252–59. http://dx.doi.org/10.1007/s11432-009-0040-x.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
43

Jiang, Fei, Ying Jie Yu, and Da Wei Yan. "Research on Wearable Human Motion Capture and Virtual Control." Applied Mechanics and Materials 686 (October 2014): 121–25. http://dx.doi.org/10.4028/www.scientific.net/amm.686.121.

Full text of the source
Abstract:
This paper designs a posture initialization and calibration method for inertial sensors attached to human limbs in arbitrary orientations. By performing specific initialization actions, the correspondence between sensors and joints is identified, and the coordinate transformations between each inertial sensor's coordinate system, the corresponding bone coordinate system, and the 3D human skeleton model are computed. Human body posture data are then updated in real time through coordinate conversion of the sensor attitudes and a depth-first traversal of the human skeleton tree, driving the simulated skeleton model with the captured motion and achieving real-time motion tracking.
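The depth-first traversal of the skeleton tree described above composes each sensor-derived local rotation with its parent's global orientation; a sketch with assumed 3x3 rotation matrices and joint names (illustrative, not the paper's data structures):

```python
def accumulate_poses(skeleton, local_rot, root, root_rot):
    """skeleton: {joint: [children]} tree. local_rot: each joint's rotation
    matrix (3x3) relative to its parent. Starting from the known pose of the
    root node, a depth-first traversal composes local rotations into global
    orientations in the reference frame."""
    def mat3mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    global_rot = {root: root_rot}
    stack = [root]
    while stack:
        joint = stack.pop()
        for child in skeleton.get(joint, []):
            global_rot[child] = mat3mul(global_rot[joint], local_rot[child])
            stack.append(child)
    return global_rot
```

Each frame, updating the local rotations from the sensors and rerunning the traversal yields the posture that drives the skeleton model.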
APA, Harvard, Vancouver, ISO and other styles
44

Cheung, Kong-man (German), Simon Baker, and Takeo Kanade. "Shape-From-Silhouette Across Time Part II: Applications to Human Modeling and Markerless Motion Tracking." International Journal of Computer Vision 63, no. 3 (April 1, 2005): 225–45. http://dx.doi.org/10.1007/s11263-005-6879-4.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
45

Fu, Qiang, Xingui Zhang, Jinxiu Xu, and Haimin Zhang. "Capture of 3D Human Motion Pose in Virtual Reality Based on Video Recognition." Complexity 2020 (November 20, 2020): 1–17. http://dx.doi.org/10.1155/2020/8857748.

Full text of the source
Abstract:
Motion pose capture technology can effectively solve the problem of defining character motion in 3D animation production and greatly reduce the workload of character motion control, thereby improving the efficiency of animation development and the fidelity of character motion. Motion gesture capture technology is widely used in virtual reality systems, virtual training grounds, and real-time tracking of the trajectories of general objects. This paper proposes an attitude estimation algorithm suited to embedded platforms. The conventional centralized Kalman filter is split into a two-step Kalman filter: the sensors are processed separately according to their characteristics, isolating cross-influence between them. An adaptive adjustment method based on fuzzy logic is also proposed: the acceleration, angular velocity, and geomagnetic field strength of the environment are used as fuzzy-logic inputs to judge the motion state of the carrier and then adjust the covariance matrix of the filter, converting the adaptive adjustment of the sensors into recognition of the motion state. To study human motion posture capture, a verification experiment was designed around an existing laboratory robotic arm; the experiment shows that the proposed capture method performs well. Capture experiments on human motion gestures show that the obtained pose-angle information restores the body motion well. A visual model of human motion posture capture was established, and comparison with the real situation shows that the simulation reproduces the motion process well. For human motion recognition, a binary classification model was evaluated on daily human behaviors. Experiments show that two-class motion gesture capture and recognition achieves good accuracy: the SVC performs excellently on the binary task, with accuracy above 90% under all optimization algorithms and a final recognition accuracy also above 90%. The time required for motion gesture capture and recognition is less than 2 s.
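The fuzzy-logic adjustment can be sketched as mapping acceleration and angular-rate magnitudes to a trust weight for the accelerometer measurement; the triangular memberships and thresholds below are illustrative assumptions, not the paper's rule base:

```python
def motion_state_weight(acc_norm, gyro_norm, g=9.81):
    """Fuzzy-style weighting for an adaptive filter: the closer the measured
    specific force is to gravity and the lower the angular rate, the more
    the accelerometer is trusted. Returns a gain in [0, 1] that could scale
    the measurement-noise covariance (assumed thresholds: 10% deviation
    from g, 1 rad/s)."""
    acc_dev = abs(acc_norm - g) / g
    mu_static_acc = max(0.0, 1.0 - acc_dev / 0.1)    # triangular membership
    mu_static_gyro = max(0.0, 1.0 - gyro_norm / 1.0)
    return min(mu_static_acc, mu_static_gyro)        # fuzzy AND
```

A weight near 1 indicates a quasi-static carrier, where the accelerometer reliably observes gravity; a weight near 0 indicates dynamic motion, where the filter should lean on the gyroscope instead.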
APA, Harvard, Vancouver, ISO and other styles
46

Shintemirov, Almas, Tasbolat Taunyazov, Bukeikhan Omarali, Aigerim Nurbayeva, Anton Kim, Askhat Bukeyev, and Matteo Rubagotti. "An Open-Source 7-DOF Wireless Human Arm Motion-Tracking System for Use in Robotics Research." Sensors 20, no. 11 (May 29, 2020): 3082. http://dx.doi.org/10.3390/s20113082.

Full text of the source
Abstract:
To extend the choice of inertial motion-tracking systems freely available to researchers and educators, this paper presents an alternative open-source design of a wearable 7-DOF wireless human arm motion-tracking system. Unlike traditional inertial motion-capture systems, the presented system employs a hybrid combination of two inertial measurement units and one potentiometer for tracking a single arm. The sequence of three design phases described in the paper demonstrates how the general concept of a portable human arm motion-tracking system was transformed into an actual prototype, by employing a modular approach with independent wireless data transmission to a control PC for signal processing and visualization. Experimental results, together with an application case study on real-time robot-manipulator teleoperation, confirm the applicability of the developed arm motion-tracking system for facilitating robotics research. The presented arm-tracking system also has potential to be employed in mechatronic system design education and related research activities. The system CAD design models and program codes are publicly available online and can be used by robotics researchers and educators as a design platform to build their own arm-tracking solutions for research and educational purposes.
47

Li, Xiangyang, Rui Wang, Zhe Xu, Lei Pan, and Zhili Zhang. "Data compensation based on the additional feature information for collaborative interactive operation with optical human motion capture system." International Journal of Modeling, Simulation, and Scientific Computing 09, no. 06 (December 2018): 1850060. http://dx.doi.org/10.1142/s1793962318500605.

Abstract:
As the effective capture region of an optical motion capture system is limited by the quantity, installation mode, resolution, and focus of its infrared cameras, the reflective markers on certain body parts (such as wrists, elbows, etc.) of multiple actual trainees may be obscured when they perform collaborative interactive operations. To address this issue, this paper proposes a motion data compensation method based on additional feature information provided by electromagnetic spatial position tracking equipment. The main working principle and detailed realization process of the proposed method are introduced step by step, and a practical implementation is presented to illustrate its validity and efficiency. The results show that the missing capture data and motion information of the relevant obscured markers on the arms can be retrieved with the proposed method, preventing the simulated motions of the corresponding virtual operators from being interrupted or deformed while multiple actual trainees perform collaborative interactive operations within the limited capture range of an optical human motion capture system.
48

Liu, Chen, Anna Wang, Chunguang Bu, Wenhui Wang, and Haijing Sun. "Human Motion Tracking with Less Constraint of Initial Posture from a Single RGB-D Sensor." Sensors 21, no. 9 (April 26, 2021): 3029. http://dx.doi.org/10.3390/s21093029.

Abstract:
High-quality and complete 4D reconstruction of human motion is of great significance for immersive VR and even human operation. However, it faces inevitable self-scanning constraints, and tracking under monocular settings is also strictly restricted. In this paper, we propose a human motion capture system that combines human priors with performance capture using only a single RGB-D sensor. To break the self-scanning constraint, we generate a complete mesh from the front-view input alone to initialize the geometric capture. Whereas most previous methods initialize their systems in a strict way to construct a correct warping field, we update the model while capturing motion, maintaining high fidelity while making the system easier to use. Additionally, we blend in human priors to improve the reliability of model warping. Extensive experiments demonstrate that our method can be used more comfortably while maintaining credible geometric warping and remaining free of self-scanning constraints.
49

Saini, Sanjay, Dayang Rohaya Bt Awang Rambli, M. Nordin B. Zakaria, and Suziah Bt Sulaiman. "A Review on Particle Swarm Optimization Algorithm and Its Variants to Human Motion Tracking." Mathematical Problems in Engineering 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/704861.

Abstract:
Automatic human motion tracking in video sequences is one of the most frequently tackled tasks in the computer vision community. The goal of human motion capture is to estimate the joint angles of the human body at any time. However, this is one of the most challenging problems in computer vision and pattern recognition due to the high-dimensional search space, self-occlusion, and high variability in human appearance. Several approaches have been proposed in the literature using different techniques. However, conventional approaches such as stochastic particle filtering are computationally costly, converge slowly, suffer from the curse of dimensionality, and demand a high number of evaluations to achieve accurate results. Particle swarm optimization (PSO) is a population-based global search algorithm that has been successfully applied to the human motion tracking problem and produces better results in high-dimensional search spaces. This paper presents a systematic literature survey on the PSO algorithm and its variants for human motion tracking. An attempt is made to provide a guide for researchers working in the field of PSO-based human motion tracking from video sequences. Additionally, the paper presents the performance of various model evaluation search strategies within a PSO tracking framework for 3D pose tracking.
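As a rough illustration of the search scheme this survey covers, the sketch below runs a minimal PSO over a toy joint-angle vector. The 4-joint "pose" and the quadratic stand-in cost are illustrative assumptions, not the survey's own model; a real tracker would score each candidate pose against image observations instead.

```python
# Minimal PSO sketch for pose search (illustrative only).
import random

def pso(cost, dim, n_particles=30, iters=100, bounds=(-3.14, 3.14),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy cost: squared distance of a 4-joint pose to a hidden "true" pose.
true_pose = [0.5, -1.0, 0.2, 1.3]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, true_pose))
best, best_cost = pso(cost, dim=4)
```

In a tracking setting the cost function dominates runtime, which is why the surveyed variants focus on reducing the number of pose evaluations.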
50

Tuli, Tadele Belay, and Martin Manns. "Real-Time Motion Tracking for Humans and Robots in a Collaborative Assembly Task." Proceedings 42, no. 1 (November 14, 2019): 48. http://dx.doi.org/10.3390/ecsa-6-06636.

Abstract:
Human-robot collaboration combines the extended capabilities of humans and robots to create a more inclusive and human-centered production system in the future. However, human safety is the primary concern for manufacturing industries. Therefore, real-time motion tracking is necessary to identify if the human worker body parts enter the restricted working space solely dedicated to the robot. Tracking these motions using decentralized and different tracking systems requires a generic model controller and consistent motion exchanging formats. In this work, our task is to investigate a concept for a unified real-time motion tracking for human-robot collaboration. In this regard, a low cost and game-based motion tracking system, e.g., HTC Vive, is utilized to capture human motion by mapping into a digital human model in the Unity3D environment. In this context, the human model is described using a biomechanical model that comprises joint segments defined by position and orientation. Concerning robot motion tracking, a unified robot description format is used to describe the kinematic trees. Finally, a concept of assembly operation that involves snap joining is simulated to analyze the performance of the system in real-time capability. The distribution of joint variables in spatial-space and time-space is analyzed. The results suggest that real-time tracking in human-robot collaborative assembly environments can be considered to maximize the safety of the human worker. However, the accuracy and reliability of the system regarding system disturbances need to be justified.
