A collection of scholarly literature on the topic "Human Motion Tracking, Markerless Motion Capture"

Format a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Human Motion Tracking, Markerless Motion Capture".

Next to each entry in the bibliography you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Human Motion Tracking, Markerless Motion Capture"

1

Snidaro, Lauro, Gian Luca Foresti, and Luca Chittaro. "Tracking Human Motion from Monocular Sequences." International Journal of Image and Graphics 08, no. 03 (July 2008): 455–71. http://dx.doi.org/10.1142/s0219467808003180.

Abstract:
In recent years, analysis of human motion has become an increasingly relevant research topic with applications as diverse as animation, virtual reality, security, and advanced human-machine interfaces. In particular, motion capture systems are well known nowadays since they are used in the movie industry. These systems require expensive multi-camera setups or markers to be worn by the user. This paper describes an attempt to provide a markerless, low-cost, real-time solution for home users. We propose a novel approach for robust detection and tracking of the user's body joints that exploits different algorithms as different sources of information and fuses their estimates with particle filters. This system may be employed for real-time animation of VRML or X3D avatars using an off-the-shelf digital camera and a standard PC.
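
The particle-filter fusion of several joint estimates described in this abstract can be illustrated with a minimal sketch (not the authors' code; the Gaussian likelihoods and the random-walk motion model are assumptions made here for illustration):

```python
import numpy as np

def particle_filter_step(particles, weights, detections, sigmas, motion_std=5.0):
    """One predict/update cycle fusing several 2D estimates of one body joint.

    particles  : (N, 2) candidate joint positions in pixels
    weights    : (N,) normalized particle weights
    detections : list of (x, y) estimates from independent detectors
    sigmas     : per-detector measurement noise (pixels), one value per detector
    """
    rng = np.random.default_rng()
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: multiply the Gaussian likelihood of every available detector.
    for det, sigma in zip(detections, sigmas):
        d2 = np.sum((particles - np.asarray(det, float)) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / sigma ** 2)
    weights = weights / weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    positions = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

# Example: fuse a colour-based and an edge-based estimate of an elbow position.
rng = np.random.default_rng(1)
parts = rng.normal([320.0, 240.0], 20.0, (500, 2))
w = np.full(500, 1.0 / 500)
parts, w = particle_filter_step(parts, w, [(330, 238), (326, 245)], [8.0, 12.0])
print("fused estimate:", parts.mean(axis=0))
```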
2

Tay, Chuan Zhi, King Hann Lim, and Jonathan Then Sien Phang. "Markerless gait estimation and tracking for postural assessment." Multimedia Tools and Applications 81, no. 9 (February 21, 2022): 12777–94. http://dx.doi.org/10.1007/s11042-022-12026-8.

Abstract:
Postural assessment is crucial in the sports screening system to reduce the risk of severe injury. Capturing the athlete's posture using computer vision attracts considerable attention in the sports community because it is markerless and interferes little with physical training. In this paper, a novel markerless gait estimation and tracking algorithm is proposed to locate human key-points in spatial-temporal sequences for gait analysis. First, human pose estimation is performed with the OpenPose network to detect 14 core key-points on the human body. The body-joint coordinates are normalized by the neck-to-pelvis distance to obtain camera-invariant key-points. These key-points are subsequently used to generate spatial-temporal sequences, which are fed into a Long Short-Term Memory network for gait recognition. An indexed person is tracked for quick local pose estimation and postural analysis. The proposed algorithm can automate the capture of human joints for postural assessment and analysis of human motion. The proposed system is implemented on an Intel Up Squared board and achieves up to 9 frames per second with 95% gait-recognition accuracy.
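
A minimal sketch of the neck-to-pelvis normalization step described above (the joint indices and the OpenPose-like 14-keypoint layout are assumptions, not taken from the paper):

```python
import numpy as np

def normalize_keypoints(keypoints, neck_idx=1, pelvis_idx=8):
    """Centre 2D keypoints on the pelvis and scale them by the neck-to-pelvis
    distance, making the representation roughly invariant to camera distance.
    (The joint indices follow a hypothetical OpenPose-like 14-keypoint layout.)
    """
    keypoints = np.asarray(keypoints, dtype=float)
    pelvis = keypoints[pelvis_idx]
    scale = np.linalg.norm(keypoints[neck_idx] - pelvis)
    return (keypoints - pelvis) / scale

# Stack normalized frames into a (T, K, 2) spatio-temporal sequence of the kind
# that could be fed to a recurrent (e.g. LSTM) gait classifier.
rng = np.random.default_rng(0)
frames = [rng.random((14, 2)) * 480 for _ in range(30)]
sequence = np.stack([normalize_keypoints(f) for f in frames])
print(sequence.shape)  # (30, 14, 2)
```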
3

Saboune, Jamal, and François Charpillet. "Markerless Human Motion Tracking from a Single Camera Using Interval Particle Filtering." International Journal on Artificial Intelligence Tools 16, no. 04 (August 2007): 593–609. http://dx.doi.org/10.1142/s021821300700345x.

Abstract:
In this paper we present a new approach for markerless human motion capture from conventional camera feeds. The aim of our study is to recover the 3D positions of key points of the body that can serve for gait analysis. Our approach is based on foreground extraction, an articulated body model, and particle filters. In order to remain generic and simple, no restrictive dynamic modeling was used. A new modified particle-filtering algorithm, which we call Interval Particle Filtering, was introduced to search the model configuration space efficiently: it reorganizes the configuration search space in an optimal deterministic way and proved to be efficient in tracking natural human movement. Results for human motion capture from a single camera are presented and compared with results obtained from a marker-based system. The system proved able to track motion successfully even with partial occlusions and outdoors.
4

Gionfrida, Letizia, Wan M. R. Rusli, Anil A. Bharath, and Angela E. Kedgley. "Validation of two-dimensional video-based inference of finger kinematics with pose estimation." PLOS ONE 17, no. 11 (November 3, 2022): e0276799. http://dx.doi.org/10.1371/journal.pone.0276799.

Abstract:
Accurate capture of finger movements for biomechanical assessments has typically been achieved within laboratory environments through the use of physical markers attached to a participant's hands. However, such requirements can narrow the broader adoption of movement tracking for kinematic assessment outside these laboratory settings, such as in the home. Thus, there is a need for markerless hand motion capture techniques that are easy to use and accurate enough to evaluate the complex movements of the human hand. Several recent studies have validated lower-limb kinematics obtained with a marker-free technique, OpenPose. This investigation examines the accuracy of OpenPose, when applied to images from single RGB cameras, against a 'gold standard' marker-based optical motion capture system that is commonly used for hand kinematics estimation. Participants completed four single-handed activities with right and left hands, including hand abduction and adduction, radial walking, metacarpophalangeal (MCP) joint flexion, and thumb opposition. The accuracy of finger kinematics was assessed using the root mean square error. Mean total active flexion was compared using the Bland–Altman approach and the coefficient of determination of linear regression. Results showed good agreement for the abduction-adduction and thumb-opposition activities. Lower agreement between the two methods was observed for the radial walking (mean difference between the methods of 5.03°) and MCP flexion (mean difference of 6.82°) activities, due to occlusion. This investigation demonstrated that OpenPose, applied to videos captured with monocular cameras, can be used for markerless finger tracking with an error below 11°, on the order of that which is accepted clinically.
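
The two agreement measures used in this study, RMSE and Bland–Altman limits of agreement, can be computed as in the following sketch (synthetic data; not the study's code):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two joint-angle series (degrees)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic example: marker-based reference vs. a video-based angle estimate.
rng = np.random.default_rng(0)
reference = np.linspace(0, 60, 200)               # e.g. MCP flexion, degrees
estimate = reference + rng.normal(0, 4, 200)
print("RMSE:", rmse(reference, estimate))
print("bias and limits of agreement:", bland_altman(estimate, reference))
```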
5

Connie, Tee, Timilehin B. Aderinola, Thian Song Ong, Michael Kah Ong Goh, Bayu Erfianto, and Bedy Purnama. "Pose-Based Gait Analysis for Diagnosis of Parkinson’s Disease." Algorithms 15, no. 12 (December 12, 2022): 474. http://dx.doi.org/10.3390/a15120474.

Abstract:
Parkinson’s disease (PD) is a neurodegenerative disorder that is more common in elderly people and affects motor control, flexibility, and how easily patients adapt to their walking environments. PD is progressive in nature, and if undetected and untreated, the symptoms grow worse over time. Fortunately, PD can be detected early using gait features since the loss of motor control results in gait impairment. In general, techniques for capturing gait can be categorized as computer-vision-based or sensor-based. Sensor-based techniques are mostly used in clinical gait analysis and are regarded as the gold standard for PD detection. The main limitation of using sensor-based gait capture is the associated high cost and the technical expertise required for setup. In addition, the subjects’ consciousness of worn sensors and being actively monitored may further impact their motor function. Recent advances in computer vision have enabled the tracking of body parts in videos in a markerless motion capture scenario via human pose estimation (HPE). Although markerless motion capture has been studied in comparison with gold-standard motion-capture techniques, it is yet to be evaluated in the prediction of neurological conditions such as PD. Hence, in this study, we extract PD-discriminative gait features from raw videos of subjects and demonstrate the potential of markerless motion capture for PD prediction. First, we perform HPE on the subjects using AlphaPose. Then, we extract and analyse eight features, from which five features are systematically selected, achieving up to 93% accuracy, 96% precision, and 92% recall in arbitrary views.
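
A hedged illustration of turning estimated ankle trajectories into simple gait descriptors (the cadence and asymmetry features below are illustrative assumptions, not the eight features analysed in the paper):

```python
import numpy as np

def simple_gait_features(ankle_y_left, ankle_y_right, fps=30.0):
    """Two illustrative gait descriptors from vertical ankle trajectories:
    cadence (steps per minute) and step-time asymmetry."""
    def step_times(signal):
        s = np.asarray(signal, float)
        # Heel-strike proxy: local minima of the vertical ankle coordinate.
        strikes = np.where((s[1:-1] < s[:-2]) & (s[1:-1] < s[2:]))[0] + 1
        return np.diff(strikes) / fps
    left, right = step_times(ankle_y_left), step_times(ankle_y_right)
    cadence = 60.0 / np.mean(np.concatenate([left, right]))
    asymmetry = abs(left.mean() - right.mean()) / max(left.mean(), right.mean())
    return cadence, asymmetry

# Synthetic ankle trajectories with a ~1 Hz step frequency at 30 fps.
t = np.arange(0, 10, 1 / 30)
left_y = np.cos(2 * np.pi * 1.0 * t)
right_y = np.cos(2 * np.pi * 1.0 * t + np.pi)
print(simple_gait_features(left_y, right_y))
```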
6

Guidolin, Mattia, Emanuele Menegatti, and Monica Reggiani. "UNIPD-BPE: Synchronized RGB-D and Inertial Data for Multimodal Body Pose Estimation and Tracking." Data 7, no. 6 (June 9, 2022): 79. http://dx.doi.org/10.3390/data7060079.

Abstract:
The ability to estimate human motion without requiring any external on-body sensor or marker is of paramount importance in a variety of fields, ranging from human–robot interaction and Industry 4.0 to surveillance and telerehabilitation. The recent development of portable, low-cost RGB-D cameras pushed forward the accuracy of markerless motion capture systems. However, despite the widespread use of such sensors, a dataset including complex scenes with multiple interacting people, recorded with a calibrated network of RGB-D cameras and an external system for assessing the pose estimation accuracy, is still missing. This paper presents the University of Padova Body Pose Estimation dataset (UNIPD-BPE), an extensive dataset for multi-sensor body pose estimation containing both single-person and multi-person sequences with up to 4 interacting people. A network with 5 Microsoft Azure Kinect RGB-D cameras is exploited to record synchronized high-definition RGB and depth data of the scene from multiple viewpoints, as well as to estimate the subjects' poses using the Azure Kinect Body Tracking SDK. Simultaneously, full-body Xsens MVN Awinda inertial suits allow obtaining accurate poses and anatomical joint angles, while also providing raw data from the 17 IMUs required by each suit. This dataset aims to push forward the development and validation of multi-camera markerless body pose estimation and tracking algorithms, as well as multimodal approaches focused on merging visual and inertial data.
7

Zhang, Dianyong, Zhenjiang Miao, Shengyong Chen, and Lili Wan. "Optimization and Soft Constraints for Human Shape and Pose Estimation Based on a 3D Morphable Model." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/715808.

Abstract:
We propose an approach to multi-view markerless motion capture based on a 3D morphable human model. This morphable model was learned from a database of registered 3D body scans in different shapes and poses. Pose variation of the body shape is implemented through the defined underlying skeleton. At the initialization step, we adapt the 3D morphable model to the multi-view images by changing its shape and pose parameters. Then, for the tracking step, we combine local and global algorithms to perform pose estimation and surface tracking, and we add human pose prior information as a soft constraint on the energy of a particle. When an error occurs after the local algorithm, it can be corrected using fewer particles and iterations. We demonstrate the improvements with estimation results from a multi-view image sequence.
8

Baclig, Maria Martine, Noah Ergezinger, Qipei Mei, Mustafa Gül, Samer Adeeb, and Lindsey Westover. "A Deep Learning and Computer Vision Based Multi-Player Tracker for Squash." Applied Sciences 10, no. 24 (December 9, 2020): 8793. http://dx.doi.org/10.3390/app10248793.

Abstract:
Sports pose a unique challenge for high-speed, unobtrusive, uninterrupted motion tracking due to speed of movement and player occlusion, especially in the fast and competitive sport of squash. The objective of this study is to use video tracking techniques to quantify kinematics in elite-level squash. With the increasing availability and quality of elite tournament matches filmed for entertainment purposes, a new methodology of multi-player tracking for squash that only requires broadcast video as an input is proposed. This paper introduces and evaluates a markerless motion capture technique using an autonomous deep learning based human pose estimation algorithm and computer vision to detect and identify players. Inverse perspective mapping is utilized to convert pixel coordinates to court coordinates, and the distance traveled, court position, 'T' dominance, and average speeds of elite squash players are determined. The method was validated against results from a previous study using manual tracking, where the proposed method (filtered coordinates) displayed an average absolute percent error relative to the manual approach of 3.73% in total distance traveled, and of 3.52% and 1.26% in average speeds <9 m/s with and without speeds <1 m/s, respectively. The method has proven to be the most effective in collecting kinematic data of elite squash players in a timely manner, with no special camera setup and limited manual intervention.
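
The inverse perspective mapping step can be sketched as a planar homography fitted from pixel/court correspondences (the correspondences below are hypothetical; the court dimensions are the standard ones for squash):

```python
import numpy as np

def fit_homography(pixel_pts, court_pts):
    """Direct linear transform: homography mapping pixel to court coordinates
    from at least four correspondences."""
    rows = []
    for (u, v), (x, y) in zip(pixel_pts, court_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)

def pixel_to_court(H, u, v):
    """Map a pixel (e.g. a player's feet) to metric court coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical correspondences: the four floor corners of a 6.4 x 9.75 m squash court.
pixel_pts = [(120, 600), (860, 610), (700, 320), (260, 315)]
court_pts = [(0, 0), (6.4, 0), (6.4, 9.75), (0, 9.75)]
H = fit_homography(pixel_pts, court_pts)
print(pixel_to_court(H, 480, 455))   # court position of a tracked player
```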
9

Pueo, Basilio, and Jose Manuel Jimenez-Olmedo. "Application of motion capture technology for sport performance analysis (El uso de la tecnología de captura de movimiento para el análisis del rendimiento deportivo)." Retos, no. 32 (March 14, 2017): 241–47. http://dx.doi.org/10.47197/retos.v0i32.56072.

Abstract:
In sport performance, motion capture aims at tracking and recording athletes' motion in real time to analyze physical condition, athletic performance, technical expertise, and injury mechanisms, prevention and rehabilitation. The aim of this paper is to systematically review the latest developments of motion capture systems for the analysis of sport performance. To that end, selected keywords were searched on studies published in the last four years in the electronic databases ISI Web of Knowledge, Scopus, PubMed and SPORTDiscus, which resulted in 892 potential records. After duplicate removal and screening of the remaining records, 81 journal papers were retained for inclusion in this review, distributed as 53 records for optical systems, 15 records for non-optical systems and 13 records for markerless systems. The resulting records were screened to distribute them according to the following analysis categories: biomechanical motion analysis, validation of new systems and performance enhancement. Although optical systems are regarded as the gold standard for their accurate results, the cost of equipment and the time needed to capture and postprocess data have led researchers to test other technologies. First, non-optical systems rely on attaching sensors to body parts to send their spatial information to a computer wirelessly by means of different technologies, such as electromagnetic and inertial (accelerometry) sensing. Finally, markerless systems are adequate for free, unobtrusive motion analysis since no attachment is carried by athletes. However, more sensors and more sophisticated signal processing must be used to reach the expected level of accuracy.
10

Chen, L., B. Wu, and Y. Zhao. "A REAL-TIME PHOTOGRAMMETRIC SYSTEM FOR MONITORING HUMAN MOVEMENT DYNAMICS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 561–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-561-2020.

Abstract:
The human body posture is rich with dynamic information that can be captured by algorithms, and many applications rely on this type of data (e.g., action recognition, people re-identification, human-computer interaction, industrial robotics). The recent development of smart cameras and affordable red-green-blue-depth (RGB-D) sensors has enabled cost-efficient estimation and tracking of human body posture. However, the reliability of single sensors is often insufficient due to occlusion problems, field-of-view limitations, and the limited measurement distances of the RGB-depth sensors. Furthermore, a large-scale real-time response is often required in certain applications, such as physical rehabilitation, where human actions must be detected and monitored over time, or in industries where human motion is monitored to maintain predictable movement flow in a shared workspace. Large-scale markerless motion-capture systems have therefore received extensive research attention in recent years. In this paper, we propose a real-time photogrammetric system that incorporates multithreading and a graphics processing unit (GPU)-accelerated solution for extracting 3D human body dynamics in real time. The system includes a stereo camera with preliminary calibration, from which left-view and right-view frames are loaded. Then, a dense image-matching algorithm is combined with GPU acceleration to generate a real-time disparity map, which is further extended to a 3D map array obtained by photogrammetric processing based on the camera orientation parameters. The 3D body features are acquired from 2D body skeletons extracted by regional multi-person pose estimation (RMPE) and the corresponding 3D coordinates of each joint in the 3D map array. These 3D body features are then extracted and visualised in real time by multithreading, from which human movement dynamics (e.g., moving speed, knee pressure angle) are derived. The results reveal that the process rate (pose frame rate) can be 20 fps (frames per second) or above in our experiments (using two NVIDIA 2080Ti GPUs and two 12-core CPUs) depending on the GPU exploited by the detector, and the monitoring distance can reach 15 m with a geometric accuracy better than 1% of the distance. This real-time photogrammetric system is an effective real-time solution to monitor 3D human body dynamics. It uses low-cost RGB stereo cameras controlled by consumer GPU-enabled computers, and no other specialised hardware is required. This system has great potential for applications such as motion tracking, 3D body information extraction and human dynamics monitoring.
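
A minimal sketch of the photogrammetric back-projection step, turning a pixel with stereo disparity into a 3D joint position and a joint speed (the calibration values are assumptions, not those of the paper):

```python
import numpy as np

def disparity_to_point(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with stereo disparity (pixels) from a rectified
    stereo pair into camera-frame 3D coordinates (metres)."""
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

def joint_speed(p_prev, p_curr, dt):
    """Instantaneous speed of a tracked joint between two frames (m/s)."""
    return float(np.linalg.norm(p_curr - p_prev) / dt)

# Hypothetical calibration and a knee joint observed in two consecutive frames.
fx = fy = 1000.0
cx, cy = 640.0, 360.0
baseline = 0.12          # metres between the two cameras
p0 = disparity_to_point(700, 420, 16.0, fx, fy, cx, cy, baseline)
p1 = disparity_to_point(705, 418, 15.8, fx, fy, cx, cy, baseline)
print("knee speed:", joint_speed(p0, p1, dt=1 / 20), "m/s")  # 20 fps
```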

Dissertations on the topic "Human Motion Tracking, Markerless Motion Capture"

1

Sundaresan, Aravind. "Towards markerless motion capture: model estimation, initialization and tracking." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7279.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
2

Yang, Lin. "3D Sensing and Tracking of Human Gait." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32540.

Abstract:
Motion capture technology has been applied in many fields, such as animation, medicine, and the military, since it was first proposed in the 1970s. Based on the principles applied, motion capture technology is generally classified into six categories: 1) optical; 2) inertial; 3) magnetic; 4) mechanical; 5) acoustic; and 6) markerless. Unlike the other five kinds of motion capture technologies, which try to track the paths of specific points with different equipment, markerless systems recognize human or non-human body motion with vision-based technology that focuses on analyzing and processing the captured images. The user does not need to wear any equipment and is free to perform any action in an extensible measurement area while a markerless motion capture system is working. Though this kind of system is considered the preferred solution for motion capture, realizing an effective, high-accuracy markerless system is much more difficult than with the other technologies mentioned, which makes markerless motion capture a popular research direction. The Microsoft Kinect sensor has attracted much attention since the launch of its first version, because its depth-sensing feature gives the sensor the ability to perform motion capture without any extra devices. Recently, Microsoft released a new version of the Kinect sensor with improved hardware, targeted at the consumer market. However, to the best of our knowledge, the accuracy of this sensor has not been assessed since it was released. In this thesis, we measure the depth accuracy of the newly released Kinect v2 depth sensor from different aspects and propose a trilateration method to improve the depth accuracy with multiple Kinects simultaneously. Based on the trilateration method, a low-cost, easy-to-set-up human gait tracking system that requires no wearable equipment is realized.
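
A minimal least-squares trilateration sketch of the kind the thesis proposes for combining range measurements from several depth sensors (the sensor layout and values are assumptions made here for illustration):

```python
import numpy as np

def trilaterate(centres, ranges):
    """Linear least-squares trilateration: recover a 3D point from its distances
    to several sensors with known, non-coplanar positions (four or more)."""
    centres = np.asarray(centres, float)
    ranges = np.asarray(ranges, float)
    c0, r0 = centres[0], ranges[0]
    # Subtracting the first sphere equation from the others linearises the system.
    A = 2.0 * (centres[1:] - c0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(centres[1:] ** 2, axis=1) - np.sum(c0 ** 2))
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point

# Hypothetical layout of four depth sensors around a walkway.
sensors = [(0.0, 0.0, 2.2), (4.0, 0.0, 1.8), (4.0, 3.0, 2.4), (0.0, 3.0, 1.6)]
target = np.array([2.0, 1.5, 1.0])
distances = [np.linalg.norm(target - np.asarray(s)) for s in sensors]
print(trilaterate(sensors, distances))   # approximately [2.0, 1.5, 1.0]
```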
3

Elanattil, Shafeeq. "Non-rigid 3D reconstruction of the human body in motion." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/205095/1/Shafeeq_Elanattil_Thesis.pdf.

Abstract:
This thesis addresses the challenging problem of three-dimensional (3-D) reconstruction of a fast-moving human, using a single moving camera which captures both depth and colour information (RGB-D). Our objective is to find solutions to the challenges arising from the high camera motion and articulated human motions. We have developed an effective system which uses the camera pose, skeleton detection, and multi-scale information, to produce a robust reconstruction framework for 3-D modelling of fast-moving humans. The outcome of the research is useful for several applications of human performance capture systems in sports, the arts, and animation industries.
4

Manivasagam, Karnica. "COMPARISON OF WRIST VELOCITY MEASUREMENT METHODS: IMU, GONIOMETER AND OPTICAL MOTION CAPTURE SYSTEM." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287178.

Abstract:
Repetitive tasks, awkward hand/wrist postures and forceful exertions are known risk factors for work-related musculoskeletal disorders (WMSDs) of the hand and wrist. WMSDs are a major cause of long work absence, productivity loss, loss in wages and individual suffering. Currently available methods for assessing hand/wrist motion have the limitation of being either inaccurate, e.g. when using self-reports or observations, or expensive and resource-demanding in the subsequent analyses, e.g. when using electrogoniometers. Therefore, there is a need for a risk assessment method that is easy to use and can be applied by both researchers and practitioners for measuring wrist angular velocity during an 8-hour working day. Wearable Inertial Measurement Units (IMUs) in combination with mobile phone applications provide the possibility for such a method. In order to apply the IMU in the field for assessing the wrist velocity of different work tasks, the accuracy of the method needs to be examined. Therefore, this laboratory experiment was conducted to compare a new IMU-based method with a traditional goniometer and a standard optical motion capture system. The laboratory experiment was performed on twelve participants. Three standard hand movements, including hand/wrist motion of flexion-extension (FE), deviation, and pronation-supination (PS) at 30, 60 and 90 beats per minute (bpm), and three simulated work tasks were performed. The angular velocities of the three methods at the 50th and 90th percentiles were calculated and compared. The mean absolute error and correlation coefficient were analysed for comparing the methods. Error increased with increasing speed/bpm during the standard hand movements. For standard hand movements, the comparison between IMUbyaxis and the goniometer had the smallest difference and the highest correlation coefficient. For simulated work tasks, the difference between the goniometer and the optical system was the smallest; however, the differences between the compared methods were in general much larger than for the standard hand movements. The IMU-based method shows potential when compared with the traditional measurement methods, but it needs further improvement before it can be used for risk assessment in the field.
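
Percentiles of wrist angular velocity, the summary measure compared in this thesis, can be computed from a sampled wrist-angle signal as in this sketch (synthetic signal; not the thesis code):

```python
import numpy as np

def angular_velocity_percentiles(angle_deg, fs, percentiles=(50, 90)):
    """Percentiles of the absolute wrist angular velocity (deg/s) computed from
    a wrist-angle signal sampled at fs Hz."""
    velocity = np.abs(np.gradient(np.asarray(angle_deg, float), 1.0 / fs))
    return np.percentile(velocity, percentiles)

# Synthetic flexion-extension movement at about 1 Hz, sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
angle = 30.0 * np.sin(2 * np.pi * 1.0 * t)
p50, p90 = angular_velocity_percentiles(angle, fs=100)
print(f"50th percentile: {p50:.1f} deg/s, 90th percentile: {p90:.1f} deg/s")
```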
5

Efstratiou, Panagiotis. "Skeleton Tracking for Sports Using LiDAR Depth Camera." Thesis, KTH, Medicinteknik och hälsosystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297536.

Abstract:
Skeletal tracking can be accomplished by deploying human pose estimation strategies. Deep learning has proven to be the leading approach in this realm, and in combination with a light detection and ranging (LiDAR) depth camera the development of a markerless motion analysis software system appears feasible. The project utilizes a trained convolutional neural network to track humans performing sport activities and to provide feedback after biomechanical analysis. Implementations of four filtering methods, chosen with respect to the nature of the movement, are presented: a Kalman filter, a fixed-interval smoother, a Butterworth filter and a moving-average filter. The software proves practicable in the field, evaluating videos at 30 Hz, as demonstrated on indoor cycling and hammer-throwing events. A non-static camera behaves quite well with a stationary, upright person, with mean absolute errors of 8.32% and 6.46% for the left and right knee angles, respectively. An impeccable system would benefit not only the sports domain but also the health industry as a whole.
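
Two of the filtering methods mentioned above, a Butterworth filter and a moving average, applied to a noisy joint-angle signal (a sketch with assumed cutoff and window parameters, not the thesis implementation):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_joint_angle(angle_deg, fs, cutoff_hz=6.0, window=5):
    """Apply a zero-lag low-pass Butterworth filter and a simple moving average
    to a joint-angle signal (illustrative cutoff and window values)."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    butterworth = filtfilt(b, a, angle_deg)                 # zero-phase filtering
    moving_avg = np.convolve(angle_deg, np.ones(window) / window, mode="same")
    return butterworth, moving_avg

# Noisy knee-angle signal sampled at 30 Hz (the frame rate reported above).
rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 1.0 / 30.0)
raw = 60.0 + 30.0 * np.sin(2 * np.pi * 0.8 * t) + rng.normal(0.0, 3.0, t.size)
smoothed_bw, smoothed_ma = smooth_joint_angle(raw, fs=30.0)
print(smoothed_bw[:5])
```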
6

Palm, Daniel. "Eye on the Prize : Enhancing Realism during Interaction towards Non-Player-Characters with Natural Eye Movements." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3680.

Abstract:
In the wake of motion capture and visual animation, one aspect seems to be lacking: a realistic representation of the dynamic and unpredictable nature of human visual perception. A human examines her surroundings in an unpredictable, saliency-based and top-down, task-oriented manner. In the fields of computer science, interaction design and game development, great leaps have been taken when it comes to capturing the motion of bodies. Motion capture helps developers and film makers portray realistic humans in virtual environments. Where motion has come far, eyes and perception have not: a virtual representation of human visual perception has not yet been mimicked as closely as body motion. This thesis examines perceived realism in virtual agents, with a focus on eye motion. In this study a virtual agent was given the eye movements of human beings and compared to an agent based on current virtual agents in games. This is a first step towards synthesizing more than just human motion in virtual agents, and it provides future research with the data and tools needed to produce an algorithm based on the gathered data. Prior context research includes a study of current games. Two participant experiments were conducted, both of which recorded eye-position data for analysis. The first experiment helps build the second, as it compares virtual agents using a Likert scale for a subjective rating of realism. The results offer data that lie at the core of the study as well as data for further studies. While statistical analyses of Likert scales might be considered ambiguous, this study has performed them and reached a conclusion: a virtual agent enhanced with eye motion based on human eye movements does portray more realistic, human-like behaviour.
7

Lu, Yifan. "Inferring Human Pose and Motion from Images." Phd thesis, 2011. http://hdl.handle.net/1885/8222.

Abstract:
As optical gesture recognition technology advances, touchless human-computer interfaces of the future will soon become a reality. One particular technology, markerless motion capture, has gained a large amount of attention, with widespread application in diverse disciplines, including medical science, sports analysis, advanced user interfaces, and virtual arts. However, the complexity of human anatomy makes markerless motion capture a non-trivial problem: I) the parameterised pose configuration exhibits high dimensionality, and II) there is considerable ambiguity in the surjective inverse mapping from observation to pose configuration spaces with a limited number of camera views. These factors together lead to multimodality in a high-dimensional space, making markerless motion capture an ill-posed problem. This study addresses these difficulties by introducing a new framework. It begins by automatically building subject-specific template models and calibrating posture at the initial stage. Subsequent tracking is accomplished by embedding naturally-inspired global optimisation into a sequential Bayesian filtering framework. Tracking is enhanced by several robust evaluation improvements, and the sparsity of images is managed by compressive evaluation, further improving computational efficiency in the high-dimensional space.
8

Drory, Ami. "Computer Vision and Machine Learning for Biomechanics Applications : Human Detection, Pose and Shape Estimation and Tracking in Unconstrained Environment From Uncalibrated Images, Videos and Depth." Phd thesis, 2017. http://hdl.handle.net/1885/129415.

Abstract:
Motivation. In biomechanics, musculoskeletal models yield information that cannot be non-invasively obtained by direct measurement based on skeletal kinematics. Unsatisfactorily, obtaining accurate skeletal kinematics is limited to either manual user labelling or marker-based motion capture systems (MoCaps) that are limited by expansive infrastructure, environmental conditions, obtrusive markers causing movement impediment, and occlusion errors. Moreover, they cannot yield the surface geometry that is critical for many biomechanical applications. To advance the state of knowledge, real-time user-free acquisition of individualised pose and surface geometry is currently needed, and motivates our work in this thesis. Aims. The goals of this dissertation are: 1) to explore how advances in computer vision and machine learning algorithms can be leveraged to provide the necessary framework for in-natura acquisition of skeletal kinematics; 2) in a challenge to the traditional reliance of biomechanics modelling on skeletal pose only, to explore how computer vision algorithms can be used to develop a shape recovery framework; and 3) to demonstrate the potential of human detection, tracking, pose estimation and surface recovery techniques to address open problems in biomechanics. Contributions. We demonstrate skeletal pose estimation from monocular images in challenging environments under a discriminative pictorial structure framework. We extend the flexible part-based approach to explicitly model human-object interaction. Our empirical performance results show that our proposed extension to the technique improves pose estimation. Further, we develop a hybrid framework for human detection and shape recovery using a discriminative deformable part-based model for detection with learnt shape and appearance model priors for shape recovery from monocular images. We also develop a real-time framework for simultaneous activity recognition, pose estimation and shape recovery using information from a structured light sensor. For a demonstrator application, we develop a theoretical model that uses the recovered shape to solve downstream open questions in biomechanics. Finally, we develop object detection and tracking in a particularly challenging environment from image sequences that include rapid shot and view transitions, using complementary trained discriminative classifiers. We apply our techniques to the human ambulatory modalities of cycling and kayaking because they are common in both the clinical and sports biomechanics settings, but are rarely studied because they present unique challenges. Specifically, many applied problems relating to those modalities remain open due to the absence of robust markerless motion capture that can recover skeletal kinematics and surface geometry in-natura. Impact. The developed methods can subsequently provide new insights into open applied problems, such as enhancing the understanding of bluff-body (specifically cycling) aerodynamics and kayaking performance. More importantly, we believe that from a higher-level standpoint, our full-body human shape modelling and surface recovery represent a significant paradigm shift in biomechanical modelling, which traditionally relies on skeletal pose only. The knowledge gained is intended to form the foundation for the development of evidence-based decision support tools for diagnosis and treatment through enhanced understanding of human motion. We envision that these methods will have a transformative effect on the field of biomechanics, analogously to the effect of medical imaging on the field of medicine.
9

Hsu, Meng-Chin (許孟勤). "A Toolkit for Human Posture Comparison with Consumer Markerless Motion Capture Device." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/44206594145711875143.

10

"Dual bayesian and morphology-based approach for markerless human motion capture in natural interaction environments." Université catholique de Louvain, 2006. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-06272006-123215/.


Book chapters on the topic "Human Motion Tracking, Markerless Motion Capture"

1

Mündermann, Lars, Stefano Corazza, and Thomas P. Andriacchi. "Markerless Motion Capture for Biomechanical Applications." In Human Motion, 377–98. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6693-1_15.

2

Fua, Pascal. "Markerless 3D Human Motion Capture from Images." In Encyclopedia of Biometrics, 1–7. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-27733-7_38-3.

3

Azad, Pedram. "Stereo-Based Markerless Human Motion Capture System." In Cognitive Systems Monographs, 147–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04229-4_7.

4

Fua, P. "Markerless 3D Human Motion Capture from Images." In Encyclopedia of Biometrics, 958–63. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_38.

5

Fua, Pascal. "Markerless 3D Human Motion Capture from Images." In Encyclopedia of Biometrics, 1117–22. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_38.

6

John, Vijay, Spela Ivekovic, and Emanuele Trucco. "Markerless Human Motion Capture Using Hierarchical Particle Swarm Optimisation." In Communications in Computer and Information Science, 343–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11840-1_25.

7

Krzeszowski, Tomasz, Bogdan Kwolek, Agnieszka Michalczuk, Adam Świtoński, and Henryk Josiński. "View Independent Human Gait Recognition Using Markerless 3D Human Motion Capture." In Computer Vision and Graphics, 491–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33564-8_59.

8

Colombo, Giorgio, Daniele Regazzoni, and Caterina Rizzi. "Markerless Motion Capture Integrated with Human Modeling for Virtual Ergonomics." In Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management. Human Body Modeling and Ergonomics, 314–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39182-8_37.

9

Jaeggli, Tobias, Esther Koller-Meier, and Luc Van Gool. "Multi-activity Tracking in LLE Body Pose Space." In Human Motion – Understanding, Modeling, Capture and Animation, 42–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75703-0_4.

10

Rogez, Grégory, Ignasi Rius, Jesús Martínez-del-Rincón, and Carlos Orrite. "Exploiting Spatio-temporal Constraints for Robust 2D Pose Tracking." In Human Motion – Understanding, Modeling, Capture and Animation, 58–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75703-0_5.


Conference papers on the topic "Human Motion Tracking, Markerless Motion Capture"

1

Regazzoni, Daniele, Andrea Vitali, Filippo Colombo Zefinetti, and Caterina Rizzi. "Gait Analysis in the Assessment of Patients Undergoing a Total Hip Replacement." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-10491.

Abstract:
Nowadays, healthcare centers are not familiar with quantitative approaches for patients' gait evaluation, and there is a clear need for methods to obtain objective figures characterizing patients' performance. In practice, there are no widespread methods for comparing the pre- and post-operative conditions of the same patient, integrating clinical information and providing a measure of the efficiency of functional recovery, especially in the short term after the surgical intervention. To this aim, human motion tracking for medical analysis is creating new frontiers for potential clinical and home applications. Motion capture (Mocap) systems are used to detect and track human body movements, such as gait or any other gesture or posture in a specific context. In particular, low-cost portable systems can be adopted for the tracking of patients' movements. The pipeline going from tracking the scene to the creation of performance scores and indicators has its main challenge in the data elaboration, which depends on the specific context and on the specific performance to be evaluated. The main objective of this research is to investigate whether the evaluation of the patient's gait through markerless optical motion capture technology can be added to clinical evaluation scores and whether it can provide a quantitative measure of recovery in the short postoperative period. A system has been conceived, including commercial sensors and a way to elaborate the captured data according to caregivers' requirements. This allows transforming the real gait of a patient right before and/or after the surgical procedure into a set of scores of medical relevance for his/her evaluation. The technical solution developed in this research will be the basis for a large acquisition and data elaboration campaign performed in collaboration with an orthopedic team of surgeons specialized in hip arthroplasty. This will also allow assessing and comparing the short-run results obtained by adopting different state-of-the-art surgical approaches for hip replacement.
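
A hedged sketch of how tracked pelvis trajectories could be turned into a simple pre/post-operative recovery score (the metric and the values are illustrative assumptions, not the scores used by the authors):

```python
import numpy as np

def gait_speed(pelvis_xy_m, fps):
    """Mean walking speed (m/s) from a tracked pelvis trajectory in metres."""
    steps = np.diff(np.asarray(pelvis_xy_m, float), axis=0)
    total_distance = float(np.sum(np.linalg.norm(steps, axis=1)))
    return total_distance * fps / (len(pelvis_xy_m) - 1)

def recovery_ratio(pre_speed, post_speed):
    """Simple pre/post-operative score: values above 1 mean faster walking after surgery."""
    return post_speed / pre_speed

# Hypothetical pelvis trajectories captured at 30 fps before and after surgery.
pre_op = [(0.015 * i, 0.0) for i in range(200)]    # about 0.45 m/s
post_op = [(0.025 * i, 0.0) for i in range(200)]   # about 0.75 m/s
print(recovery_ratio(gait_speed(pre_op, 30), gait_speed(post_op, 30)))
```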
2

Huo, Feifei, Emile Hendriks, Pavel Paclik, and A. H. J. Oomes. "Markerless human motion capture and pose recognition." In 2009 10th Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS). IEEE, 2009. http://dx.doi.org/10.1109/wiamis.2009.5031420.

3

Li, Jia, Zhenjiang Miao, and Chengkai Wan. "Markerless human body motion capture using multiple cameras." In 2008 9th International Conference on Signal Processing (ICSP 2008). IEEE, 2008. http://dx.doi.org/10.1109/icosp.2008.4697410.

4

"GPU-accelerated Real-time Markerless Human Motion Capture." In International Conference on Computer Graphics Theory and Applications. SciTePress - Science and and Technology Publications, 2013. http://dx.doi.org/10.5220/0004282203970401.

5

Liu, Haiying, and Rama Chellappa. "Markerless Monocular Tracking of Articulated Human Motion." In 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366002.

6

Azad, P., T. Asfour, and R. Dillmann. "Robust real-time stereo-based markerless human motion capture." In 2008 8th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008). IEEE, 2008. http://dx.doi.org/10.1109/ichr.2008.4755975.

7

Oguchi, Kimio, and Keita Akimoto. "Feasible Human Recognition by Using Low-cost Markerless Motion Capture." In 2018 9th IEEE Control and System Graduate Research Colloquium (ICSGRC). IEEE, 2018. http://dx.doi.org/10.1109/icsgrc.2018.8657582.

8

Wan, Chengkai, Baozong Yuan, and Zhenjiang Miao. "Model-Based Markerless Human Body Motion Capture using Multiple Cameras." In Multimedia and Expo, 2007 IEEE International Conference on. IEEE, 2007. http://dx.doi.org/10.1109/icme.2007.4284846.

9

Mündermann, Lars, Stefano Corazza, Ajit M. Chaudhari, Thomas P. Andriacchi, Aravind Sundaresan, and Rama Chellappa. "Measuring human movement for biomechanical applications using markerless motion capture." In Electronic Imaging 2006, edited by Brian D. Corner, Peng Li, and Matthew Tocheri. SPIE, 2006. http://dx.doi.org/10.1117/12.650854.

10

Dong, Yuanqiang, and Guilherme N. DeSouza. "A new hierarchical particle filtering for markerless human motion capture." In 2009 IEEE Workshop on Computational Intelligence for Visual Intelligence (CIVI). IEEE, 2009. http://dx.doi.org/10.1109/civi.2009.4938980.
