Academic literature on the topic 'Motion-Capturing, human motion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Motion-Capturing, human motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Motion-Capturing, human motion"

1

Cheng, Zhiqing, Anthony Ligouri, Ryan Fogle, and Timothy Webb. "Capturing Human Motion in Natural Environments." Procedia Manufacturing 3 (2015): 3828–35. http://dx.doi.org/10.1016/j.promfg.2015.07.886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hsu, Shih-Chung, Jun-Yang Huang, Wei-Chia Kao, and Chung-Lin Huang. "Human body motion parameters capturing using kinect." Machine Vision and Applications 26, no. 7-8 (September 21, 2015): 919–32. http://dx.doi.org/10.1007/s00138-015-0710-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Adelsberger, Rolf. "Capturing human motion one step at a time." XRDS: Crossroads, The ACM Magazine for Students 20, no. 2 (December 2013): 38–42. http://dx.doi.org/10.1145/2538692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Park, Sang Il, and Jessica K. Hodgins. "Capturing and animating skin deformation in human motion." ACM Transactions on Graphics 25, no. 3 (July 2006): 881–89. http://dx.doi.org/10.1145/1141911.1141970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Abson, Karl, and Ian Palmer. "Motion capture: capturing interaction between human and animal." Visual Computer 31, no. 3 (March 22, 2014): 341–53. http://dx.doi.org/10.1007/s00371-014-0929-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lim, Chee Kian, Zhiqiang Luo, I.-Ming Chen, and Song Huat Yeo. "Wearable wireless sensing system for capturing human arm motion." Sensors and Actuators A: Physical 166, no. 1 (March 2011): 125–32. http://dx.doi.org/10.1016/j.sna.2010.10.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhou, Dong, Zhi Qi Guo, Mei Hui Wang, and Chuan Lv. "Human Factors Assessment Based on the Technology of Human Motion Capturing." Applied Mechanics and Materials 44-47 (December 2010): 532–36. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.532.

Full text
Abstract:
We aim to combine human motion capture technology with virtual reality technology to carry out human factors assessment. The distinctive point of this method is that not only can reliable data on the maintenance worker be obtained, but a quantitative analytic result based on the virtual environment can be obtained as well. In this paper, human motion capture technology, ergonomics evaluation, and interface technology are considered comprehensively. An overall technical programme for human factors evaluation based on human motion capture technology is carried out. The innovations are the techniques of translating captured human motion data into the virtual environment, building the virtual human model, and simulating the virtual human, all based on data captured at the working site; the replication of captured data in the virtual environment is also achieved. Quantitative analysis of workers' working postures, fatigue, and human force and torque in the maintenance process, based on human factors evaluation using the data captured at the working site, is researched (a sketch of the underlying joint-angle computation follows this entry). We have verified the feasibility of this technology through an example. The method provides a new approach and operational technology for human factors assessment in the maintenance of aviation equipment.
APA, Harvard, Vancouver, ISO, and other styles
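At its core, the posture analysis this abstract describes reduces to computing joint angles from captured marker positions. Below is a minimal sketch of that step; the marker names, coordinates, and NumPy-based formulation are our own illustration, not code from the cited paper.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint`, formed by three 3D marker positions."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical elbow angle from shoulder, elbow, and wrist markers (metres).
shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.5, 1.3, 0.1]
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```

Time series of such angles, computed per frame from the captured data, are what ergonomics scoring schemes then evaluate.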
8

Wang, Song Shan, and Yan Qing Qi. "A Virtual Human's Driving Method Based on Motion Capture Data." Advanced Materials Research 711 (June 2013): 500–505. http://dx.doi.org/10.4028/www.scientific.net/amr.711.500.

Full text
Abstract:
This article presents a method for driving a virtual human with motion capture data in the Jack software. First, Jack's skeleton model is simplified to match the skeleton of the captured BVH data. Second, an Euler-angle rotation equation is set up to map joint angles between BVH and Jack (see the sketch after this entry). Finally, the method is implemented in code, and an example shows that it can improve Jack's human motion simulation using the captured data.
APA, Harvard, Vancouver, ISO, and other styles
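The joint-angle mapping step in this abstract can be illustrated with a rotation-convention conversion. The sketch below conveys the general idea using SciPy; the BVH channel order and the target convention are assumptions, not the authors' actual equation for Jack.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def bvh_to_target_euler(z, x, y, target_order="XYZ"):
    """Convert a BVH joint rotation (intrinsic Z-X-Y Euler angles, degrees)
    into Euler angles in the convention assumed for the target skeleton."""
    r = Rotation.from_euler("ZXY", [z, x, y], degrees=True)
    return r.as_euler(target_order, degrees=True)

# One captured elbow rotation, remapped for the target model.
print(bvh_to_target_euler(30.0, 10.0, -5.0))
```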
9

Su, Hai Long, and Da Wei Zhang. "Study on Error Compensation of Human Motion Analysis System." Applied Mechanics and Materials 48-49 (February 2011): 1149–53. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.1149.

Full text
Abstract:
A human motion capture system based on a new kind of error compensation technique was developed and used to assess human motion. On the basis of three-dimensional reconstruction and the essential factors influencing measurement accuracy, a measurement error theory was established and the human motion analysis system was modified and improved. The experimental data indicate that the modified and improved system measures and captures human motion more precisely than the original system.
APA, Harvard, Vancouver, ISO, and other styles
10

Alexiadis, Dimitrios S., Anargyros Chatzitofis, Nikolaos Zioulis, Olga Zoidi, Georgios Louizis, Dimitrios Zarpalas, and Petros Daras. "An Integrated Platform for Live 3D Human Reconstruction and Motion Capturing." IEEE Transactions on Circuits and Systems for Video Technology 27, no. 4 (April 2017): 798–813. http://dx.doi.org/10.1109/tcsvt.2016.2576922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Motion-Capturing, human motion"

1

Hermsdorf, Heike, and Norman Hofmann. "Erfassung, Simulation und Weiterverarbeitung menschlicher Bewegungen mit Dynamicus." Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-225982.

Full text
Abstract:
The use of digital human models in product and process ergonomics has increased steadily in recent years. Requirements from the industrial environment in particular set high standards for the speed, accuracy, and reliability of the systems, methods, and procedures used. The biomechanical human model Dynamicus is software developed at the Institut für Mechatronik e.V., Chemnitz, that specializes in this field of simulation. Dynamicus simulations are based on real human movements recorded with a motion capture system. The digitally recorded movements are analyzed in the scientific fields of ergonomics, sports, and rehabilitation.
APA, Harvard, Vancouver, ISO, and other styles
2

Yuan, Qiantailang. "The Performance of the Depth Camera in Capturing Human Body Motion for Biomechanical Analysis." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235944.

Full text
Abstract:
Three-dimensional human movement tracking has long been an important topic in the medical and engineering fields. Complex camera systems such as Vicon can be used to retrieve very precise motion data, but such systems are commercially oriented and costly, and wearing the special markers and suits required for tracking is tedious and cumbersome. There is therefore a pressing need to investigate a cost-effective, markerless tool for motion tracking. Microsoft Kinect provides a promising solution with a wide variety of libraries, allowing quick development of 3D spatial modelling and analysis such as a moving skeleton. For example, the kinematics of the joints, such as acceleration, velocity, and angle changes, can be deduced from the spatial position information acquired by the camera. To validate whether the Kinect system is sufficient for such analysis in practice, a microcontroller platform based on Arduino with an Intel® Curie™ IMU (inertial measurement unit) module was developed. In particular, the velocities and Euler angles of joint movements, as well as head orientations, were measured and compared between the two systems. The goals of this thesis are to present (i) the use of the Kinect depth sensor for data acquisition, (ii) post-processing of the retrieved data, and (iii) validation of the Kinect camera. Results show that the RMS error of the velocity tracking ranges from 1.78% to 23.34%, indicating good agreement between the two systems (a sketch of this error computation follows this entry), and the relative error of the angle tracking is between 4.0% and 24.3%. The head orientation results were difficult to analyse mathematically because of noise and invalid camera data caused by loss of tracking. Overall, the accuracy of joint movement tracked by the Kinect camera, particularly velocity, proved acceptable, and the depth camera was found to be an effective, cost-effective tool for kinematic measurement. A platform and workflow are now established, making future validation and application work possible when more advanced hardware becomes available.
APA, Harvard, Vancouver, ISO, and other styles
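The velocity validation reported above boils down to comparing two synchronized time series. One plausible formulation of a relative RMS error in percent is sketched below; the thesis does not publish its exact code, so the normalization choice here is our assumption.

```python
import numpy as np

def relative_rms_error(reference, measured):
    """Relative RMS error (%) of `measured` against `reference`,
    assuming both series are synchronized and equally sampled."""
    reference = np.asarray(reference, float)
    measured = np.asarray(measured, float)
    rms = np.sqrt(np.mean((measured - reference) ** 2))
    return 100.0 * rms / (reference.max() - reference.min())

# Toy example: IMU-derived joint velocity vs. Kinect-derived velocity.
t = np.linspace(0, 2 * np.pi, 200)
imu = np.sin(t)                                       # reference
kinect = np.sin(t) + 0.02 * np.random.randn(t.size)   # noisy measurement
print(f"relative RMS error: {relative_rms_error(imu, kinect):.2f} %")
```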
3

Zhao, Yuchen. "Human skill capturing and modelling using wearable devices." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/27613.

Full text
Abstract:
Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position, and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans adapt to uncertainties easily, the skill-based performances that relate to their tacit knowledge usually cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, and then transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model, and transfer manual manufacturing skills from a human demonstrator to a robot. Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human's senses and activities. Therefore, wearable sensors are investigated as a promising way to record state-action examples without restricting the human experts during the skilled execution of their tasks. First, to track human motion accurately and reliably in a defined three-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drift problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements; furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by a range of 0.8˚ to 6.4˚ compared with the IMU-only method. Second, to record haptic feedback from a teacher without physically obstructing their interaction with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect indicator of contact feedback during manual manipulation. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force (the time-delay windowing this implies is sketched after this entry). The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was used as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to reproduce motions using the learned state-action policy.
To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, under the assumption that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
APA, Harvard, Vancouver, ISO, and other styles
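A time-delayed neural network, as used for the muscle-force model above, sees a window of current and past sEMG frames at every time step. The sketch below shows only that windowing; the channel count and delay length are hypothetical, and the TDNN itself is not reproduced.

```python
import numpy as np

def time_delay_features(emg, delay=10):
    """Stack the current and `delay - 1` previous sEMG frames into one
    feature vector per time step, as a TDNN input layer would see them.
    `emg` has shape (n_samples, n_channels)."""
    emg = np.asarray(emg, float)
    n, c = emg.shape
    feats = np.zeros((n - delay + 1, delay * c))
    for i in range(delay - 1, n):
        feats[i - delay + 1] = emg[i - delay + 1 : i + 1].ravel()
    return feats

# Toy example: an 8-channel armband, 1000 samples -> windowed inputs.
X = time_delay_features(np.random.rand(1000, 8), delay=10)
print(X.shape)  # (991, 80)
```

Each row of `X`, paired with the contact force measured at the same instant, would form one training example for the regression network.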
4

Schönherr, Ricardo. "Simulationsbasierte Absicherung der Ergonomie mit Hilfe digital beschriebener menschlicher Bewegungen." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-132991.

Full text
Abstract:
Digital human models are regarded as well-suited tools for planning and assessing manual work, especially for preventive ergonomics risk assessment. Up to now, however, the functional range of established software tools has been insufficient and their operation too complex and time-consuming. This doctoral thesis therefore describes the development of a method for preparing digitally captured human motions for algorithmic motion generation and semi-automatic ergonomics risk assessment. A discussion of the current state of knowledge is followed by two empirical studies on motion capture and analysis. The implementation of an ergonomics risk assessment method that combines several biomechanical risk factors is then explained. The findings on motion generation and ergonomics assessment flow into the development of a new software tool for planning and assessing manual work, the editor for manual work activities (ema). Finally, a practical evaluation based on real planning scenarios is presented.
APA, Harvard, Vancouver, ISO, and other styles
5

Hsu, Chien-Wei, and 徐健維. "Real-time Human Body Motion Capturing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/50888442126792225299.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, ROC academic year 103.
In this thesis, we propose a real-time human full-body motion capture system using the depth image from Kinect. Our system estimates the human pose in three main steps. First, we extract characteristic landmarks on the human body: using a pixel-based body-part classifier, we segment the human silhouette into body-part regions, remove outliers, and extract a characteristic landmark at the centre of each region. Second, we transform the landmarks into a feature vector of 3D position information and use a k-d tree to build an example-based system that retrieves several candidate poses (see the sketch after this entry). Third, we apply voting to choose the best-matching candidate as the estimated pose. The experimental results show that our system operates in real time and achieves sufficient accuracy.
APA, Harvard, Vancouver, ISO, and other styles
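The example-based lookup this abstract describes is a nearest-neighbour search over landmark feature vectors, followed by a vote among the returned candidates. Below is a minimal sketch with SciPy; the database contents, feature dimensionality, and plain majority vote are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical example database: one feature vector and pose id per example.
features = np.random.rand(5000, 30)        # e.g. 10 landmarks x 3D position
pose_ids = np.random.randint(0, 50, 5000)
tree = cKDTree(features)

def estimate_pose(query, k=5):
    """Return the pose id most voted for among the k nearest examples."""
    _, idx = tree.query(query, k=k)
    return np.bincount(pose_ids[idx]).argmax()

print(estimate_pose(np.random.rand(30)))
```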
6

Lin, Wing-Pang, and 林文榜. "Human Object Walking Motion Parameters Capturing." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/92933057724544758504.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, ROC academic year 98.
Markerless human body-part tracking and pose estimation have recently attracted intensive attention because of their wide range of applications. Vision-based approaches to capturing motion parameters face two challenges: 1) solving the parameter estimation problem in a high-dimensional space, and 2) dealing with missing observation information due to occlusion. To address these problems, we propose a vision-based method combining the Annealed Particle Filter (APF) [6] with a pre-trained correlation map and a temporal constraint (a generic APF sketch follows this entry). This thesis proposes a system for capturing the motion parameters of a walking human both indoors and outdoors. To deal with shadows when tracking walking people outdoors, we use the HSV colour model to remove them. Compared with the traditional APF [6], our method needs less computation time and produces more accurate results. Thanks to the pre-trained correlation map and temporal constraint, it also performs better than the traditional APF when self-occlusion occurs.
APA, Harvard, Vancouver, ISO, and other styles
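An annealed particle filter narrows a high-dimensional pose search by resampling particles through a sequence of increasingly sharp weighting functions. The generic sketch below conveys the idea only; the weighting function, noise schedule, and pose dimensionality are placeholders, not the authors' tuned tracker.

```python
import numpy as np

def apf_step(weight_fn, dim, n_particles=200, layers=5, noise=0.2,
             rng=np.random.default_rng(0)):
    """One annealed-particle-filter time step: drive particles toward the
    mode of `weight_fn`, which returns an unnormalized image likelihood."""
    particles = rng.normal(0.0, 1.0, (n_particles, dim))
    for layer in range(layers):
        beta = (layer + 1) / layers                    # annealing exponent
        w = np.array([weight_fn(p) ** beta for p in particles])
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        spread = noise * (1 - layer / layers) + 1e-3           # shrink noise
        particles = particles[idx] + rng.normal(0.0, spread,
                                                (n_particles, dim))
    return particles.mean(axis=0)

# Toy run: recover a 6-D "pose" whose likelihood peaks at 0.5 per dimension.
target = np.full(6, 0.5)
print(np.round(apf_step(lambda p: np.exp(-np.sum((p - target) ** 2)), 6), 2))
```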
7

Abson, Karl, and Ian J. Palmer. "Motion capture: capturing interaction between human and animal." 2015. http://hdl.handle.net/10454/9106.

Full text
Abstract:
We introduce a new marker-based model for use in capturing equine movement. This model is informed by a sound biomechanical study of the animal and can be deployed in pursuit of many undertakings. Unlike many other approaches, our method provides a high level of automation and hides the intricate biomechanical knowledge required to produce realistic results. Thanks to this approach, it is possible to acquire solved data with minimal manual intervention, even under real-time conditions, and the approach can be replicated to produce models for many other animals. The model is first informed by the veterinary world through studies of the subject's anatomy. Second, further medical studies aimed at understanding and addressing surface processes, such as skin sliding, inform model creation; if not corrected, these processes may hinder marker-based capture. The resulting model has been tested in feasibility studies for practicality and subject acceptance during production. Data are provided for scrutiny along with the subject digitally captured through a variety of methods. The digital subject in mesh form, together with the motion capture model, aids comparison and shows the level of accuracy achieved; the video reference and digital renders provide insight into the level of realism achieved.
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Pao-Fa, and 黃保發. "A Human Motion Capturing Using Nintendo Wii Remote Controllers." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/9jwght.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Computer and Communication Engineering, ROC academic year 99.
Nintendo's third-generation console uses the Wii remote controller (Wiimote) for human-computer interaction. This controller contains a high-performance infrared camera, manufactured by PixArt Imaging, in its front end that can track up to four infrared light sources in space, and it can detect spatial orientation with an infrared sensor bar (IR sensor bar). One camera needs at least two points to determine spatial orientation, while locating a single point requires two cameras. Infrared light-emitting diodes are directional, so to reduce detection dead zones we place twenty-nine IR LEDs on each sensing node and attach eight sensing nodes to the limbs to obtain the main trajectories. To prevent all nodes from turning on their IR LEDs at the same time, which would keep the system from using triangulation to determine spatial locations, we add a ZigBee module to each sensing node to control the IR LED working status and scan the sensing nodes sequentially in a round-robin manner. This thesis uses several Wiimotes and active infrared transmitter nodes with ZigBee to obtain a low-cost hardware system with acceptable spatial positioning accuracy. The software implementation uses the normalized eight-point algorithm from stereo vision to obtain the fundamental matrix (a minimal stereo sketch follows this entry) and then uses Ogre, the open-source 3D graphics engine, to plot the coordinate positions.
APA, Harvard, Vancouver, ISO, and other styles
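The stereo machinery the abstract names can be sketched in a few lines of OpenCV: estimate the fundamental matrix with the normalized eight-point algorithm, then triangulate the IR points. The correspondences and projection matrices below are synthetic stand-ins for two calibrated Wiimote cameras.

```python
import numpy as np
import cv2

# Synthetic 3D IR-LED positions (metres) and two camera projection matrices.
X_true = np.random.rand(12, 3) + [0.0, 0.0, 2.0]     # points in front of rig
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])  # 10 cm stereo baseline

def project(P, X):
    """Project Nx3 points through a 3x4 projection matrix to Nx2 pixels."""
    x = P @ np.hstack([X, np.ones((len(X), 1))]).T
    return (x[:2] / x[2]).T

pts1, pts2 = project(P1, X_true), project(P2, X_true)

# Normalized eight-point algorithm for the fundamental matrix.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# Triangulate back to 3D and check against the ground truth.
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous, 4xN
X_est = (X_h[:3] / X_h[3]).T
print(np.allclose(X_est, X_true, atol=1e-6))
```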
9

Huang, Chung-Yang, and 黃俊揚. "Real-time Human Upper Body Posture Recognition and Upper Limbs Motion Capturing." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/08111304755504581146.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, ROC academic year 101.
We propose a real-time system that recognizes human upper-body posture and predicts the positions of the upper-limb joints. The system is designed for a user sitting in front of a computer with the camera mounted at the top of the screen; since users interact with the system at the computer, the camera can capture only upper-body images. The system input is a real-time depth image captured with the Microsoft Kinect depth camera. The system has two outputs: the user's action (normal sitting position, left hand raised, right hand raised, both hands raised) and the estimated locations of the body parts (face, shoulders, arms, elbows, palms, body, etc.). The frame rate of the whole system varies between about 14 and 18 FPS, depending on the total number of pixels to be processed. The system architecture has two stages. In the first stage, depth images, after pre-processing and feature extraction (depth context), are analysed by an action classifier (random forest [16]) to identify the current action type, and temporal dependency is then applied to correct it. In the second stage, according to the action type, we select an appropriate body classifier (a pixel-based random forest; see the sketch after this entry) to classify the pre-processed depth image and identify the distribution of body parts. Finally, considering the temporal dependency of each body part, we correct overlapping body parts and determine the estimated body-part positions.
APA, Harvard, Vancouver, ISO, and other styles
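The pixel-based random forest in the second stage can be imitated with an off-the-shelf classifier: each depth pixel gets a feature vector and a body-part label. The depth-difference probes below are a common choice we assume for illustration; the labels and depth map are random toys, not the thesis's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def depth_context_features(depth, pixels, offsets):
    """For each pixel, compare its depth with the depth at fixed 2D offsets."""
    h, w = depth.shape
    feats = []
    for (y, x) in pixels:
        row = [depth[np.clip(y + dy, 0, h - 1), np.clip(x + dx, 0, w - 1)]
               - depth[y, x] for (dy, dx) in offsets]
        feats.append(row)
    return np.array(feats)

# Toy training data: a random depth map with fake per-pixel part labels.
depth = rng.random((120, 160)).astype(np.float32)
pixels = [(int(rng.integers(120)), int(rng.integers(160))) for _ in range(500)]
offsets = [(-8, 0), (8, 0), (0, -8), (0, 8), (-4, -4), (4, 4)]
X = depth_context_features(depth, pixels, offsets)
y = rng.integers(0, 4, len(pixels))        # 4 hypothetical body-part classes

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))
```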
10

Kao, Wei-Chia, and 高唯家. "Real-time Human Upper Body Action Recognition and Body Part Motion Capturing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/48905552485183414140.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, ROC academic year 102.
This thesis proposes a real-time system that recognizes human upper-body posture and predicts the positions of the upper-limb joints from the depth image captured by Kinect. The system consists of three stages: (1) action recognition, (2) body-part segmentation, and (3) offset compensation. In the first stage, depth images, after pre-processing and feature extraction, are analysed by an action-type classifier to identify the current action type; the temporal correlation between recognized action types is then applied for correction. In the second stage, based on the action type, we select an appropriate body classifier to classify the pre-processed depth image and identify the distribution of body parts, and we use the temporal dependency and correlation of each body part to solve the body-part occlusion problem. In the third stage, we develop offset classifiers based on the difference between the output of the second stage and the ground truth (a minimal residual-correction sketch follows this entry). For each action type, we select an appropriate offset classifier to find the offset and compensate the output of the body classifier. Based on the results before and after offset compensation, together with the depth silhouette, we decide which result is better as the final output of the body locations.
APA, Harvard, Vancouver, ISO, and other styles
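The offset-compensation stage amounts to learning the residual between the second-stage joint estimates and the ground truth, then adding the predicted residual back at run time. A minimal sketch follows; the regressor choice and data shapes are our assumptions, and the thesis uses classifiers rather than this generic regressor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stage-2 joint estimates and ground truth (N samples, 4 joints x 3D = 12).
estimates = rng.random((1000, 12))
ground_truth = estimates + 0.05 + 0.01 * rng.standard_normal((1000, 12))

# Learn the offset (residual) as a function of the estimate itself.
offset_model = RandomForestRegressor(n_estimators=50).fit(
    estimates, ground_truth - estimates)

# At run time, compensate a new estimate by adding the predicted offset.
new_estimate = rng.random((1, 12))
compensated = new_estimate + offset_model.predict(new_estimate)
print(compensated.shape)
```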

Books on the topic "Motion-Capturing, human motion"

1

Magnenat-Thalmann, Nadia. Modelling the Physiological Human: 3D Physiological Human Workshop, 3DPH 2009, Zermatt, Switzerland, November 29 – December 2, 2009. Proceedings. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Putrino, David, and Brandon Larson. Capturing Motion: Studying Human Movement in the Digital Age. Elsevier Science & Technology Books, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Putrino, David, and Brandon Larson. Capturing Motion: Studying Human Movement in the Digital Age. Elsevier Science & Technology, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yamane, Katsu. Simulating and Generating Motions of Human Figures. Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Simulating and Generating Motions of Human Figures. Springer, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Motion-Capturing, human motion"

1

Brox, Thomas, Bodo Rosenhahn, and Daniel Cremers. "Contours, Optic Flow, and Prior Knowledge: Cues for Capturing 3D Human Motion in Videos." In Human Motion, 265–93. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6693-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Xu, and Yuncai Liu. "Capturing 3D Human Motion from Monocular Images Using Orthogonal Locality Preserving Projection." In Digital Human Modeling, 304–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73321-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bleser, Gabriele, Bertram Taetz, and Paul Lukowicz. "Human Motion Capturing and Activity Recognition Using Wearable Sensor Networks." In Biosystems & Biorobotics, 191–206. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01836-8_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schiefer, Christoph, Thomas Kraus, Elke Ochsmann, Ingo Hermanns, and Rolf Ellegast. "3D Human Motion Capturing Based Only on Acceleration and Angular Rate Measurement for Low Extremities." In Digital Human Modeling, 195–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21799-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Theobalt, Christian, Marcus Magnor, and Hans-Peter Seidel. "Video-based Capturing and Rendering of People." In Human Motion, 531–59. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6693-1_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bönig, Jochen, Christian Fischer, Matthias Brossog, Martin Bittner, Markus Fuchs, Holger Weckend, and Jörg Franke. "Virtual Validation of the Manual Assembly of a Power Electronic Unit via Motion Capturing Connected with a Simulation Tool Using a Human Model." In Lecture Notes in Production Engineering, 463–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-30817-8_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Motion-Capturing, human motion"

1

Bau-Cheng Shen, Huang-Chia Shih, and Chung-Lin Huang. "Real-time human motion capturing system." In IEEE International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1530307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kao, Wei-Chia, Shih-Chung Hsu, and Chung-Lin Huang. "Human upper-body motion capturing using Kinect." In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Damian, Ionut, Mohammad Obaid, Felix Kistler, and Elisabeth André. "Augmented reality using a 3D motion capturing suit." In the 4th Augmented Human International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2459236.2459277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Blanke, Ulf, and Bernt Schiele. "Towards human motion capturing using gyroscopeless orientation estimation." In 2010 International Symposium on Wearable Computers (ISWC). IEEE, 2010. http://dx.doi.org/10.1109/iswc.2010.5665856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Chung-Lin, Bau-Cheng Shen, and Huang-Chia Shih. "A real-time vision-based human motion capturing." In Visual Communications and Image Processing 2005. SPIE, 2005. http://dx.doi.org/10.1117/12.631590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Park, Sang Il, and Jessica K. Hodgins. "Capturing and animating skin deformation in human motion." In ACM SIGGRAPH 2006 Papers. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1179352.1141970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tseng, Yu-Chee, Chin-Hao Wu, Fang-Jing Wu, Chi-Fu Huang, Chung-Ta King, Chun-Yu Lin, Jang-Ping Sheu, et al. "A Wireless Human Motion Capturing System for Home Rehabilitation." In 2009 Tenth International Conference on Mobile Data Management: Systems, Services and Middleware. IEEE, 2009. http://dx.doi.org/10.1109/mdm.2009.51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bönig, Jochen, Jerome Perret, Christian Fischer, Holger Weckend, Florian Döbereiner, and Jörg Franke. "Creating realistic human model motion by hybrid motion capturing interfaced with the digital environment." In International FAIM Conference. DEStech Publications, Inc., 2014. http://dx.doi.org/10.14809/faim.2014.0317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Woolley, Charles, D. B. Chaffin, Ulrich Raschke, Xudong Zhang, and Woojin Park. "Integration of Electromagnetic and Optical Motion Tracking Devices for Capturing Human Motion Data." In Digital Human Modeling for Design and Engineering Conference and Exposition. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 1999. http://dx.doi.org/10.4271/1999-01-1911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Ying, and T. S. Huang. "Capturing articulated human hand motion: a divide-and-conquer approach." In Proceedings of the Seventh IEEE International Conference on Computer Vision. IEEE, 1999. http://dx.doi.org/10.1109/iccv.1999.791280.

Full text
APA, Harvard, Vancouver, ISO, and other styles