Academic literature on the topic 'Motion-Capturing, human motion'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Motion-Capturing, human motion.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Motion-Capturing, human motion"
Cheng, Zhiqing, Anthony Ligouri, Ryan Fogle, and Timothy Webb. "Capturing Human Motion in Natural Environments." Procedia Manufacturing 3 (2015): 3828–35. http://dx.doi.org/10.1016/j.promfg.2015.07.886.
Hsu, Shih-Chung, Jun-Yang Huang, Wei-Chia Kao, and Chung-Lin Huang. "Human body motion parameters capturing using kinect." Machine Vision and Applications 26, no. 7-8 (September 21, 2015): 919–32. http://dx.doi.org/10.1007/s00138-015-0710-1.
Adelsberger, Rolf. "Capturing human motion one step at a time." XRDS: Crossroads, The ACM Magazine for Students 20, no. 2 (December 2013): 38–42. http://dx.doi.org/10.1145/2538692.
Park, Sang Il, and Jessica K. Hodgins. "Capturing and animating skin deformation in human motion." ACM Transactions on Graphics 25, no. 3 (July 2006): 881–89. http://dx.doi.org/10.1145/1141911.1141970.
Abson, Karl, and Ian Palmer. "Motion capture: capturing interaction between human and animal." Visual Computer 31, no. 3 (March 22, 2014): 341–53. http://dx.doi.org/10.1007/s00371-014-0929-2.
Lim, Chee Kian, Zhiqiang Luo, I.-Ming Chen, and Song Huat Yeo. "Wearable wireless sensing system for capturing human arm motion." Sensors and Actuators A: Physical 166, no. 1 (March 2011): 125–32. http://dx.doi.org/10.1016/j.sna.2010.10.015.
Zhou, Dong, Zhi Qi Guo, Mei Hui Wang, and Chuan Lv. "Human Factors Assessment Based on the Technology of Human Motion Capturing." Applied Mechanics and Materials 44-47 (December 2010): 532–36. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.532.
Wang, Song Shan, and Yan Qing Qi. "A Virtual Human's Driving Method Based on Motion Capture Data." Advanced Materials Research 711 (June 2013): 500–505. http://dx.doi.org/10.4028/www.scientific.net/amr.711.500.
Su, Hai Long, and Da Wei Zhang. "Study on Error Compensation of Human Motion Analysis System." Applied Mechanics and Materials 48-49 (February 2011): 1149–53. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.1149.
Alexiadis, Dimitrios S., Anargyros Chatzitofis, Nikolaos Zioulis, Olga Zoidi, Georgios Louizis, Dimitrios Zarpalas, and Petros Daras. "An Integrated Platform for Live 3D Human Reconstruction and Motion Capturing." IEEE Transactions on Circuits and Systems for Video Technology 27, no. 4 (April 2017): 798–813. http://dx.doi.org/10.1109/tcsvt.2016.2576922.
Dissertations / Theses on the topic "Motion-Capturing, human motion"
Hermsdorf, Heike, and Norman Hofmann. "Erfassung, Simulation und Weiterverarbeitung menschlicher Bewegungen mit Dynamicus." Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-225982.
Yuan, Qiantailang. "The Performance of the Depth Camera in Capturing Human Body Motion for Biomechanical Analysis." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235944.
Three-dimensional motion tracking has always been an important topic in the medical and engineering fields. Complex camera systems such as Vicon can be used to obtain accurate data for various movements, but these systems are commercially oriented and usually expensive. They are also cumbersome, since special suits with markers must be worn for movements to be tracked. There is therefore great interest in investigating cost-effective, marker-free motion-tracking tools. The Microsoft Kinect is a promising solution, with a variety of libraries that enable rapid development of 3D spatial modelling and analysis. From the spatial position information, the acceleration, velocity, and angular change of the joints can be derived. To validate whether the Kinect is suitable for such analysis, an Arduino microcontroller platform with an Intel Curie IMU (inertial measurement unit) was developed. The velocity and Euler angles of the joint movements, as well as the orientation of the head, were measured and compared between the two systems. The goal of this work is to present (i) the use of the Kinect depth sensor for data acquisition, (ii) post-processing of the acquired data, and (iii) validation of the Kinect camera. The results showed that the RMS error of the velocity tracking varied between 1.78% and 23.34%, indicating good agreement between the measurements of the two systems. The relative error of the angle tracking lies between 4.0% and 24.3%. The head-orientation results were difficult to evaluate mathematically, because loss of tracking produced noise and invalid data from the camera. The accuracy of the joint movement detected by the Kinect camera proves to be acceptable, especially for velocity measurements. The depth camera has shown itself to be an effective, cost-efficient tool for kinematic measurement.
A platform and workflow have been established, enabling validation and application once the advanced hardware is available.
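The velocity comparison reported above reduces to a relative RMS error between the Kinect and IMU signals. A minimal sketch of that metric, using invented wrist-speed samples rather than the thesis data:

```python
import math

def relative_rms_error(reference, measured):
    """Relative RMS error (%) between two equal-length signals,
    normalized by the RMS of the reference signal."""
    if len(reference) != len(measured) or not reference:
        raise ValueError("signals must be non-empty and equally long")
    n = len(reference)
    rms_diff = math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / n)
    rms_ref = math.sqrt(sum(r ** 2 for r in reference) / n)
    return 100.0 * rms_diff / rms_ref

# Hypothetical wrist-speed samples (m/s): IMU reference vs. Kinect estimate.
imu    = [0.50, 0.80, 1.20, 0.90, 0.40]
kinect = [0.52, 0.78, 1.25, 0.85, 0.42]
print(round(relative_rms_error(imu, kinect), 2))
```

A value near the low end of the 1.78%–23.34% range reported in the thesis would indicate close agreement between the two sensors.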
Zhao, Yuchen. "Human skill capturing and modelling using wearable devices." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/27613.
Schönherr, Ricardo. "Simulationsbasierte Absicherung der Ergonomie mit Hilfe digital beschriebener menschlicher Bewegungen." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-132991.
Digital human models are considered appropriate tools for planning and assessing human work, especially for preventive ergonomic risk assessment. Up to now, however, the functional range of established software tools has not been sufficient, and their operation is too complex and time-consuming. This doctoral thesis therefore describes the development of a method for preparing digitally captured human motions for algorithmic motion generation and automatic ergonomic risk assessment. A summary of the relevant current state of knowledge is followed by the description of two empirical studies on motion capturing and analysis. Afterwards, the implementation of an ergonomic risk assessment method that combines several biomechanical risk factors is explained. The results of the preliminary studies feed into the development of a new software tool for planning and assessing human work: the editor for manual work activities (ema). Finally, a practical evaluation based on real planning scenarios is presented.
Hsu, Chien-Wei, and 徐健維. "Real-time Human Body Motion Capturing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/50888442126792225299.
National Tsing Hua University
Department of Electrical Engineering
103
In this thesis, we propose a real-time human full-body motion capturing system using the depth image from Kinect. Our system estimates the human pose in three main steps. First, we extract characteristic landmarks on the human body: using a pixel-based body-part classifier, we segment the human silhouette into different body-part regions, remove the outliers, and extract the characteristic landmarks at the centers of the body-part regions. Second, we transform the landmarks into a feature vector with 3D position information and apply a K-d tree to build an example-based system that retrieves several possible pose candidates. Third, we apply voting to choose the best-matching pose from the candidates as the estimated pose. The experimental results show that our system operates in real time and achieves sufficient accuracy.
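The example-based matching and voting described above can be sketched as a nearest-neighbour search over pose feature vectors followed by a vote among the candidates. In this illustrative sketch, a brute-force scan stands in for the K-d tree, and the pose database and feature vectors are invented:

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_pose(feature, database, k=3):
    """Return the pose label voted for by the k nearest examples.
    `database` is a list of (pose label, 3D-landmark feature vector);
    a K-d tree over the vectors would replace this linear scan."""
    candidates = sorted(database, key=lambda item: euclidean(feature, item[1]))[:k]
    votes = Counter(label for label, _ in candidates)
    return votes.most_common(1)[0][0]

# Toy database of (pose label, landmark feature vector) pairs.
examples = [
    ("stand",  [0.0, 1.7, 0.0, 0.3, 1.4, 0.0]),
    ("stand",  [0.1, 1.7, 0.0, 0.3, 1.4, 0.1]),
    ("crouch", [0.0, 1.0, 0.0, 0.3, 0.8, 0.0]),
    ("crouch", [0.1, 1.0, 0.1, 0.2, 0.8, 0.0]),
]
print(estimate_pose([0.05, 1.68, 0.0, 0.3, 1.4, 0.05], examples))
```

The vote among the k candidates makes the lookup robust when the closest single example is an outlier.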
Lin, Wing-Pang, and 林文榜. "Human Object Walking Motion Parameters Capturing." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/92933057724544758504.
National Tsing Hua University
Department of Electrical Engineering
98
Markerless human body-part tracking and pose estimation have recently attracted intensive attention because of their wide applications. Vision-based approaches to motion-parameter capturing face two challenges: 1) solving the parameter-estimation problem in a high-dimensional space, and 2) dealing with missing observations due to occlusion. To address both problems, we propose a vision-based method that combines the Annealed Particle Filter (APF) [6] with a pre-trained correlation map and a temporal constraint. This thesis presents a system for capturing the motion parameters of a walking human both indoors and outdoors. To handle the shadows cast by people walking outdoors, we use the HSV color model to remove them. Compared with the traditional APF [6], our method needs less computation time and yields more accurate results. Thanks to the pre-trained correlation map and temporal constraint, our method also outperforms the traditional APF when self-occlusion occurs.
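The HSV shadow-removal step rests on the observation that a cast shadow lowers a background pixel's value (V) while leaving hue and saturation roughly unchanged. A minimal per-pixel rule under that assumption (the thresholds here are illustrative, not taken from the thesis):

```python
def is_shadow(bg_hsv, fg_hsv, v_lo=0.4, v_hi=0.95, s_tol=0.12, h_tol=0.1):
    """Classify a foreground pixel as shadow over its background model.
    All HSV components are floats in [0, 1]."""
    bh, bs, bv = bg_hsv
    fh, fs, fv = fg_hsv
    if bv == 0:
        return False
    ratio = fv / bv                      # a shadow darkens, but not to black
    return (v_lo <= ratio <= v_hi
            and abs(fs - bs) <= s_tol    # saturation nearly unchanged
            and abs(fh - bh) <= h_tol)   # hue nearly unchanged

print(is_shadow((0.55, 0.30, 0.80), (0.56, 0.28, 0.55)))  # darker, same colour
print(is_shadow((0.55, 0.30, 0.80), (0.10, 0.70, 0.60)))  # a real object
```

Pixels flagged as shadow are dropped from the foreground silhouette before the particle filter evaluates it.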
Abson, Karl, and Ian J. Palmer. "Motion capture: capturing interaction between human and animal." 2015. http://hdl.handle.net/10454/9106.
We introduce a new "marker-based" model for use in capturing equine movement. This model is informed by a sound biomechanical study of the animal and can be deployed in the pursuit of many undertakings. Unlike many other approaches, our method provides a high level of automation and hides the intricate biomechanical knowledge required to produce realistic results. It is therefore possible to acquire solved data with minimal manual intervention, even in real-time conditions, and the approach can be replicated for the production of many other animals. The model is first informed by the veterinary world through studies of the subject's anatomy. Second, further medical studies aimed at understanding and addressing surface processes, such as skin sliding, inform model creation; if not corrected, these processes may hinder marker-based capture. The resultant model has been tested in feasibility studies for practicality and subject acceptance during production. Data is provided for scrutiny, along with the subject digitally captured through a variety of methods. The digital subject in mesh form, together with the motion-capture model, aids comparison and shows the level of accuracy achieved. The video reference and digital renders provide an insight into the level of realism achieved.
Huang, Pao-Fa, and 黃保發. "A Human Motion Capturing Using Nintendo Wii Remote Controllers." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/9jwght.
National Taipei University of Technology
Graduate Institute of Computer and Communication Engineering
99
Nintendo's third-generation console achieves human-computer interaction through the Wii Remote controller (Wiimote). The front end of this controller contains a high-performance infrared camera, manufactured by PixArt Imaging, which can track up to four infrared light-source coordinates in space and detect spatial orientation with the infrared sensor bar (IR sensor bar). Detecting spatial orientation requires at least two points for one camera, or two cameras for a single point. Infrared light-emitting diodes are directional, so to reduce the detection dead zone we place twenty-nine IR LEDs on each sensing node and attach eight sensing nodes to the limbs to obtain the main trajectories. To prevent all nodes from turning on their IR LEDs at the same time, which would keep the system from using triangulation to determine spatial locations, we add a ZigBee module to each sensing node to control the IR LED working status and scan the sensing nodes sequentially in a round-robin manner. This thesis uses several Wiimotes and active infrared transmitter nodes with ZigBee to obtain a low-cost hardware system with acceptable spatial positioning accuracy. The software implementation uses the normalized eight-point algorithm from stereo vision to obtain the fundamental matrix, and then uses Ogre, the open-source 3D graphics engine, to plot the coordinate positions.
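Triangulating an LED from two Wiimote views amounts to intersecting two viewing rays; because noisy rays rarely meet exactly, the midpoint of their closest approach is a common estimate. A self-contained sketch with invented camera poses (the thesis itself recovers the camera geometry via the normalized eight-point algorithm):

```python
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def scale(u, s): return tuple(a * s for a in u)
def dot(u, v): return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest approach of rays o1 + t1*d1 and o2 + t2*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    dd, e = dot(d1, w), dot(d2, w)
    denom = b * b - a * c
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel")
    t1 = (dd * c - e * b) / denom      # closest-approach parameters from the
    t2 = (b * dd - a * e) / denom      # normal equations of the two lines
    p1 = add(o1, scale(d1, t1))
    p2 = add(o2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)

# Two hypothetical Wiimotes 1 m apart, both seeing an LED at (0.2, 0.3, 2.0).
led = (0.2, 0.3, 2.0)
cam1, cam2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
ray1, ray2 = sub(led, cam1), sub(led, cam2)
print(triangulate_midpoint(cam1, ray1, cam2, ray2))
```

With noise-free rays the midpoint coincides with the true LED position; with real measurements it averages out the residual ray error.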
Huang, Chung-Yang, and 黃俊揚. "Real-time Human Upper Body Posture Recognition and Upper Limbs Motion Capturing." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/08111304755504581146.
National Tsing Hua University
Department of Electrical Engineering
101
We propose a real-time system that recognizes human upper-body posture and predicts the positions of the upper-limb joints. The system is designed for a user sitting in front of a computer, with the camera mounted at the top of the screen; because users interact with the system at the desk, the camera can capture only upper-body images. The system input is a real-time depth image captured by the Microsoft Kinect depth camera. The system has two outputs: the user's action (normal sitting position, left hand raised, right hand raised, both hands raised) and the estimated locations of body parts (face, shoulders, arms, elbows, palms, torso, etc.). The whole system runs at about 14 to 18 frames per second, varying with the total number of pixels to be processed. The architecture has two stages. In the first stage, the depth images, after pre-processing and feature extraction (Depth Context), are analyzed by an action classifier (Random Forest [16]) to identify the current user action type; time dependency is then applied to correct the action type. In the second stage, according to the user action type, we select an appropriate body classifier (pixel-based Random Forest) to classify the pre-processed depth image and identify the distribution of body parts. Finally, considering the time dependency of each body part, we correct overlapping body parts and determine their estimated positions.
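The abstract does not define the Depth Context feature; a common choice for pixel-based depth classifiers of this kind is the depth-invariant pair-offset comparison popularized by Shotton et al., sketched here as an assumption rather than the thesis's exact feature:

```python
BACKGROUND = 10.0  # metres, returned for offsets that leave the image

def depth_at(depth, x, y):
    """Depth lookup with a large constant for out-of-image probes."""
    if 0 <= y < len(depth) and 0 <= x < len(depth[0]):
        return depth[y][x]
    return BACKGROUND

def depth_feature(depth, x, y, u, v):
    """Depth-difference feature at pixel (x, y): the offsets u and v are
    scaled by 1/depth, so the response is invariant to subject distance."""
    z = depth_at(depth, x, y)
    ux, uy = int(round(u[0] / z)), int(round(u[1] / z))
    vx, vy = int(round(v[0] / z)), int(round(v[1] / z))
    return depth_at(depth, x + ux, y + uy) - depth_at(depth, x + vx, y + vy)

# Tiny synthetic depth map (metres): a user ~1 m away, background ~3 m.
depth = [
    [3.0, 3.0, 3.0, 3.0],
    [3.0, 1.0, 1.0, 3.0],
    [3.0, 1.0, 1.0, 3.0],
]
# One offset lands on the body, the other falls off the silhouette.
print(depth_feature(depth, 1, 1, (1.0, 0.0), (-2.0, 0.0)))
```

A random forest thresholds many such features per pixel to assign a body-part label.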
Kao, Wei-Chia, and 高唯家. "Real-time Human Upper Body Action Recognition and Body Part Motion Capturing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/48905552485183414140.
National Tsing Hua University
Department of Electrical Engineering
102
This thesis proposes a real-time system that recognizes human upper-body posture and predicts the positions of the upper-limb joints using depth images captured by Kinect. The system consists of three stages: (1) action recognition, (2) body-part segmentation, and (3) offset compensation. In the first stage, the depth images, after pre-processing and feature extraction, are analyzed by an action-type classifier to identify the current user action type; the temporal correlation between recognized action types is then applied for action-type correction. In the second stage, based on the user action type, we select an appropriate body classifier to classify the pre-processed depth image and identify the distribution of body parts. We also exploit the time dependency and correlation of each body part to solve the body-part occlusion problem. In the third stage, we develop offset classifiers based on the difference between the output of the second stage and the ground truth: for each user action type, we select an appropriate offset classifier to find the offset and compensate the output of the body classifier. Based on the results before and after offset compensation, together with the depth silhouette, we determine which result is better and take it as the final body-location output.
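The third-stage offset compensation can be pictured as learning, per action type, the systematic error between predicted and ground-truth joint positions and subtracting it at run time. The per-action mean used below is a deliberate simplification of the offset classifiers described above, with invented training data:

```python
def learn_offsets(samples):
    """samples: list of (action, predicted_xy, truth_xy) tuples.
    Returns the mean prediction error per action type."""
    sums, counts = {}, {}
    for action, pred, truth in samples:
        ex, ey = pred[0] - truth[0], pred[1] - truth[1]
        sx, sy = sums.get(action, (0.0, 0.0))
        sums[action] = (sx + ex, sy + ey)
        counts[action] = counts.get(action, 0) + 1
    return {a: (sx / counts[a], sy / counts[a]) for a, (sx, sy) in sums.items()}

def compensate(action, pred, offsets):
    """Subtract the learned systematic error for this action type."""
    ox, oy = offsets.get(action, (0.0, 0.0))
    return (pred[0] - ox, pred[1] - oy)

# Hypothetical elbow predictions vs. ground truth for one action type.
train = [
    ("raise_left", (102.0, 51.0), (100.0, 50.0)),
    ("raise_left", (104.0, 53.0), (100.0, 50.0)),
]
offsets = learn_offsets(train)
print(compensate("raise_left", (110.0, 60.0), offsets))
```

Keeping one offset model per action type mirrors the thesis's choice of selecting an offset classifier according to the recognized action.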
Books on the topic "Motion-Capturing, human motion"
Magnenat-Thalmann, Nadia. Modelling the Physiological Human: 3D Physiological Human Workshop, 3DPH 2009, Zermatt, Switzerland, November 29 – December 2, 2009. Proceedings. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2009.
Putrino, David, and Brandon Larson. Capturing Motion: Studying Human Movement in the Digital Age. Elsevier Science & Technology Books, 2021.
Putrino, David, and Brandon Larson. Capturing Motion: Studying Human Movement in the Digital Age. Elsevier Science & Technology, 2020.
Find full textBook chapters on the topic "Motion-Capturing, human motion"
Brox, Thomas, Bodo Rosenhahn, and Daniel Cremers. "Contours, Optic Flow, and Prior Knowledge: Cues for Capturing 3D Human Motion in Videos." In Human Motion, 265–93. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6693-1_11.
Zhao, Xu, and Yuncai Liu. "Capturing 3D Human Motion from Monocular Images Using Orthogonal Locality Preserving Projection." In Digital Human Modeling, 304–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73321-8_36.
Bleser, Gabriele, Bertram Taetz, and Paul Lukowicz. "Human Motion Capturing and Activity Recognition Using Wearable Sensor Networks." In Biosystems & Biorobotics, 191–206. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01836-8_19.
Schiefer, Christoph, Thomas Kraus, Elke Ochsmann, Ingo Hermanns, and Rolf Ellegast. "3D Human Motion Capturing Based Only on Acceleration and Angular Rate Measurement for Low Extremities." In Digital Human Modeling, 195–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21799-9_22.
Theobalt, Christian, Marcus Magnor, and Hans-Peter Seidel. "Video-based Capturing and Rendering of People." In Human Motion, 531–59. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6693-1_22.
Bönig, Jochen, Christian Fischer, Matthias Brossog, Martin Bittner, Markus Fuchs, Holger Weckend, and Jörg Franke. "Virtual Validation of the Manual Assembly of a Power Electronic Unit via Motion Capturing Connected with a Simulation Tool Using a Human Model." In Lecture Notes in Production Engineering, 463–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-30817-8_45.
Conference papers on the topic "Motion-Capturing, human motion"
Shen, Bau-Cheng, Huang-Chia Shih, and Chung-Lin Huang. "Real-time human motion capturing system." In International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1530307.
Kao, Wei-Chia, Shih-Chung Hsu, and Chung-Lin Huang. "Human upper-body motion capturing using Kinect." In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009794.
Damian, Ionut, Mohammad Obaid, Felix Kistler, and Elisabeth André. "Augmented reality using a 3D motion capturing suit." In the 4th Augmented Human International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2459236.2459277.
Blanke, Ulf, and Bernt Schiele. "Towards human motion capturing using gyroscopeless orientation estimation." In 2010 International Symposium on Wearable Computers (ISWC). IEEE, 2010. http://dx.doi.org/10.1109/iswc.2010.5665856.
Huang, Chung-Lin, Bau-Cheng Shen, and Huang-Chia Shih. "A real-time vision-based human motion capturing." In Visual Communications and Image Processing 2005. SPIE, 2005. http://dx.doi.org/10.1117/12.631590.
Park, Sang Il, and Jessica K. Hodgins. "Capturing and animating skin deformation in human motion." In ACM SIGGRAPH 2006 Papers. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1179352.1141970.
Tseng, Yu-Chee, Chin-Hao Wu, Fang-Jing Wu, Chi-Fu Huang, Chung-Ta King, Chun-Yu Lin, Jang-Ping Sheu, et al. "A Wireless Human Motion Capturing System for Home Rehabilitation." In 2009 Tenth International Conference on Mobile Data Management: Systems, Services and Middleware. IEEE, 2009. http://dx.doi.org/10.1109/mdm.2009.51.
Bönig, Jochen, Jerome Perret, Christian Fischer, Holger Weckend, Florian Döbereiner, and Jörg Franke. "Creating realistic human model motion by hybrid motion capturing interfaced with the digital environment." In International FAIM Conference. DEStech Publications, Inc., 2014. http://dx.doi.org/10.14809/faim.2014.0317.
Woolley, Charles, D. B. Chaffin, Ulrich Raschke, Xudong Zhang, and Woojin Park. "Integration of Electromagnetic and Optical Motion Tracking Devices for Capturing Human Motion Data." In Digital Human Modeling for Design and Engineering Conference and Exposition. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 1999. http://dx.doi.org/10.4271/1999-01-1911.
Wu, Ying, and T. S. Huang. "Capturing articulated human hand motion: a divide-and-conquer approach." In Proceedings of the Seventh IEEE International Conference on Computer Vision. IEEE, 1999. http://dx.doi.org/10.1109/iccv.1999.791280.