Journal articles on the topic 'Motion data'




Consult the top 50 journal articles for your research on the topic 'Motion data.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Lv, Na, Yan Huang, Zhi Quan Feng, and Jing Liang Peng. "A Survey on Motion Capture Data Retrieval." Applied Mechanics and Materials 556-562 (May 2014): 2944–47. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2944.

Abstract:
With the rapid development of motion capture technology, large motion capture databases have been established. How to effectively retrieve motions from such huge amounts of motion data has become a hot topic in computer animation. In this paper, we survey current motion capture data retrieval methods and point out some remaining open problems at the end.
2

Chiu, H. C., F. J. Wu, C. J. Lin, H. C. Huang, and C. C. Liu. "Effects of rotation motions on strong-motion data." Journal of Seismology 16, no. 4 (April 1, 2012): 829–38. http://dx.doi.org/10.1007/s10950-012-9301-z.

3

Manns, Martin, Michael Otto, and Markus Mauer. "Measuring Motion Capture Data Quality for Data Driven Human Motion Synthesis." Procedia CIRP 41 (2016): 945–50. http://dx.doi.org/10.1016/j.procir.2015.12.068.

4

Dong, Ran, Dongsheng Cai, and Soichiro Ikuno. "Motion Capture Data Analysis in the Instantaneous Frequency-Domain Using Hilbert-Huang Transform." Sensors 20, no. 22 (November 16, 2020): 6534. http://dx.doi.org/10.3390/s20226534.

Abstract:
Motion capture data are widely used in research fields such as medicine, entertainment, and industry. However, most motion research using motion capture data is carried out in the time domain. To understand the complexities of human motion, it is necessary to analyze motion data in the frequency domain. In this paper, we present a framework for transforming motions into the instantaneous frequency domain using the Hilbert-Huang transform (HHT). Empirical mode decomposition (EMD), which is part of HHT, decomposes the nonstationary and nonlinear signals captured in real-world experiments into pseudo-monochromatic signals, so-called intrinsic mode functions (IMFs). Our research reveals that multivariate EMD can decompose complicated human motions into a finite number of nonlinear modes (IMFs) corresponding to distinct motion primitives. By analyzing these decomposed motions in the Hilbert spectrum, motion characteristics can be extracted and visualized in the instantaneous frequency domain. As examples, we apply our framework to (1) a jump motion, (2) a foot-injured gait, and (3) a golf swing motion.
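
The EMD step is usually handled by a library, while the instantaneous-frequency step described here follows the standard Hilbert-transform recipe. Below is a minimal sketch (not the authors' code), assuming an IMF has already been obtained from some EMD implementation such as the PyEMD package; the function name and the 120 Hz sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf: np.ndarray, fs: float) -> np.ndarray:
    """Instantaneous frequency (Hz) of one intrinsic mode function (IMF)."""
    analytic = hilbert(imf)                      # analytic signal via the Hilbert transform
    phase = np.unwrap(np.angle(analytic))        # continuous instantaneous phase in radians
    return np.diff(phase) * fs / (2.0 * np.pi)   # phase rate converted to Hz

# Illustrative usage: a synthetic 3 Hz "mode" sampled at 120 Hz stands in for a real IMF
fs = 120.0
t = np.arange(0.0, 2.0, 1.0 / fs)
imf = np.sin(2.0 * np.pi * 3.0 * t)
freq = instantaneous_frequency(imf, fs)          # approximately 3 Hz at every sample
```
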
5

Spratt, D. "SP-0030 Against the motion: Data, data, data." Radiotherapy and Oncology 158 (May 2021): S20. http://dx.doi.org/10.1016/s0167-8140(21)06470-7.

6

Lin, I-Chen, Jen-Yu Peng, Chao-Chih Lin, and Ming-Han Tsai. "Adaptive Motion Data Representation with Repeated Motion Analysis." IEEE Transactions on Visualization and Computer Graphics 17, no. 4 (April 2011): 527–38. http://dx.doi.org/10.1109/tvcg.2010.87.

7

Jung, Jibum, et al. "Use of Human Motion Data to Train Wearable Robots." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 11, 2021): 807–11. http://dx.doi.org/10.17762/turcomat.v12i6.2100.

Abstract:
Development of wearable robots is accelerating. Walking robots mimic human behavior and must operate without accidents. Human motion data are needed to train these robots. We developed a system for extracting human motion data and displaying them graphically. We extracted motion data using a Perception Neuron motion capture system and used the Unity engine for the simulation. Several experiments were performed to demonstrate the accuracy of the extracted motion data. Of the various methods used to collect human motion data, markerless motion capture is highly inaccurate, while optical motion capture is very expensive, requiring several high-resolution cameras and a large number of markers. Motion capture using a magnetic field sensor is subject to environmental interference. Therefore, we used an inertial motion capture system. Each movement sequence involved four and was repeated 10 times. The data were stored and standardized. The motions of three individuals were compared to those of a reference person; the similarity exceeded 90% in all cases. Our rehabilitation robot accurately simulated human movements: individually tailored wearable robots could be designed based on our data. Safe and stable robot operation can be verified in advance via simulation. Walking stability can be increased using walking robots trained via machine learning algorithms.
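
The abstract does not state which similarity measure produced the 90% figures, so the following is only a sketch of one plausible choice: after the joint-angle sequences are standardized, the mean per-channel Pearson correlation against the reference person is reported as a percentage. The array shapes and the random stand-in data are assumptions.

```python
import numpy as np

def motion_similarity(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Mean per-channel Pearson correlation, expressed as a percentage.

    Both arrays are (frames, channels) joint-angle sequences of equal length,
    e.g. resampled to a common number of frames beforehand.
    """
    r = (reference - reference.mean(axis=0)) / reference.std(axis=0)
    c = (candidate - candidate.mean(axis=0)) / candidate.std(axis=0)
    corr_per_channel = (r * c).mean(axis=0)       # Pearson r for each channel
    return float(corr_per_channel.mean() * 100)   # percentage similarity

# Hypothetical usage with two equally long motion capture takes
ref = np.random.rand(500, 51)                     # 500 frames, 17 joints x 3 angles
sub = ref + 0.05 * np.random.randn(500, 51)
print(motion_similarity(ref, sub))
```
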
8

Kashima, T. "Characteristics of Ground Motions and Strong Motion Data of Buildings." Concrete Journal 50, no. 1 (2012): 19–22. http://dx.doi.org/10.3151/coj.50.19.

9

Miura, Takeshi, Takaaki Kaiga, Naho Matsumoto, Hiroaki Katsura, Takeshi Shibata, Katsubumi Tajima, and Hideo Tamamoto. "Characterization of Motion Capture Data by Motion Speed Variation." IEEJ Transactions on Electronics, Information and Systems 133, no. 4 (2013): 906–7. http://dx.doi.org/10.1541/ieejeiss.133.906.

10

Choi, Myung Geol, and Taesoo Kwon. "Motion rank: applying page rank to motion data search." Visual Computer 35, no. 2 (March 27, 2018): 289–300. http://dx.doi.org/10.1007/s00371-018-1498-6.

11

Southwick, Matthew, Zhu Mao, and Christopher Niezrecki. "Volumetric Motion Magnification: Subtle Motion Extraction from 4D Data." Measurement 176 (May 2021): 109211. http://dx.doi.org/10.1016/j.measurement.2021.109211.

12

Wang, Xiaoting, Xiangxu Meng, Chenglei Yang, and Junqing Zhang. "Data Driven Avatars Roaming in Digital Museum." International Journal of Virtual Reality 8, no. 3 (January 1, 2009): 13–18. http://dx.doi.org/10.20870/ijvr.2009.8.3.2736.

Abstract:
This paper describes a motion capture (mocap) data-driven digital museum roaming system with highly realistic walking. We focus on three main problems: the animation of avatars, path planning, and collision detection among avatars. We use only a few walking clips from mocap data to synthesize walking motions with natural transitions, in any direction and of any length. The avatars roam the digital museum along its Voronoi skeleton path, shortest path or offset path, and the Voronoi diagram is also used for collision detection. Different users can set up their own avatars and roam along their own paths. We modify the motion graph method by classifying the original mocap data and building their motion graph, which greatly improves search efficiency.
13

Riaz, Qaiser, Björn Krüger, and Andreas Weber. "Relational databases for motion data." International Journal of Innovative Computing and Applications 7, no. 3 (2016): 119. http://dx.doi.org/10.1504/ijica.2016.078723.

14

Benický, Peter, and Ladislav Jurišica. "Real Time Motion Data Preprocessing." Journal of Electrical Engineering 61, no. 4 (July 1, 2010): 247–51. http://dx.doi.org/10.2478/v10187-010-0035-2.

Abstract:
An image, and a motion picture even more so, contains a great deal of data that is redundant for image processing. The more data we have to process, the more time is needed to preprocess it, which is why we need to work with the important data only. To identify or classify motion, the data must be processed in real time.
15

Lee, W. H. K., and T. C. Shin. "Strong-Motion Instrumentation and Data." Earthquake Spectra 17, no. 1_suppl (April 2001): 5–18. http://dx.doi.org/10.1193/1.1586190.

16

Ozturk, C., J. A. Derbyshire, and E. R. McVeigh. "Estimating motion from MRI data." Proceedings of the IEEE 91, no. 10 (October 2003): 1627–48. http://dx.doi.org/10.1109/jproc.2003.817872.

17

L., Jagath J., Shalij P. R., and Haris Naduthodi. "Motion Picture Data Management System." International Journal of Business Innovation and Research 1, no. 1 (2020): 1. http://dx.doi.org/10.1504/ijbir.2020.10034213.

18

Bae, Tae Young, and Young Seog Kim. "Generation and Animation of Optimal Robot Joint Motion Data Using Captured Human Motion Data." Journal of manufacturing engineering & technology 22, no. 3_1spc (June 15, 2013): 558–65. http://dx.doi.org/10.7735/ksmte.2013.22.3.558.

19

Cousins, Jim, and Graeme H. McVerry. "Overview of strong-motion data from the Darfield earthquake." Bulletin of the New Zealand Society for Earthquake Engineering 43, no. 4 (December 31, 2010): 222–27. http://dx.doi.org/10.5459/bnzsee.43.4.222-227.

Abstract:
The Darfield earthquake of 3rd September 2010 UT and its aftershocks have yielded New Zealand’s richest set of strong-motion data since recording began in the early 1960s. Main-shock accelerograms were returned by 130 sites, ten of which had peak horizontal accelerations in the range 0.3 to 0.82g. One near-fault record, from Greendale, had a peak vertical acceleration of 1.26g. Eighteen records showed peak ground velocities exceeding 0.5 m/s, with three of them exceeding 1 m/s. The records included some with strong long-period directivity pulses, some with other long-period components that were related to a mixture of source and site effects, and some that exhibited the effects of liquefaction at their sites. There were marked differences between records on the deep alluvium of Christchurch City and the Canterbury Plains, and those on shallow stiff soil sites. The strong-motion records provide the opportunity to assess the effects of the earthquake in terms of the ground motions and their relationship to design motions. They also provide an invaluable set of near-source motions for seismological studies. Our report presents an overview of the records and some preliminary findings derived from them.
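
The quoted peak accelerations and velocities are the standard summary statistics of an accelerogram. As a rough illustration only (not the network's actual processing chain), PGA is the peak absolute acceleration and PGV the peak of the integrated, baseline-corrected trace; the synthetic input and the simple mean-removal baseline correction below are assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def peak_ground_motion(acc_g: np.ndarray, dt: float):
    """Return (PGA in g, PGV in m/s) for one accelerogram component.

    acc_g -- acceleration samples in units of g
    dt    -- sample interval in seconds
    """
    pga = float(np.max(np.abs(acc_g)))
    acc_ms2 = (acc_g - acc_g.mean()) * 9.81                  # crude baseline correction, m/s^2
    vel = cumulative_trapezoid(acc_ms2, dx=dt, initial=0.0)  # integrate to velocity
    pgv = float(np.max(np.abs(vel)))
    return pga, pgv

# Hypothetical usage with a 200 samples-per-second synthetic record
dt = 0.005
acc = 0.3 * np.sin(2.0 * np.pi * 1.5 * np.arange(0.0, 20.0, dt))   # trace in g
print(peak_ground_motion(acc, dt))
```
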
20

Li, Bo, Chao Zhang, Cheng Han, and Baoxing Bai. "Gesture Recognition Based on Kinect v2 and Leap Motion Data Fusion." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 05 (April 8, 2019): 1955005. http://dx.doi.org/10.1142/s021800141955005x.

Abstract:
This study proposed a method for integrating gesture data from multiple motion-sensing devices (i.e. one Kinect v2 and two Leap Motions) in Unity. Other depth cameras could replace the Kinect. The general steps for integrating gesture data from the motion-sensing devices were as follows. (1) A method was proposed to recognize fingertips in the depth images from the Kinect v2. (2) The coordinates observed by the three motion-sensing devices were aligned in space. First, preliminary coordinate conversion parameters were obtained through joint calibration of the three devices. Second, the observations of the two device types were fitted to those of the reference Leap Motion by the least squares method, applied twice (i.e. one Kinect and one Leap Motion in the first round, then two Leap Motions in the second round). (3) The data from the three devices were aligned in time using Unity while applying the data plan. On this basis, a human hand interacted with a virtual object in Unity. Experimental results demonstrated that the proposed method had a small recognition error for hand joints and realized natural interaction between the human hand and virtual objects.
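
The calibration is described only as joint calibration followed by least squares, so the sketch below shows just the generic least-squares rigid alignment (Kabsch/Procrustes) that such a spatial registration step typically relies on; the point counts, device names and synthetic offset in the usage example are assumptions.

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t with R @ source_i + t ~= target_i.

    source, target -- (N, 3) corresponding 3-D points observed by two devices.
    Classic Kabsch/Procrustes solution via SVD.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Hypothetical usage: fingertip positions seen by a Kinect and a Leap Motion
kinect_pts = np.random.rand(20, 3)
theta = np.deg2rad(30.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
leap_pts = kinect_pts @ true_R.T + np.array([0.10, -0.05, 0.30])
R, t = rigid_align(kinect_pts, leap_pts)              # R ~= true_R, t ~= the offset
```
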
21

Wei, Xiaopeng, Boxiang Xiao, and Qiang Zhang. "A retrieval method for human Mocap data based on biomimetic pattern recognition." Computer Science and Information Systems 7, no. 1 (2010): 99–109. http://dx.doi.org/10.2298/csis1001099w.

Abstract:
A retrieval method for human Mocap (motion capture) data based on biomimetic pattern recognition is presented in this paper. BVH rotation channels are extracted as motion features for both the retrieval instance and the motion data. Several hyper-sausage neurons are constructed according to the retrieval instance, and the domain covered by these trained hyper-sausage neurons can be considered the distribution range of one kind of motion. Using the free CMU motion database, the retrieval algorithm has been implemented and evaluated, and the experimental results are illustrated. The main contributions and limitations are also discussed.
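
A hyper-sausage neuron in biomimetic pattern recognition covers the points lying within some radius of the segment joining two training samples, so membership reduces to a point-to-segment distance test. The sketch below illustrates that general idea only, not the paper's implementation; the feature dimension, radius, and random prototypes are assumptions.

```python
import numpy as np

def point_to_segment_distance(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance from feature vector x to the segment joining samples a and b."""
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)   # assumes a != b
    return float(np.linalg.norm(x - (a + t * ab)))

def covered_by_sausages(x, prototypes, radius):
    """True if x lies inside any hyper-sausage built on consecutive prototype pairs."""
    return any(
        point_to_segment_distance(x, prototypes[i], prototypes[i + 1]) <= radius
        for i in range(len(prototypes) - 1)
    )

# Hypothetical usage: prototypes are rotation-channel feature vectors of the query motion
prototypes = [np.random.rand(60) for _ in range(5)]
query = prototypes[2] + 0.01
print(covered_by_sausages(query, prototypes, radius=0.5))
```
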
22

Yamane, Katsu, Jessica K. Hodgins, and H. Benjamin Brown. "Controlling a Motorized Marionette with Human Motion Capture Data." International Journal of Humanoid Robotics 1, no. 4 (December 2004): 651–69. http://dx.doi.org/10.1142/s0219843604000319.

Abstract:
In this paper, we present a method for controlling a motorized, string-driven marionette using motion capture data from human actors and from a traditional marionette operated by a professional puppeteer. We are interested in using motion capture data of a human actor to control the motorized marionette as a way of easily creating new performances. We use data from the hand-operated marionette both as a way of assessing the performance of the motorized marionette and to explore whether this technology could be used to preserve marionette performances. The human motion data must be extensively adapted for the marionette because its kinematic and dynamic properties differ from those of the human actor in degrees of freedom, limb length, workspace, mass distribution, sensors, and actuators. The motion from the hand-operated marionette requires less adaptation because the controls and dynamics are a closer match. Both data sets are adapted using an inverse kinematics algorithm that takes into account marker positions, joint motion ranges, string constraints, and potential energy. We also apply a feedforward controller to prevent extraneous swings of the hands. Experimental results show that our approach enables the marionette to perform motions that are qualitatively similar to the original human motion capture data.
23

Igarashi, Ko, and Seiichiro Katsura. "Motion-Data Processing and Reproduction Based on Motion-Copying System." IEEJ Journal of Industry Applications 4, no. 5 (2015): 543–49. http://dx.doi.org/10.1541/ieejjia.4.543.

24

Gu, Qin, Jingliang Peng, and Zhigang Deng. "Compression of Human Motion Capture Data Using Motion Pattern Indexing." Computer Graphics Forum 28, no. 1 (March 2009): 1–12. http://dx.doi.org/10.1111/j.1467-8659.2008.01309.x.

25

Wang, Huifang, John B. Weaver, Marvin M. Doyley, Francis E. Kennedy, and Keith D. Paulsen. "Optimized motion estimation for MRE data with reduced motion encodes." Physics in Medicine and Biology 53, no. 8 (April 3, 2008): 2181–96. http://dx.doi.org/10.1088/0031-9155/53/8/012.

26

Yasuda, H., R. Kaihara, S. Saito, and M. Nakajima. "Motion Belts: Visualization of Human Motion Data on a Timeline." IEICE Transactions on Information and Systems E91-D, no. 4 (April 1, 2008): 1159–67. http://dx.doi.org/10.1093/ietisy/e91-d.4.1159.

27

Yi, P., Q. Zhang, and X. Wei. "Laplacian coordinates-based motion transition for data-driven motion synthesis." IET Image Processing 6, no. 9 (December 1, 2012): 1331–37. http://dx.doi.org/10.1049/iet-ipr.2012.0186.

28

Zhang, Xintian, Wenjun Xie, Shujie Li, and Xiaoping Liu. "Convolutional Neural Networks Based Motion Data Optimization Networks for Leap Motion." Journal of Computer-Aided Design & Computer Graphics 33, no. 3 (March 1, 2021): 439–47. http://dx.doi.org/10.3724/sp.j.1089.2021.18425.

29

Panzhin, Andrei, and Nataliia Panzhina. "Applying primary data from permanent stations for geodynamic zoning." Izvestiya vysshikh uchebnykh zavedenii. Gornyi zhurnal, no. 1 (February 17, 2021): 54–62. http://dx.doi.org/10.21440/0536-1028-2021-1-54-62.

Abstract:
Introduction. The article focuses on present-day geodynamic motion as a basis for the geodynamic zoning of territories. Geodynamic monitoring may be regional, covering, for instance, the Russian Federation, the Ural region, or a geological rock mass, or it may be local, i.e. covering a deposit and the enclosing rock mass. Permanent stations of the Global Navigation Satellite System (GNSS) have been used as the source of data for deformation monitoring. Methodology. The study applies a method for visualizing geodynamic motions from the results of cyclic geodetic measurements, which makes it possible to single out active geological structures, blocks, and tectonic faults on reasonable grounds. Results. It is shown that, over large spatial-temporal bases, the key source of information on geodynamic motion should be not the magnitudes of the observation stations' displacement vectors but their velocities reduced to an annual cycle. It is also shown that an important characteristic of the geodynamic motion vector field is its divergence, which characterizes the degree of convergence or divergence of the vector flux. Summary. The basic principles of the method for monitoring and visualizing present-day geodynamic motions as a vector field from the results of cyclic geodetic measurements have been identified. Based on experimental data, it has been determined that present-day geodynamic motion is vortical, which is an indicator of active tectonic faulting.
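
The divergence the authors highlight can be computed directly from a gridded velocity field. A minimal sketch, assuming the station velocities have already been interpolated onto a regular grid; the grid size, spacing and units are placeholders.

```python
import numpy as np

def divergence(vx: np.ndarray, vy: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Divergence of a 2-D velocity field sampled on a regular grid.

    vx, vy -- annual-cycle velocity components (e.g. mm/year) on a (ny, nx) grid
    dx, dy -- grid spacing along x (columns) and y (rows)
    """
    dvx_dx = np.gradient(vx, dx, axis=1)   # d(vx)/dx
    dvy_dy = np.gradient(vy, dy, axis=0)   # d(vy)/dy
    return dvx_dx + dvy_dy                 # > 0: divergent flux, < 0: convergent flux

# Hypothetical usage on velocities interpolated from GNSS stations to a 50 x 50 grid
vx = np.random.randn(50, 50)
vy = np.random.randn(50, 50)
div = divergence(vx, vy, dx=1000.0, dy=1000.0)   # spacing in metres
```
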
30

Ebel, John E., and David J. Wald. "Bayesian Estimations of Peak Ground Acceleration and 5% Damped Spectral Acceleration from Modified Mercalli Intensity Data." Earthquake Spectra 19, no. 3 (August 2003): 511–29. http://dx.doi.org/10.1193/1.1596549.

Abstract:
We describe a new probabilistic method that uses observations of modified Mercalli intensity (MMI) from past earthquakes to make quantitative estimates of ground shaking parameters (i.e., peak ground acceleration, peak ground velocity, 5% damped spectral acceleration values, etc.). The method uses a Bayesian approach to make quantitative estimates of the probabilities of different levels of ground motions from intensity data given an earthquake of known location and magnitude. The method utilizes probability distributions from an intensity/ground motion data set along with a ground motion attenuation relation to estimate the ground motion from intensity. The ground motions with the highest probabilities are the ones most likely experienced at the site of the MMI observation. We test the method using MMI/ground motion data from California and published ground motion attenuation relations to estimate the ground motions for several earthquakes: 1999 Hector Mine, California (M7.1); 1988 Saguenay, Quebec (M5.9); and 1982 Gaza, New Hampshire (M4.4). In an example where the method is applied to a historic earthquake, we estimate that the peak ground accelerations associated with the 1727 (M∼5.2) earthquake at Newbury, Massachusetts, ranged from 0.23 g at Newbury to 0.06 g at Boston.
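
At its core this is a discrete Bayesian update: an attenuation relation supplies a prior over PGA bins, an empirical intensity/ground-motion data set supplies P(MMI | PGA), and the observed intensity selects the likelihood row. The sketch below is illustrative only; every number in it is made up and the paper's actual distributions are not reproduced.

```python
import numpy as np

def pga_posterior(pga_bins, prior, likelihood_table, observed_mmi):
    """Discrete Bayesian update: P(PGA | MMI) over a set of PGA bins.

    pga_bins         -- bin-centre PGA values in g
    prior            -- P(PGA) from an attenuation relation, same length as pga_bins
    likelihood_table -- array indexed [mmi, bin] of P(MMI | PGA), built from an
                        empirical intensity/ground-motion data set
    observed_mmi     -- integer MMI reported at the site
    """
    posterior = likelihood_table[observed_mmi] * prior
    return posterior / posterior.sum()

# Hypothetical numbers only
pga_bins = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
prior = np.array([0.05, 0.15, 0.35, 0.30, 0.12, 0.03])
likelihood = np.tile(np.array([0.1, 0.15, 0.25, 0.3, 0.15, 0.05]), (13, 1))  # rows = MMI 0..12
post = pga_posterior(pga_bins, prior, likelihood, observed_mmi=7)
print(pga_bins[np.argmax(post)])   # most probable PGA bin given MMI VII
```
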
31

Butil, John Carlo M., Ma Lei Frances Magsisi, John Hart Pua, Prince Kevin Se, and Ria Sagum. "The Application of Genetic Algorithm in Motion Detection for Data Storage Optimization." International Journal of Computer and Communication Engineering 3, no. 3 (2014): 199–202. http://dx.doi.org/10.7763/ijcce.2014.v3.319.

32

Wielen, Roland. "A Comprehensive Astrometric Data Base: An Instrument for Combining Earth-Bound Observations With Hipparcos Data." Symposium - International Astronomical Union 141 (1990): 483–88. http://dx.doi.org/10.1017/s0074180900087386.

Abstract:
The astrometric data bank ARIGFH will contain all relevant astrometric data on stellar positions and proper motions of stars from ground-based observations and space missions. For each star in the ARIGFH, the best available position and proper motion shall be derived. We rediscuss the accuracy of proper motions and positions of fundamental stars, resulting from a combination of data in the FK5 with the expected results from a revised HIPPARCOS mission. The FK5 data could be significantly improved even by rather degraded positions from a revised HIPPARCOS mission.
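
Deriving a "best available" position and proper motion from several catalogues is, at its simplest, an inverse-variance weighted mean per coordinate; the ARIGFH combination procedure is considerably more elaborate than this. A toy sketch with made-up numbers:

```python
import numpy as np

def combine_proper_motions(mu, sigma):
    """Inverse-variance weighted mean of independent proper-motion estimates.

    mu    -- proper-motion values (e.g. mas/yr) from different catalogues
    sigma -- their one-sigma uncertainties
    Returns the combined value and its uncertainty.
    """
    w = 1.0 / np.square(sigma)
    combined = np.sum(w * mu) / np.sum(w)
    combined_sigma = 1.0 / np.sqrt(np.sum(w))
    return combined, combined_sigma

# Hypothetical usage: a ground-based (FK5-like) and a space-based (Hipparcos-like) value
mu = np.array([12.4, 12.9])       # mas/yr
sigma = np.array([0.8, 0.3])
print(combine_proper_motions(mu, sigma))
```
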
33

Gelbard, Roy, and Israel Spiegler. "Representation and Storage of Motion Data." Journal of Database Management 13, no. 3 (July 2002): 46–63. http://dx.doi.org/10.4018/jdm.2002070104.

34

Harada, Kensuke, Natsuki Yamanobe, Weiwei Wan, Kazuyuki Nagata, Ixchel G. Ramirez-Alpizar, and Tokuo Tsuji. "Motion-Data Driven Grasp/Assembly Planner." Journal of Robotics, Networking and Artificial Life 5, no. 4 (2019): 232. http://dx.doi.org/10.2991/jrnal.k.190220.005.

35

Makarov, Valeri V., and Alexey Goldin. "Variability-Induced Motion in Kepler Data." Astrophysical Journal Supplement Series 224, no. 2 (June 2, 2016): 19. http://dx.doi.org/10.3847/0067-0049/224/2/19.

36

Liang, Peter, Yingzhen Yang, and Yang Cai. "Pattern mining from saccadic motion data." Procedia Computer Science 1, no. 1 (May 2010): 2529–38. http://dx.doi.org/10.1016/j.procs.2010.04.286.

37

Jiang, Guang, Hung-Tat Tsui, and Long Quan. "Circular motion geometry using minimal data." IEEE Transactions on Pattern Analysis and Machine Intelligence 26, no. 6 (June 2004): 721–31. http://dx.doi.org/10.1109/tpami.2004.4.

38

Li, Zhuorong, Hongchuan Yu, Hai Dang Kieu, Tung Long Vuong, and Jian Jun Zhang. "PCA-Based Robust Motion Data Recovery." IEEE Access 8 (2020): 76980–90. http://dx.doi.org/10.1109/access.2020.2989744.

39

Ramsay, J. O., K. G. Munhall, V. L. Gracco, and D. J. Ostry. "Functional data analyses of lip motion." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3402. http://dx.doi.org/10.1121/1.412516.

40

Tautges, Jochen, Arno Zinke, Björn Krüger, Jan Baumann, Andreas Weber, Thomas Helten, Meinard Müller, Hans-Peter Seidel, and Bernd Eberhardt. "Motion reconstruction using sparse accelerometer data." ACM Transactions on Graphics 30, no. 3 (May 2011): 1–12. http://dx.doi.org/10.1145/1966394.1966397.

41

Takebayashi, Yusuke, Koji Nishio, and Ken-ichi Kobori. "Similarity Retrieval for Motion Capture Data." Journal of The Institute of Image Information and Television Engineers 62, no. 9 (2008): 1420–26. http://dx.doi.org/10.3169/itej.62.1420.

42

Harada, Kensuke, Natsuki Yamanobe, Weiwei Wan, Kazuyuki Nagata, Ixchel G. Ramirez-Alpizar, and Tokuo Tsuji. "Motion-Data Driven Grasp/Assembly Planner." Proceedings of International Conference on Artificial Life and Robotics 24 (January 10, 2019): 1–4. http://dx.doi.org/10.5954/icarob.2019.ps-1.

43

Kawata, D., J. A. S. Hunt, R. J. J. Grand, A. Siebert, S. Pasetto, and M. Cropper. "Stellar Motion around Spiral Arms: Gaia Mock Data." EAS Publications Series 67-68 (2014): 247–50. http://dx.doi.org/10.1051/eas/1567044.

44

Ramsay, J. O., K. G. Munhall, V. L. Gracco, and D. J. Ostry. "Functional data analyses of lip motion." Journal of the Acoustical Society of America 99, no. 6 (June 1996): 3718–27. http://dx.doi.org/10.1121/1.414986.

45

Peterson, James. "Introducing Rotational Motion with EXIF Data." Physics Teacher 49, no. 7 (October 2011): 440–41. http://dx.doi.org/10.1119/1.3639156.

46

Balazia, Michal, and Petr Sojka. "Gait Recognition from Motion Capture Data." ACM Transactions on Multimedia Computing, Communications, and Applications 14, no. 1s (April 2, 2018): 1–18. http://dx.doi.org/10.1145/3152124.

47

Etemadpour, Ronak, and Angus Graeme Forbes. "Density-based motion." Information Visualization 16, no. 1 (July 26, 2016): 3–20. http://dx.doi.org/10.1177/1473871615606187.

Abstract:
A common strategy for encoding multidimensional data for visual analysis is to use dimensionality reduction techniques that project data from higher dimensions onto a lower-dimensional space. This article examines the use of motion to retain an accurate representation of the point density of clusters that might otherwise be lost when a multidimensional dataset is projected into a two-dimensional space. Specifically, we consider different types of density-based motion, where the magnitude of the motion is directly related to the density of the clusters. We investigate how users interpret motion in two-dimensional scatterplots and whether or not they are able to effectively interpret the point density of the clusters through motion. We conducted a series of user studies with both synthetic and real-world datasets to explore how motion can help users in completing various multidimensional data analysis tasks. Our findings indicate that for some tasks, motion outperforms the static scatterplots; circular path motions in particular give significantly better results compared to the other motions. We also found that users were easily able to distinguish clusters with different densities as long as the magnitudes of motion were above a particular threshold. Our results indicate that incorporating density-based motion into visualization analytics systems effectively enables the exploration and analysis of multidimensional datasets.
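
The abstract describes the encoding (motion magnitude tied to cluster density) but not its parameterization, so the following is only a sketch of how density-scaled circular-path offsets could be generated for a 2-D scatterplot; the k-nearest-neighbour density estimate, the radius scale, and the one-revolution-per-second speed are assumptions.

```python
import numpy as np

def circular_offsets(points, t, radius_scale=0.02, k=10):
    """Per-point circular-path offsets whose radius grows with local density.

    points       -- (N, 2) projected 2-D positions of the data points
    t            -- current animation time in seconds
    radius_scale -- maximum orbit radius in plot units (tuning constant)
    k            -- neighbourhood size used for the density estimate
    """
    # crude density estimate: inverse of the mean distance to the k nearest points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    density = 1.0 / (d[:, 1:k + 1].mean(axis=1) + 1e-9)
    radius = radius_scale * density / density.max()      # denser clusters move more
    angle = 2.0 * np.pi * t                               # one revolution per second
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)

# Hypothetical usage inside an animation loop: draw points + circular_offsets(points, t)
pts = np.random.rand(200, 2)
offsets = circular_offsets(pts, t=0.25)
```
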
48

Cha, Moo-Hyun, and Soon-Hung Han. "A Data Driven Motion Generation for Driving Simulators Using Motion Texture." Transactions of the Korean Society of Mechanical Engineers A 31, no. 7 (July 1, 2007): 747–55. http://dx.doi.org/10.3795/ksme-a.2007.31.7.747.

49

Sedmidubsky, Jan, Petr Elias, and Pavel Zezula. "Searching for variable-speed motions in long sequences of motion capture data." Information Systems 80 (February 2019): 148–58. http://dx.doi.org/10.1016/j.is.2018.04.002.

50

Li, Hai, Hwa Jen Yap, and Selina Khoo. "Motion Classification and Features Recognition of a Traditional Chinese Sport (Baduanjin) Using Sampled-Based Methods." Applied Sciences 11, no. 16 (August 19, 2021): 7630. http://dx.doi.org/10.3390/app11167630.

Abstract:
This study recognized the motions and assessed the motion accuracy of a traditional Chinese sport (Baduanjin), using data from an inertial measurement unit (IMU) system and sampled-based methods. Fifty-three participants were recruited in two batches. Motion data of participants practicing Baduanjin were captured by the IMU. By extracting features from the motion data and benchmarking against the teacher's assessment of motion accuracy, the study verifies the effectiveness of different classifiers for assessing the motion accuracy of Baduanjin. Moreover, based on the extracted features, the effectiveness of different classifiers for recognizing Baduanjin motions was verified. As a classifier, k-Nearest Neighbor (k-NN) has advantages in accuracy (more than 85%) and a short average processing time (0.008 s) during assessment. For recognizing motions, the One-dimensional Convolutional Neural Network (1D-CNN) has the highest accuracy among all verified classifiers (99.74%). The results show that, using the features extracted from the motion data captured by the IMU, selecting an appropriate classifier can effectively recognize the motions and, hence, assess the motion accuracy of Baduanjin.
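
A k-NN step like the one described maps naturally onto a standard scikit-learn pipeline. The sketch below uses random stand-in features and labels; the feature dimensionality, the value k = 5, and the train/test split are assumptions, and only the use of a k-NN classifier on extracted IMU features follows the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: one row of extracted IMU features per recorded motion,
# labels 0-7 for eight movement classes (the study's real feature set is not published here).
X = np.random.rand(400, 36)            # e.g. mean/std/range per sensor channel
y = np.random.randint(0, 8, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)   # sample-based classifier
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```
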
