Academic literature on the topic 'Motion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Motion"

1

Park, Woojin, Don B. Chaffin, and Bernard J. Martin. "A Motion Modification Algorithm for Memory-Based Human Motion Simulation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 13 (September 2002): 1172–75. http://dx.doi.org/10.1177/154193120204601335.

Full text
Abstract:
Simulating human motions in the virtual CAD world is important in the computerized ergonomic design of products and workplaces. The present study introduces a novel, memory-based approach for simulating realistic human motions and presents a motion modification algorithm. In this approach, realistic human motions are simulated by modifying existing motion samples stored in a motion database. The proposed motion modification algorithm was found to simulate human motions accurately. The memory-based motion simulation approach has advantages over existing simulation models in that it can simulate qualitatively different types of motions on a single platform, predict motions of different styles, and continually learn new motions.
APA, Harvard, Vancouver, ISO, and other styles
2

Han, Bo, Hao Peng, Minjing Dong, Yi Ren, Yixuan Shen, and Chang Xu. "AMD: Autoregressive Motion Diffusion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2022–30. http://dx.doi.org/10.1609/aaai.v38i3.27973.

Full text
Abstract:
Human motion generation aims to produce plausible human motion sequences according to various conditional inputs, such as text or audio. Despite the feasibility of existing methods in generating motion based on short prompts and simple motion patterns, they encounter difficulties when dealing with long prompts or complex motions. The challenges are two-fold: 1) the scarcity of human motion-captured data for long prompts and complex motions. 2) the high diversity of human motions in the temporal domain and the substantial divergence of distributions from conditional modalities, leading to a many-to-many mapping problem when generating motion with complex and long texts. In this work, we address these gaps by 1) elaborating the first dataset pairing long textual descriptions and 3D complex motions (HumanLong3D), and 2) proposing an autoregressive motion diffusion model (AMD). Specifically, AMD integrates the text prompt at the current timestep with the text prompt and action sequences at the previous timestep as conditional information to predict the current action sequences in an iterative manner. Furthermore, we present its generalization for X-to-Motion with “No Modality Left Behind”, enabling for the first time the generation of high-definition and high-fidelity human motions based on user-defined modality input.
APA, Harvard, Vancouver, ISO, and other styles
3

Lim, Dae-Young, Hyun-Jin Kwak, and Young-Jae Ryoo. "Motion Editing Tool to Create Dancing Motions of Humanoid Robot." International Journal of Humanoid Robotics 11, no. 04 (December 2014): 1442002. http://dx.doi.org/10.1142/s021984361442002x.

Full text
Abstract:
In this paper, a motion editing tool for creating dancing motions of a humanoid robot is proposed. To build performances or dances for a humanoid robot, a tool for creating specific motions is necessary, and generating natural motions is especially important for a dancing robot. We propose a motion editing tool and an algorithm for creating such natural motions. The tool composes a robot motion from several steps, each captured from every joint while the robot moves, and generates a continuous motion by interpolating between the steps. A 50-cm-tall humanoid robot was developed to test the proposed tool, and the robot, driven by the motion editing tool, demonstrated a natural dancing performance.
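
The interpolation step is the core of the tool described above. As a rough illustration, the sketch below expands captured joint poses ("steps") into a continuous clip; linear blending and the 4-joint example are assumptions for exposition, since the abstract does not state the exact interpolation scheme.

```python
# Minimal sketch: turn captured joint poses ("steps") into a continuous motion
# by interpolating between consecutive poses. Linear blending is assumed.
import numpy as np

def interpolate_steps(steps, frames_per_segment=30):
    """steps: (n_steps, n_joints) array of captured joint angles."""
    steps = np.asarray(steps, dtype=float)
    segments = []
    for a, b in zip(steps[:-1], steps[1:]):
        t = np.linspace(0.0, 1.0, frames_per_segment, endpoint=False)[:, None]
        segments.append((1.0 - t) * a + t * b)   # straight blend from pose a to pose b
    segments.append(steps[-1:])                   # hold the final captured pose
    return np.vstack(segments)

# Three captured poses for a hypothetical 4-joint limb, expanded to a clip.
captured = [[0, 0, 0, 0], [30, -10, 45, 5], [0, 20, 90, -15]]
clip = interpolate_steps(captured)
print(clip.shape)  # (61, 4): two 30-frame segments plus the held last pose
```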
APA, Harvard, Vancouver, ISO, and other styles
4

Kahveci, Derya, and Yusuf Yayli. "Persistent rigid-body motions on slant helices." International Journal of Geometric Methods in Modern Physics 16, no. 12 (November 29, 2019): 1950193. http://dx.doi.org/10.1142/s0219887819501937.

Full text
Abstract:
This paper reviews the persistent rigid-body motions and examines the geometric conditions of the persistence of some special frame motions on a slant helix. Unlike the Frenet–Serret motion on general helices, the Frenet–Serret motion on slant helices can be persistent. Moreover, even the adapted frame motion on slant helices can be persistent. This paper begins by explaining one-dimensional rigid-body motions and persistent motions. Then, it continues to present persistent frame motions in terms of their instantaneous twists and axode surfaces. Accordingly, the persistence of any frame motions attached to a curve can be characterized by the pitch of an instantaneous twist. This work investigates different frame motions that are persistent, namely frame motions whose instantaneous twist has a constant pitch. In particular, by expressing the connection between the pitch of Frenet–Serret motion and the pitch of adapted frame motion, it demonstrates that both the Frenet–Serret motion and the adapted frame motion are persistent on slant helices.
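
For orientation, the pitch that the abstract uses to characterize persistence is the standard screw-theoretic quantity; the definition below uses generic notation and is not taken from the paper itself.

```latex
% Pitch of an instantaneous twist with angular part \omega and linear part v
% (standard screw-theory definition; notation assumed, not from the paper):
h = \frac{\boldsymbol{\omega}\cdot\mathbf{v}}{\boldsymbol{\omega}\cdot\boldsymbol{\omega}},
\qquad \boldsymbol{\omega}\neq\mathbf{0};
\quad \text{the motion is persistent} \iff h \text{ is constant along the motion.}
```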
APA, Harvard, Vancouver, ISO, and other styles
5

Choi, Hee-Eun, and Jung-Il Jun. "Development of an Estimation Formula to Evaluate Mission Motion Suitability of Military Jackets." Applied Sciences 11, no. 19 (September 30, 2021): 9129. http://dx.doi.org/10.3390/app11199129.

Full text
Abstract:
We developed an estimation formula for mission motion suitability evaluation based on the general motion protocol to evaluate the motion suitability of a tracked vehicle crew jacket. Motion suitability evaluation was conducted for the 9 general motions and 12 mission motions among 27 tracked vehicle crew members who wore a tracked vehicle crew jacket. We conducted correlation and factor analyses on motions to extract the main mission motions, and a multiple regression analysis was performed on major mission motions using general motions as independent variables. As a result, two mission behavior factors related to ammunition stowing and boarding/entry were extracted. We selected ammunition stowing I and the boarding motion, which have the highest factor loading within each factor and the highest explanatory power (R²) for the estimation formula. Regression equations for ammunition stowing consisting of five general motions (p < 0.001) and for the boarding motion (p < 0.01) consisting of one general motion were obtained. In conclusion, the estimation formula for mission motion suitability using general motions is beneficial for enhancing the effectiveness of the evaluation of military jackets for tracked vehicle crews.
APA, Harvard, Vancouver, ISO, and other styles
6

Takahashi, Nobuko. "Effect of Spatial Configuration of Motion Signals on Motion Integration across Space." Swiss Journal of Psychology 63, no. 3 (September 2004): 173–82. http://dx.doi.org/10.1024/1421-0185.63.3.173.

Full text
Abstract:
The present study examined the effect of the spatial configuration of local signals on motion integration across space. The perceived coherency was measured for different configurations of apertures and combinations of motion directions. The results showed the following. (1) Motion integration across separate apertures is affected by the spatial configuration of the apertures. The perceived coherency was highest when the apertures were arranged symmetrically with respect to the coherent direction. (2) Even when the spatial configuration of the apertures is the same, the assignment of each local motion to each aperture has an effect, and converging local motions are integrated more than diverging local motions. (3) There is a limit to the direction difference between local motions. These results suggest that the spatial structure of the global motion behind the apertures has a considerable effect on the integration of local motions within apertures.
APA, Harvard, Vancouver, ISO, and other styles
7

Ichnowski, Jeffrey, Yahav Avigal, Vishal Satish, and Ken Goldberg. "Deep learning can accelerate grasp-optimized motion planning." Science Robotics 5, no. 48 (November 18, 2020): eabd7710. http://dx.doi.org/10.1126/scirobotics.abd7710.

Full text
Abstract:
Robots for picking in e-commerce warehouses require rapid computing of efficient and smooth robot arm motions between varying configurations. Recent results integrate grasp analysis with arm motion planning to compute optimal smooth arm motions; however, computation times on the order of tens of seconds dominate motion times. Recent advances in deep learning allow neural networks to quickly compute these motions; however, they lack the precision required to produce kinematically and dynamically feasible motions. While infeasible, the network-computed motions approximate the optimized results. The proposed method warm starts the optimization process by using the approximate motions as a starting point from which the optimizing motion planner refines to an optimized and feasible motion with few iterations. In experiments, the proposed deep learning–based warm-started optimizing motion planner reduces compute and motion time when compared to a sampling-based asymptotically optimal motion planner and an optimizing motion planner. When applied to grasp-optimized motion planning, the results suggest that deep learning can reduce the computation time by two orders of magnitude (300×), from 29 s to 80 ms, making it practical for e-commerce warehouse picking.
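
The warm-starting idea in this abstract is a general optimization pattern: a learned approximation seeds the optimizer so it only has to refine rather than search from scratch. The sketch below illustrates that pattern only; the cost function, problem size, and "network" are illustrative stand-ins, not the authors' grasp-optimized planner.

```python
# Warm-starting a trajectory optimizer with a learned (here: fake) prediction.
import numpy as np
from scipy.optimize import minimize

N_WAYPOINTS, N_JOINTS = 20, 6           # assumed discretization and arm size
start, goal = np.zeros(N_JOINTS), np.ones(N_JOINTS)

def cost(x):
    """Smoothness (squared joint accelerations) plus soft endpoint constraints."""
    traj = x.reshape(N_WAYPOINTS, N_JOINTS)
    smooth = np.sum(np.diff(traj, n=2, axis=0) ** 2)
    endpoints = np.sum((traj[0] - start) ** 2) + np.sum((traj[-1] - goal) ** 2)
    return float(smooth + 100.0 * endpoints)

def network_prediction():
    """Stand-in for the neural approximator: a straight-line joint-space guess."""
    alphas = np.linspace(0.0, 1.0, N_WAYPOINTS)[:, None]
    return ((1 - alphas) * start + alphas * goal).ravel()

cold = minimize(cost, np.zeros(N_WAYPOINTS * N_JOINTS), method="L-BFGS-B")
warm = minimize(cost, network_prediction(), method="L-BFGS-B")
print("iterations, cold start vs. warm start:", cold.nit, warm.nit)
```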
APA, Harvard, Vancouver, ISO, and other styles
8

Shu, Yong, Feng Shi, Wei Ran Duan, and Sheng Yi Li. "Compare Study between Planet Motion and Orbital Motion in CCOS." Advanced Materials Research 662 (February 2013): 595–98. http://dx.doi.org/10.4028/www.scientific.net/amr.662.595.

Full text
Abstract:
In order to get a deeper understanding of planet motion and orbital motion in CCOS (computer-controlled optical surfacing), a comparative study between them was conducted. The material removals of the two motions under the same conditions were simulated, and the removal of planet motion was higher than that of orbital motion. The figuring abilities of the two motions were also studied through the theory of cut-off frequency, and the result showed that planet motion has a higher cut-off frequency. Then two figuring runs employing the planet motion and the orbital motion were simulated. The convergence rates and polishing times of these two runs were compared, and the result showed that planet motion has a higher figuring efficiency. As planet motion has a stronger figuring ability and higher figuring efficiency, it is better to employ planet motion in CCOS to obtain a higher convergence rate and higher accuracy when fabricating high-quality mirrors.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Sharon, Jiaju Ma, Jiajun Wu, Daniel Ritchie, and Maneesh Agrawala. "Editing Motion Graphics Video via Motion Vectorization and Transformation." ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–13. http://dx.doi.org/10.1145/3618316.

Full text
Abstract:
Motion graphics videos are widely used in Web design, digital advertising, animated logos, and film title sequences to capture a viewer's attention. But editing such video is challenging because the video provides a low-level sequence of pixels and frames rather than higher-level structure such as the objects in the video with their corresponding motions and occlusions. We present a motion vectorization pipeline for converting motion graphics video into an SVG motion program that provides such structure. The resulting SVG program can be rendered using any SVG renderer (e.g., most Web browsers) and edited using any SVG editor. We also introduce a program transformation API that facilitates editing of an SVG motion program to create variations that adjust the timing, motions, and/or appearances of objects. We show how the API can be used to create a variety of effects, including retiming object motion to match a music beat, adding motion textures to objects, and collision-preserving appearance changes.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Jiaman, Jiajun Wu, and C. Karen Liu. "Object Motion Guided Human Motion Synthesis." ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–11. http://dx.doi.org/10.1145/3618333.

Full text
Abstract:
Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours.
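
The key architectural choice described above is to route prediction through hand positions as an intermediate representation between two stages, so contact constraints can be enforced explicitly. The skeleton below shows only that pipeline shape; both "stages" and the contact rule are placeholder functions, not the authors' trained diffusion models.

```python
# Structural sketch: object motion -> hand positions -> full-body output,
# with a contact constraint applied to the intermediate hand positions.
import numpy as np

def predict_hand_positions(object_motion):
    """Stage-1 stand-in: place both hands at fixed offsets from the object."""
    offset = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])  # assumed grasp offsets
    return object_motion[:, None, :] + offset                # (T, 2 hands, xyz)

def enforce_contact(hands, object_motion, max_gap=0.15):
    """Pull back any hand that drifts farther than max_gap from the object."""
    rel = hands - object_motion[:, None, :]
    gap = np.linalg.norm(rel, axis=-1, keepdims=True)
    scale = np.minimum(1.0, max_gap / np.maximum(gap, 1e-9))
    return object_motion[:, None, :] + rel * scale

def predict_full_body(hands):
    """Stage-2 stand-in: return only the hand joints of a 'pose' for brevity."""
    return hands.reshape(hands.shape[0], -1)

object_motion = np.cumsum(np.full((120, 3), 0.01), axis=0)   # object drifting forward
hands = enforce_contact(predict_hand_positions(object_motion), object_motion)
poses = predict_full_body(hands)
print(poses.shape)  # (120, 6)
```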
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Motion"

1

Leung, Johahn. "Auditory Motion in Motion." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15944.

Full text
Abstract:
This thesis describes a number of studies conducted to examine three different facets of horizontal motion processing in the auditory system: first, when a sound moved around a stationary listener (“source motion”); second, when subjects engaged in head rotations while sources remained stationary (“self motion”); and last, when subjects engaged in self motion during simultaneous source motion. Previous studies in the field have explored these issues separately, and much remains unknown. For “source motion”, a localisation-based “snapshot” psychophysical model remains the most commonly used narrative in describing this process, given the lack of clarity about the neural pathways underlying motion perception. However, it remains unclear whether (or how) such a framework can generalise to different stimulus conditions. For “self motion”, studies reported here have considered the sensory implications of head motion in the presence of a stationary sound, questioning how auditory spatial perception remains stable and exploring the perceptual benefits of dynamic localisation cues. Yet the underlying interactions between audition and the head motor plant remain unclear, particularly at faster head-turn velocities. Lastly, there is a scarcity of studies probing how listeners perceive a moving source during simultaneous self motion, even though this condition encapsulates concepts in both self and source motion and provides a unique opportunity to help frame our understanding of the sensorimotor mechanisms involved. We addressed these questions with three psychophysical experiments, and proposed a leaky integrative framework as an alternative to the “snapshot” model.
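
To make the contrast with the “snapshot” model concrete, a leaky integrative model of the kind alluded to can be written in its generic discrete-time form; the notation and parameterization below are an assumption for exposition, not taken from the thesis.

```latex
% Generic discrete-time leaky integrator (parameterization assumed, not from the thesis):
\hat{\theta}_{t+1} = (1-\lambda)\,\hat{\theta}_{t} + \lambda\,\theta^{\mathrm{obs}}_{t},
\qquad 0 < \lambda \le 1 .
```

Here \(\hat{\theta}_t\) is the running estimate of source position, \(\theta^{\mathrm{obs}}_t\) is the instantaneous localisation cue, and \(\lambda\) sets how quickly older evidence leaks away; a pure snapshot model corresponds to \(\lambda = 1\).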
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Xin. "Feature-based motion estimation and motion segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0016/MQ55493.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Mingyu. "Universal motion-based control and motion recognition." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50281.

Full text
Abstract:
In this dissertation, we propose a universal motion-based control framework that supports general functionalities on 2D and 3D user interfaces with a single integrated design. We develop a hybrid framework of optical and inertial sensing technologies to track 6-DOF (degrees of freedom) motion of a handheld device, which includes the explicit 6-DOF (position and orientation in the global coordinates) and the implicit 6-DOF (acceleration and angular speed in the device-wise coordinates). Motion recognition is another key function of the universal motion-based control and contains two parts: motion gesture recognition and air-handwriting recognition. The interaction technique for each task is carefully designed to follow a consistent mental model and ensure usability. The universal motion-based control achieves seamless integration of 2D and 3D interactions, motion gestures, and air-handwriting. Motion recognition by itself is a challenging problem. For motion gesture recognition, we propose a normalization procedure to effectively address the large in-class motion variations among users. The main contribution is the investigation of the relative effectiveness of various feature dimensions (of tracking signals) for motion gesture recognition in both user-dependent and user-independent cases. For air-handwriting recognition, we first develop a strategy to model air-handwriting with basic elements of characters and ligatures. Then, we build word-based and letter-based decoding word networks for air-handwriting recognition. Moreover, we investigate the detection and recognition of air-fingerwriting as an extension to air-handwriting. To complete the evaluation of air-handwriting, we conduct a usability study to support that air-handwriting is suitable for text input on a motion-based user interface.
APA, Harvard, Vancouver, ISO, and other styles
4

Xiao, Zhidong. "Motion capture based motion analysis and motion synthesis for human-like character animation." Thesis, Bournemouth University, 2009. http://eprints.bournemouth.ac.uk/14590/.

Full text
Abstract:
Motion capture technology is recognised as a standard tool in the computer animation pipeline. It provides detailed movement for animators; however, it also introduces problems and raises concerns for creating realistic and convincing motion for character animation. In this thesis, post-processing techniques that result in realistic motion generation are investigated. A number of techniques are introduced that are able to improve the quality of motion generated from motion capture data, especially when integrating motion transitions from different motion clips. The presented motion data reconstruction technique is able to build convincing, realistic transitions from an existing motion database, and overcomes the inconsistencies introduced by traditional motion blending techniques. It also provides a method for animators to re-use motion data more efficiently. Along with the development of motion data transition reconstruction, a motion capture data mapping technique was investigated for skeletal movement estimation. The per-frame based method provides animators with a real-time and accurate solution for a key post-processing technique. Although motion capture systems capture physically-based motion for character animation, no physical information is included in the motion capture data file. Using knowledge of biomechanics and robotics, the relevant information for the captured performer can be abstracted and a mathematical-physical model can be constructed; such information is then applied for physics-based motion data correction whenever the motion data is edited.
APA, Harvard, Vancouver, ISO, and other styles
5

Rajala, Juha. "Motion efter hjärtinfarkt 8 veckors regelbunden motions påverkan på konditionen." Thesis, Halmstad University, School of Business and Engineering (SET), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2467.

Full text
Abstract:
The aim of the study was to investigate how physical testing and training are initiated after a myocardial infarction, how they affect fitness, and which factors reinforce the effect of training. The results of the empirical investigation were intended to be compared with the findings of research carried out by others. The study describes myocardial infarction in terms of anatomy and pathology and gives an overview of various research results in the field. A pilot study with one test subject, comprising a pre-test, a training period, and a post-test, was carried out. The large number of tests that have been performed speaks clearly: physical training after a myocardial infarction is beneficial and improves recovery considerably. Training improves quality of life in the form of greater energy and endurance in everyday life. The more varied the form of exercise, combined with relaxation and breathing-technique exercises, the better the results. The pilot study performed here points in the same direction.

APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Yanrui. "Synthesizing motion sequences from sample motions to satisfy environmental constraints." Scholarly Commons, 2014. https://scholarlycommons.pacific.edu/uop_etds/228.

Full text
Abstract:
Complex, realistic human motion sequences satisfying environmental constraints can be created by motion capture, which is a reliable way to reproduce human motions. However, motion capture data is difficult to modify in order to obtain variant motion sequences for multiple tasks. In this thesis, a system for synthesizing complex, realistic human motion sequences based on a small group of sample motions to satisfy constraints is proposed. Methods are proposed for the system to preprocess raw motion capture data to create sample motions that can be easily modified to meet specific requirements, while maintaining the subtleties of the original motion capture data. Methods for the system to scan user-input constraints, choose the best sample motion, and synthesize the motion sequence based on the route affected by the constraint are also proposed. Each generated motion piece is blended with the default motion, so a motion sequence composed of several pieces of motion based on constraints is generated. Artifacts that arise during motion generation are identified and handled properly. Experimental results show that the system can create cyclical sample motions from motion capture data, generate motion pieces based on environmental constraints, and synthesize complex, realistic human motion sequences.
APA, Harvard, Vancouver, ISO, and other styles
7

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ44103.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20154.

Full text
Abstract:
When the relative velocity between the different objects in a scene and the camera is relatively large compared with the camera's exposure time, the resulting image contains a distortion called motion blur. In the past, many algorithms have been proposed for estimating the relative velocity from one or, more often, several images. The motion blur is generally considered an extra source of noise and is eliminated, or assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the optical flow map using only the information encoded in the motion blur. This thesis presents an algorithm that estimates the velocity vector of an image patch using the motion blur alone, in two steps. The information used for the estimation of the velocity vectors is extracted from the frequency domain, and the most computationally expensive operation is the Fast Fourier Transform that takes the image from the spatial to the frequency domain. Consequently, the complexity of the algorithm is bounded by this operation at O(n log n). The first step uses the response of a family of steerable filters applied to the log of the power spectrum in order to calculate the orientation of the velocity vector. The second step uses a technique called cepstral analysis: the log power spectrum is treated as another signal, and its inverse Fourier transform is examined in order to estimate the magnitude of the velocity vector. Experiments have been conducted on artificially blurred images and on real-world data, and an error analysis of these results is also presented.
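
The cepstral step lends itself to a compact demonstration. Below is a hedged, one-dimensional sketch of that idea only: the blur direction is assumed known, the steerable-filter orientation step is omitted, and this is not the thesis's implementation. The log power spectrum of the blurred signal is treated as a signal itself, and its inverse transform shows a pronounced dip near the blur extent.

```python
# 1-D cepstral estimation of blur extent, as an illustrative stand-in for the
# second step described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
L = 15                                      # true blur extent in samples
signal = rng.standard_normal(512)           # stand-in for an image scanline
psf = np.zeros(512)
psf[:L] = 1.0 / L                           # box point-spread function of length L
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

# Treat the log power spectrum as a signal and inverse-transform it (cepstrum).
log_ps = np.log(np.abs(np.fft.fft(blurred)) ** 2 + 1e-12)
cepstrum = np.real(np.fft.ifft(log_ps))

# The box blur imprints spectral dips spaced 1/L apart, which appear as a
# strong negative peak in the cepstrum near quefrency L.
estimate = int(np.argmin(cepstrum[2:100])) + 2
print("true blur extent:", L, "cepstral estimate:", estimate)
```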
APA, Harvard, Vancouver, ISO, and other styles
9

Niehorster, Diederick Christian. "The perception of object motion during self-motion." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/196466.

Full text
Abstract:
When we stand still and do not move our eyes and head, the motion of an object in the world or the absence thereof is directly given by the motion or quiescence of the retinal image. Self-motion through the world, however, complicates this retinal image. During self-motion, the whole retinal image undergoes coherent global motion, called optic flow. Self-motion therefore causes the retinal motion of objects moving in the world to be confounded by a motion component due to self-motion. How then do we perceive the motion of an object in the world when we ourselves are also moving? Although non-visual information about self-motion, such as that provided by efference copies of motor commands and vestibular stimulation, might play a role in this ability, it has recently been shown that the brain possesses a purely visual mechanism that underlies scene-relative object motion perception during self-motion. In the flow parsing hypothesis developed by Rushton and Warren (2005; Warren & Rushton, 2007; 2009b), the brain uses its sensitivity to optic flow to detect and globally remove retinal motion due to self-motion and recover the scene-relative motion of objects. Research into this perceptual ability has so far been of a qualitative nature. In this thesis, I therefore develop a retinal motion nulling paradigm to measure the gain with which the flow parsing mechanism uses the optic flow to remove the self-motion component from an object’s retinal motion. I use this paradigm to investigate how accurate scene-relative object motion perception during self-motion can be when based on only visual information; whether this flow parsing process depends on a percept of the direction of self-motion; and the tuning of flow parsing, i.e., how it is modulated by changes in various stimulus aspects. The results reveal that although adding monocular or binocular depth information to the display to precisely specify the moving object’s 3D position in the scene improved the accuracy of flow parsing, the flow parsing gain never reached the level required by the scene geometry. Furthermore, the flow parsing gain was lower at higher eccentricities from the focus of expansion in the flow field and was strongly modulated by changes in the motion angle between the self-motion and object motion components in the retinal motion of the moving object, the speeds of these components, and the density of the flow field. Lastly, flow parsing was not affected by illusory changes in the perceived direction of self-motion. In conclusion, visual information alone is not sufficient for accurate perception of scene-relative object motion during self-motion. Furthermore, flow parsing takes the 3D position of the moving object in the scene into account and is not a uniform global subtraction process. The observed tuning characteristics are different from those of local perceived motion interactions, providing evidence that flow parsing is a separate process from these local motion interactions. Finally, flow parsing does not depend on a prior percept of self-motion direction and instead directly uses the input retinal motion to construct percepts of scene-relative object motion during self-motion.
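
In symbols, the gain that the nulling paradigm measures can be written as follows; the notation here is an assumption for exposition, not taken verbatim from the thesis.

```latex
% Decomposition of the object's retinal motion and the flow parsing gain g
% (notation assumed for exposition, not from the thesis):
\mathbf{r}_{\mathrm{obj}} = \mathbf{m}_{\mathrm{scene}} + \mathbf{m}_{\mathrm{self}},
\qquad
\hat{\mathbf{m}}_{\mathrm{scene}} = \mathbf{r}_{\mathrm{obj}} - g\,\mathbf{m}_{\mathrm{self}} .
```

Here \(\mathbf{r}_{\mathrm{obj}}\) is the object's retinal motion, \(\mathbf{m}_{\mathrm{self}}\) is the component of that retinal motion due to self-motion, \(\hat{\mathbf{m}}_{\mathrm{scene}}\) is the perceived scene-relative motion, and \(g\) is the flow parsing gain; \(g = 1\) is the complete subtraction required by the scene geometry, and the experiments summarized above find \(g < 1\).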
APA, Harvard, Vancouver, ISO, and other styles
10

Gehl, Gregory E. "Assessing motion induced interruptions using a motion platform." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37631.

Full text
Abstract:
Approved for public release; distribution is unlimited
Human performance contributes to total system performance. As human performance decreases, total system performance decreases while lifecycle costs increase. In a fiscally constrained environment, Human Systems Integration (HSI) seeks to assure human performance to reduce operating costs. This thesis seeks to develop a model for ship design in relation to Motion Induced Interruptions (MIIs). The model is based on the premise that MIIs adversely affect specific domains of HSI. Future ship design considerations that mitigate MII occurrences can save the Navy money spent on human injury and system degradation. The thesis begins with a historical overview of MII theory and development and its interactions with the domains of HSI. An MII prediction model was developed using data acquired from an experiment using a motion-based platform that emulates ship motion. Quantitative data were analyzed from 21 subjects who underwent 32 trials. The multiple regression analysis used two independent variables, period and lateral acceleration, with MII incidence as the response variable. Logistic regression considered two more independent variables that addressed individual differences. Data analysis revealed that acceleration, period, and human balance were statistically significant. The proposed multiple regression model accounted for 77% of the variance in MII forecasting. This thesis lays the foundation for future quantitative analysis of interactions between MIIs and accelerations or periods in different axes. Additionally, it provides an initial model that predicts conditions with a high incidence of MIIs, which can ultimately lead to a viable design tool for HSI practitioners and ship designers.
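
As a concrete illustration of the kind of model the abstract describes, the sketch below fits a logistic regression of MII incidence on lateral acceleration and motion period. The data, variable ranges, and coefficients are synthetic placeholders, not the thesis's 21-subject experimental data.

```python
# Logistic regression of MII incidence on platform motion variables (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
accel = rng.uniform(0.5, 3.0, n)          # lateral acceleration, m/s^2 (assumed range)
period = rng.uniform(4.0, 12.0, n)        # motion period, s (assumed range)
# Assumed ground truth: MIIs become likelier with higher acceleration and shorter period.
logit = -4.0 + 2.0 * accel - 0.3 * period
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(np.column_stack([accel, period]), y)
print("coefficients (accel, period):", model.coef_[0], "intercept:", model.intercept_[0])
print("P(MII) at 2.5 m/s^2, 6 s period:", model.predict_proba([[2.5, 6.0]])[0, 1])
```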
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Motion"

1

Quick, Dave, 1950-, ed. Motion motion kinetic art. Salt Lake City: Peregrine Smith Books, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bernards, Alexandra Tatjana. Acinetobacter: Non-motile, but in motion. [Leiden]: University of Leiden, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sauvain, Philip Arthur. Motion. New York: New Discovery Books, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lauw, Darlene. Motion. St. Catharines, Ont: Crabtree Pub., 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bone, Dan. Motion. Toronto: GTK Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lawson, Jennifer E., 1959-, ed. Motion. Winnipeg: Peguis Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Manolis, Kay. Motion. Minneapolis, MN: Bellwether Media, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Schofield, Jem, and Brendan Boykin, eds. Motion. Berkeley: Peachpit Press, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Segal, Zef, and Bram Vannieuwenhuyze, eds. Motion in Maps, Maps in Motion. NL Amsterdam: Amsterdam University Press, 2020. http://dx.doi.org/10.5117/9789463721103.

Full text
Abstract:
Motion in Maps, Maps in Motion argues that the mapping of stories, movement, and change should not be understood as an innovation of contemporary cartography, but rather as an important aspect of human cartography with a longer history than might be assumed. The authors in this collection reflect upon the main characteristics and evolutions of story and motion mapping, from the figurative news and history maps that were mass-produced in early modern Europe, through the nineteenth- and twentieth-century flow maps that appeared in various atlases, up to the digital and interactive motion and personalized maps that are created today. Rather than presenting a clear and homogeneous history from the past up until the present, this book offers a toolbox for understanding and interpreting the complex interplays and links between narrative, motion, and maps.
APA, Harvard, Vancouver, ISO, and other styles
10

Herr, David F. Motion practice. 2nd ed. Boston: Little, Brown, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Motion"

1

Yao, Fulai, and Yaming Yao. "Most Commonly Used Actuator–Motor." In Efficient Energy-Saving Control and Optimization for Multi-Unit Systems, 83–114. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4492-3_6.

Full text
Abstract:
In energy-saving control, a large number of electric motors are used for load distribution and device switching. Most rotary, linear, and other forms of mechanical motion are driven by motors; it can be said that motors are the most commonly used actuators in the field of electrical engineering and automation. After being connected to a suitable power supply, a rotary motor generates rotary motion and a linear motor produces linear motion.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhou, Yi-Tong, and Rama Chellappa. "Motion Stereo—Lateral Motion." In Artificial Neural Networks for Computer Vision, 44–62. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-2834-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Yi-Tong, and Rama Chellappa. "Motion Stereo—Longitudinal Motion." In Artificial Neural Networks for Computer Vision, 63–82. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2834-9_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Silver, Brian L. "Motion." In The Physical Chemistry of MEMBRANES, 231–47. Dordrecht: Springer Netherlands, 1985. http://dx.doi.org/10.1007/978-94-010-9628-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wellner, Marcel. "Motion." In Elements of Physics, 13–44. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-3860-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Keighley, H. J. P., F. R. McKim, A. Clark, and M. J. Harrison. "Motion." In Mastering Physics, 28–43. London: Macmillan Education UK, 1986. http://dx.doi.org/10.1007/978-1-349-86062-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Elazar, Michael. "Motion." In Honoré Fabri and the Concept of Impetus: A Bridge between Paradigms, 17–29. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1605-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Keighley, H. J. P., F. R. McKim, A. Clark, and M. J. Harrison. "Motion." In Mastering Physics, 28–43. London: Macmillan Education UK, 1986. http://dx.doi.org/10.1007/978-1-349-08849-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jähne, Bernd. "Motion." In Digital Image Processing, 253–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-11565-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Baldwin, Dennis, Jamie Macdonald, Keith Peters, Jon Steer, David Tudury, Jerome Turner, Steve Webster, Alex White, and Todd Yard. "Motion." In Flash MX Studio, 7–39. Berkeley, CA: Apress, 2002. http://dx.doi.org/10.1007/978-1-4302-5166-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Motion"

1

"Proceedings Workshop on Motion and Video Computing (MOTION 2002)." In Proceedings Workshop on Motion and Video Computing (MOTION 2002). IEEE, 2002. http://dx.doi.org/10.1109/motion.2002.1182205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Author index." In Proceedings Workshop on Motion and Video Computing (MOTION 2002). IEEE, 2002. http://dx.doi.org/10.1109/motion.2002.1182246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Salzmann, Tim, Marco Pavone, and Markus Ryll. "Motron: Multimodal Probabilistic Human Motion Forecasting." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Molla, Eray, and Ronan Boulic. "Singularity Free Parametrization of Human Limbs." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shoulson, Alexander, Max L. Gilbert, Mubbasir Kapadia, and Norman I. Badler. "An Event-Centric Planning Approach for Dynamic Real-Time Narrative." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Normoyle, Aline, Jeremy B. Badler, Teresa Fan, Norman I. Badler, Vinicius J. Cassol, and Soraia R. Musse. "Evaluating perceived trust from procedurally animated gaze." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jones, Ben, Jovan Popovic, James McCann, Wilmot Li, and Adam Bargteil. "Dynamic Sprites." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ennis, Cathy, and Arjan Egges. "Perception of Approach and Reach in Combined Interaction Tasks." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ennis, Cathy, Ludovic Hoyet, Arjan Egges, and Rachel McDonnell. "Emotion Capture." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gerszewski, Dan, Ladislav Kavan, Peter-Pike Sloan, and Adam W. Bargteil. "Enhancements to Model-reduced Fluid Simulation." In Motion. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522634.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Motion"

1

Doerry, Armin W. Smoothing Motion Estimates for Radar Motion Compensation. Office of Scientific and Technical Information (OSTI), July 2017. http://dx.doi.org/10.2172/1369525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Moore, Madeline, and Audrean Jurgens. Linear Motion. Ames: Iowa State University, Digital Repository, November 2015. http://dx.doi.org/10.31274/itaa_proceedings-180814-1196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boldovici, John A. Simulator Motion. Fort Belvoir, VA: Defense Technical Information Center, September 1992. http://dx.doi.org/10.21236/ada257683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Goulet, Christine, Yousef Bozorgnia, Norman Abrahamson, Nicolas Kuehn, Linda Al Atik, Robert Youngs, Robert Graves, and Gail Atkinson. Central and Eastern North America Ground-Motion Characterization - NGA-East Final Report. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, December 2018. http://dx.doi.org/10.55461/wdwr4082.

Full text
Abstract:
This document is the final project report of the Next Generation Attenuation for Central and Eastern North America (CENA) project (NGA-East). The NGA-East objective was to develop a new ground-motion characterization (GMC) model for the CENA region. The GMC model consists of a set of new ground-motion models (GMMs) for the median and standard deviation of ground motions and their associated weights to be used with logic trees in probabilistic seismic hazard analyses (PSHA). NGA-East is a large multidisciplinary project coordinated by the Pacific Earthquake Engineering Research Center (PEER) at the University of California. The project has two components: (1) a set of scientific research tasks, and (2) a model-building component following the framework of the "Senior Seismic Hazard Analysis Committee (SSHAC) Level 3" process (Budnitz et al. 1997; NRC 2012). Component (2) is built on the scientific results of component (1) of the NGA-East project. This report documents the tasks under component (2) of the project. Under component (1) of NGA-East, several scientific issues were addressed, including: (a) development of a new database of ground-motion data recorded in CENA; (b) development of a regionalized ground-motion map for CENA; (c) definition of the reference site condition; (d) simulations of ground motions based on different methodologies; and (e) development of numerous GMMs for CENA. The scientific tasks of NGA-East were all documented in a series of PEER reports. The scope of component (2) of NGA-East was to develop the complete GMC. This component was designed as a SSHAC Level 3 study with the goal of capturing the center, body, and range of the technically defensible interpretations of ground motions in light of the available data and models. The SSHAC process involves four key tasks: evaluation, integration, formal review by the Participatory Peer Review Panel (PPRP), and documentation (this report). Key tasks documented in this report include review and evaluation of the empirical ground-motion database, the regionalization of ground motions, and the screening of sets of candidate GMMs. These are followed by the development of new median and standard deviation GMMs, the development of new analysis tools for quantifying the epistemic uncertainty in ground motions, and the documentation of implementation guidelines for the complete GMC for PSHA computations. Appendices include further documentation of the relevant SSHAC process and additional supporting technical documentation of numerous sensitivity-analysis results. The PEER reports documenting component (1) of NGA-East are also considered "attachments" to the current report and are all available online on the PEER website (https://peer.berkeley.edu/). The final NGA-East GMC model includes a set of 17 GMMs defined for 24 ground-motion intensity measures, applicable to CENA in the moment-magnitude range of 4.0 to 8.2 and covering distances up to 1500 km. Standard deviation models are also provided for site-specific analysis (single-station standard deviation) and for general PSHA applications (ergodic standard deviation). Adjustment factors are provided for consideration of source-depth effects and hanging-wall effects, as well as for hazard computations at sites in the Gulf Coast region.
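
Since the report's end product is a weighted set of GMMs feeding a logic tree, a minimal sketch of how such weights enter an exceedance calculation may help orient readers. The three branches, weights, and scenario values below are made-up placeholders, not the NGA-East models or weights.

```python
# Weighted logic tree of GMM branches for a single earthquake scenario:
# each branch gives a lognormal ground-motion distribution, and branch
# exceedance probabilities are combined with the logic-tree weights.
import numpy as np
from scipy.stats import norm

# (median PGA in g, log-standard deviation, logic-tree weight) per branch
branches = [(0.12, 0.65, 0.3), (0.15, 0.70, 0.5), (0.10, 0.60, 0.2)]
test_levels = np.array([0.05, 0.1, 0.2, 0.4])   # PGA levels of interest, g

def prob_exceed(median, sigma_ln, levels):
    """P(PGA > level) for one GMM branch under a lognormal ground-motion model."""
    return 1.0 - norm.cdf((np.log(levels) - np.log(median)) / sigma_ln)

weighted = sum(w * prob_exceed(m, s, test_levels) for m, s, w in branches)
for lvl, p in zip(test_levels, weighted):
    print(f"P(PGA > {lvl:.2f} g) for this scenario = {p:.3f}")
```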
APA, Harvard, Vancouver, ISO, and other styles
5

Abrahamson, Norman, Nicolas Kuehn, Zeynep Gulerce, Nicholas Gregor, Yousef Bozorgnia, Grace Parker, Jonathan Stewart, et al. Update of the BC Hydro Subduction Ground-Motion Model using the NGA-Subduction Dataset. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, June 2018. http://dx.doi.org/10.55461/oycd7434.

Full text
Abstract:
An update to the BCHydro ground-motion model for subduction earthquakes has been developed using the 2018 PEER NGA-SUB dataset. The selected subset includes over 70,000 recordings from 1880 earthquakes. The update modifies the BCHydro model to include regional terms for the VS30 scaling, large distance (linear R) scaling, and constant terms, which is consistent with the regionalization approach used in the NGA-W2 ground-motion models. A total of six regions were considered: Cascadia, Central America, Japan, New Zealand, South America, and Taiwan. Region-independent terms are used for the small-magnitude scaling, geometrical spreading, depth to top of rupture (ZTOR) scaling, and slab/interface scaling. The break in the magnitude scaling at large magnitudes for slab earthquakes is based on the thickness of the slab and is subduction-zone dependent. The magnitude scaling for large magnitudes is constrained based on finite-fault simulations as given in the 2016 BCHydro model. Nonlinear site response is also constrained to be the same as in the 2016 BCHydro model. The sparse ground-motion data from Cascadia show a factor of 2–3 lower ground motions than for other regions. Without a sound physical basis for this large reduction, the Cascadia model is adjusted to be consistent with the average from all regions for the central range of the data: M = 6.5, R = 100 km, VS30 = 400 m/sec. Epistemic uncertainty is included using the scaled backbone approach, with high and low models based on the range of average ground motions for the different regions. For the Cascadia region, the ground-motion model is considered applicable to distances up to 1000 km, magnitudes of 5.0 to 9.5, and periods from 0 to 10 sec. The intended use of this update is to provide an improved ground-motion model for consideration in the development of updated U.S. national hazard maps. This updated ground-motion model will be superseded by the NGA-SUB ground-motion models when they are completed.
APA, Harvard, Vancouver, ISO, and other styles
6

Sinha, Pawan. Pattern Motion Perception: Feature Tracking or Integration of Component Motions? Fort Belvoir, VA: Defense Technical Information Center, October 1994. http://dx.doi.org/10.21236/ada295653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Reichert, Harris, and Vaze. PR-185-0351-R04 Welding Processes for Small to Medium Diameter Pipelines - Improved Root Pass Techniques. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2006. http://dx.doi.org/10.55274/r0011070.

Full text
Abstract:
The purpose of this portion of the project was to develop improved root pass automation (Task 3) and process control systems (Task 4) for girth welding of pipeline butt joints. To achieve these goals, a motion control module was developed to control a track-mounted Serimer-Dasa welding tractor (bug). The module includes a software program that can communicate with a motion controller and control all motions of the axes.
APA, Harvard, Vancouver, ISO, and other styles
8

Sperling, George. Visual Motion Perception. Fort Belvoir, VA: Defense Technical Information Center, January 1989. http://dx.doi.org/10.21236/ada210994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Turano, Kathleen A. Visual Motion Perception. Fort Belvoir, VA: Defense Technical Information Center, March 2000. http://dx.doi.org/10.21236/ada375117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Teng, Lee. Coupled Transverse Motion. Office of Scientific and Technical Information (OSTI), January 1989. http://dx.doi.org/10.2172/1151484.

Full text
APA, Harvard, Vancouver, ISO, and other styles