Journal articles on the topic 'Motion Estimation'

Consult the top 50 journal articles for your research on the topic 'Motion Estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Qin, Yongming, Makoto Kumon, and Tomonari Furukawa. "Estimation of a Human-Maneuvered Target Incorporating Human Intention." Sensors 21, no. 16 (August 6, 2021): 5316. http://dx.doi.org/10.3390/s21165316.

Abstract:
This paper presents a new approach for estimating, from observations, the motion state of a target that is maneuvered by an unknown human. To improve the estimation accuracy, the proposed approach associates recurring motion behaviors with human intentions and models the association as an intention-pattern model. The human intentions relate to labels of continuous states; the motion patterns characterize the change of continuous states. In the preprocessing, an Interacting Multiple Model (IMM) estimation technique is used to infer the intentions and extract motions, which eventually construct the intention-pattern model. Once the intention-pattern model has been constructed, the proposed approach incorporates it into estimation using any state estimator, including the Kalman filter. The proposed approach not only estimates the mean more accurately using the human intention but also updates the covariance more precisely. The performance of the proposed approach was investigated through the estimation of a human-maneuvered multirotor. The results first indicated the effectiveness of the proposed approach for constructing the intention-pattern model, and then demonstrated its advantage in state estimation over the conventional technique without intention incorporation.
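For readers unfamiliar with the estimation machinery such IMM/Kalman-filter approaches build on, the following is a minimal constant-velocity Kalman filter sketch in Python/NumPy. It is a generic illustration, not the authors' intention-pattern method; the noise values and variable names are hypothetical.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter (illustrative only).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.25]])                   # measurement noise covariance (assumed)

def kf_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.19, 0.32, 0.38]:       # synthetic position measurements
    x, P = kf_step(x, P, np.array([z]))
print(x)                                  # estimated [position, velocity]
```

An IMM estimator runs a bank of such filters with different motion models in parallel and mixes their estimates by model probability; the abstract uses IMM in preprocessing to infer intentions before incorporating the resulting intention-pattern model into a single estimator.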
2

Choi, Hee-Eun, and Jung-Il Jun. "Development of an Estimation Formula to Evaluate Mission Motion Suitability of Military Jackets." Applied Sciences 11, no. 19 (September 30, 2021): 9129. http://dx.doi.org/10.3390/app11199129.

Abstract:
We developed an estimation formula for mission motion suitability evaluation, based on a general motion protocol, to evaluate the motion suitability of a tracked vehicle crew jacket. Motion suitability evaluation was conducted for 9 general motions and 12 mission motions among 27 tracked vehicle crew members who wore a tracked vehicle crew jacket. We conducted correlation and factor analyses on the motions to extract the main mission motions, and a multiple regression analysis was performed on the major mission motions using the general motions as independent variables. As a result, two mission behavior factors related to ammunition stowing and boarding/entry were extracted. We selected ammunition stowing I and the boarding motion, which have the highest factor loadings in each factor and the highest explanatory power (R²) of the estimation formula. Regression equations were obtained for ammunition stowing, consisting of five general motions (p < 0.001), and for the boarding motion, consisting of one general motion (p < 0.01). In conclusion, the estimation formula for mission motion suitability using general motions is beneficial for enhancing the effectiveness of the evaluation of military jackets for tracked vehicle crews.
3

Li, Jiaman, C. Karen Liu, and Jiajun Wu. "Ego-Body Pose Estimation via Ego-Head Pose Estimation." AI Matters 9, no. 2 (June 2023): 20–23. http://dx.doi.org/10.1145/3609468.3609473.

Abstract:
Estimating 3D human motion from an egocentric video, which records the environment viewed from the first-person perspective with a front-facing monocular camera, is critical to applications in VR/AR. However, naively learning a mapping between egocentric videos and full-body human motions is challenging for two reasons. First, modeling this complex relationship is difficult; unlike reconstructing motion from third-person videos, the human body is often out of view of an egocentric video. Second, learning this mapping requires a large-scale, diverse dataset containing paired egocentric videos and the corresponding 3D human poses. Creating such a dataset requires meticulous instrumentation for data acquisition, and unfortunately, such a dataset does not currently exist. As such, existing works have only worked on small-scale datasets with limited motion and scene diversity (yuan20183d; yuan2019ego; luo2021dynamics).
4

Wang, Kaihong, Kumar Akash, and Teruhisa Misu. "Learning Temporally and Semantically Consistent Unpaired Video-to-Video Translation through Pseudo-Supervision from Synthetic Optical Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2477–86. http://dx.doi.org/10.1609/aaai.v36i3.20148.

Abstract:
Unpaired video-to-video translation aims to translate videos between a source and a target domain without the need for paired training data, making it more feasible for real applications. Unfortunately, the translated videos generally suffer from temporal and semantic inconsistency. To address this, many existing works adopt spatiotemporal consistency constraints that incorporate temporal information based on motion estimation. However, inaccuracies in the estimation of motion deteriorate the quality of the guidance towards spatiotemporal consistency, which leads to unstable translation. In this work, we propose a novel paradigm that regularizes spatiotemporal consistency by synthesizing motions in input videos with generated optical flow instead of estimating them. Therefore, the synthetic motion can be applied in the regularization paradigm to keep motions consistent across domains without the risk of errors in motion estimation. Thereafter, we utilize our unsupervised recycle loss and unsupervised spatial loss, guided by the pseudo-supervision provided by the synthetic optical flow, to accurately enforce spatiotemporal consistency in both domains. Experiments show that our method is versatile in various scenarios and achieves state-of-the-art performance in generating temporally and semantically consistent videos. Code is available at: https://github.com/wangkaihong/Unsup_Recycle_GAN/.
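To make the warping idea behind such flow-based consistency constraints concrete, here is a small illustrative sketch (not the authors' implementation) that warps a previous frame with a given dense flow field and measures photometric consistency; the function names and the synthetic example are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(prev_frame, flow):
    # prev_frame: (H, W) grayscale image; flow: (H, W, 2) displacements as (dy, dx),
    # mapping each current-frame pixel back to its position in the previous frame.
    h, w = prev_frame.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([grid_y + flow[..., 0], grid_x + flow[..., 1]])
    return map_coordinates(prev_frame, coords, order=1, mode='nearest')

def temporal_consistency(curr_frame, prev_frame, flow):
    # Mean absolute photometric difference between the current frame and
    # the flow-warped previous frame; small values indicate consistent motion.
    return np.mean(np.abs(curr_frame - warp_with_flow(prev_frame, flow)))

# Synthetic example: a 2-pixel horizontal shift is perfectly explained by the flow.
prev = np.tile(np.arange(64, dtype=np.float64), (64, 1))
flow = np.zeros((64, 64, 2)); flow[..., 1] = 2.0
curr = warp_with_flow(prev, flow)
print(temporal_consistency(curr, prev, flow))  # ~0.0
```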
5

Schutten, R. J., A. Pelagotti, and G. De Haan. "Layered motion estimation." Philips Journal of Research 51, no. 2 (January 1998): 253–67. http://dx.doi.org/10.1016/s0165-5817(98)00010-2.

6

Popescu, Mihaela, Dennis Mronga, Ivan Bergonzani, Shivesh Kumar, and Frank Kirchner. "Experimental Investigations into Using Motion Capture State Feedback for Real-Time Control of a Humanoid Robot." Sensors 22, no. 24 (December 15, 2022): 9853. http://dx.doi.org/10.3390/s22249853.

Abstract:
Despite recent advances, humanoid robots still face significant difficulties in performing locomotion tasks. Among the key challenges that must be addressed to achieve robust bipedal locomotion are dynamically consistent motion planning, feedback control, and state estimation of such complex systems. In this paper, we investigate the use of an external motion capture system to provide state feedback to an online whole-body controller. We present experimental results with the humanoid robot RH5 performing two different whole-body motions: squatting with both feet in contact with the ground and balancing on one leg. We compare the execution of these motions using state feedback from (i) an external motion tracking system and (ii) an internal state estimator based on an inertial measurement unit (IMU), forward kinematics, and contact sensing. It is shown that state-of-the-art motion capture systems can be successfully used in the high-frequency feedback control loop of humanoid robots, providing an alternative in cases where state estimation is not reliable.
7

Phan, Gia-Hoang, Clint Hansen, Paolo Tommasino, Asif Hussain, Domenico Formica, and Domenico Campolo. "A Complementary Filter Design on SE(3) to Identify Micro-Motions during 3D Motion Tracking." Sensors 20, no. 20 (October 16, 2020): 5864. http://dx.doi.org/10.3390/s20205864.

Abstract:
In 3D motion capture, multiple methods have been developed to optimize the quality of the captured data. While certain technologies, such as inertial measurement units (IMUs), are mostly suitable for 3D orientation estimation at relatively high frequencies, other technologies, such as marker-based motion capture, are more suitable for 3D position estimation at a lower frequency range. In this work, we introduce a complementary filter that complements 3D motion capture data with high-frequency acceleration signals from an IMU. While the local optimization reduces the error of the motion tracking, the additional accelerations can help to detect micro-motions that are useful when dealing with high-frequency human motions or robotic applications. The combination with high-frequency accelerometers improves the accuracy of the data and helps to overcome limitations of motion capture when micro-motions are not traceable with a 3D motion tracking system. In our experimental evaluation, we demonstrate the improvements of the motion capture results during translational, rotational, and combined movements.
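As a rough illustration of the sensor-fusion idea (here reduced to a single axis, unlike the paper's SE(3) formulation), the sketch below blends drift-free but band-limited mocap positions with integrated accelerometer data; the gain value and signal names are hypothetical.

```python
import numpy as np

def complementary_fuse(p_mocap, accel, dt, alpha=0.98):
    """Fuse low-bandwidth mocap positions with high-bandwidth accelerations (1D sketch).

    p_mocap: (N,) marker-based positions (drift-free, misses fast micro-motion)
    accel:   (N,) accelerations along the same axis (captures micro-motion, drifts)
    alpha:   weight on the inertial prediction; (1 - alpha) pulls back toward mocap
    """
    p_est = np.zeros_like(p_mocap)
    p_est[0], v_est = p_mocap[0], 0.0
    for k in range(1, len(p_mocap)):
        v_est += accel[k] * dt                                   # integrate acceleration
        p_pred = p_est[k - 1] + v_est * dt                       # inertial prediction
        p_est[k] = alpha * p_pred + (1.0 - alpha) * p_mocap[k]   # blend (suppresses drift)
    return p_est
```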
8

Ozawa, Takehiro, Yusuke Sekikawa, and Hideo Saito. "Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation." Sensors 22, no. 3 (January 20, 2022): 773. http://dx.doi.org/10.3390/s22030773.

Abstract:
Event cameras are bio-inspired sensors with a high dynamic range and temporal resolution. This property enables motion estimation from textures with repeating patterns, which is difficult to achieve with RGB cameras. Event-camera motion estimation is therefore expected to be applied to vehicle position estimation. Contrast maximization is an existing method that can be used for event-camera motion estimation by capturing road surfaces. However, contrast maximization tends to fall into a local solution when estimating three-dimensional motion, which makes correct estimation difficult. To solve this problem, we propose a method for motion estimation by optimizing contrast in the bird's-eye view space. Instead of performing three-dimensional motion estimation, we reduce the dimensionality to two-dimensional motion estimation by transforming the event data to a bird's-eye view using a homography calculated from the event camera position. This transformation mitigates the problem of the loss function becoming non-convex, which occurs in conventional methods. As a quantitative experiment, we created event data using a car simulator and evaluated our motion estimation method, showing an improvement in accuracy and speed. In addition, we conducted estimation from real event data and evaluated the results qualitatively, again showing an improvement in accuracy.
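The bird's-eye-view step is a plain projective warp of event coordinates; the following small sketch (illustrative only, with a homography assumed to be known) shows how such a transformation is applied before accumulating and contrast-maximizing the warped events.

```python
import numpy as np

def to_birds_eye(events_xy, H):
    """Map event pixel coordinates into a ground-plane (bird's-eye) view with a homography.

    events_xy: (N, 2) event (x, y) pixel coordinates
    H:         (3, 3) image-plane to ground-plane homography (assumed known)
    """
    ones = np.ones((events_xy.shape[0], 1))
    homog = np.hstack([events_xy, ones])        # homogeneous coordinates
    mapped = (H @ homog.T).T
    return mapped[:, :2] / mapped[:, 2:3]       # perspective division

# Contrast maximization would then accumulate the warped events into an image for
# candidate 2D motions and pick the motion that maximizes the image contrast.
```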
9

Klomp, Sven, Marco Munderloh, and Jörn Ostermann. "Decoder-Side Motion Estimation Assuming Temporally or Spatially Constant Motion." ISRN Signal Processing 2011 (June 20, 2011): 1–10. http://dx.doi.org/10.5402/2011/956372.

Abstract:
In current video coding standards, the encoder exploits temporal redundancies within the video sequence by performing block-based motion-compensated prediction. However, the motion estimation is performed only at the encoder, and the motion vectors have to be coded explicitly into the bit stream. Recent research has shown that the compression efficiency can be improved by also estimating the motion at the decoder. This paper gives a detailed description of a decoder-side motion estimation architecture that assumes temporally constant motion and compares the proposed motion compensation algorithm with an alternative interpolation method. The overall rate reduction for this approach is almost 8% compared to H.264/MPEG-4 Part 10 (AVC). Furthermore, an extensive comparison with the assumption of spatially constant motion, as used in decoder-side motion vector derivation, is given. A new combined approach of both algorithms is proposed that leads to a 13% bit rate reduction on average.
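The block-based motion-compensated prediction these codecs rely on reduces, at its core, to block matching. Below is a minimal exhaustive-search sketch (a generic illustration assuming grayscale NumPy frames, not the paper's decoder-side architecture).

```python
import numpy as np

def full_search_sad(cur_block, ref_frame, top, left, search_range=8):
    """Find the motion vector (dy, dx) minimizing the sum of absolute differences (SAD)
    between cur_block and candidate blocks of ref_frame around (top, left)."""
    bh, bw = cur_block.shape
    h, w = ref_frame.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > h or x + bw > w:
                continue                      # candidate falls outside the reference frame
            cand = ref_frame[y:y + bh, x:x + bw]
            sad = np.abs(cur_block.astype(np.int64) - cand.astype(np.int64)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```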
10

Chen, Yi Wei, and Yung Lung Lee. "An Improved IMM Estimator Combined with Intelligent Input Estimation Technique for Tracking Maneuvering Target." Applied Mechanics and Materials 764-765 (May 2015): 664–70. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.664.

Abstract:
The estimation performance of the interactive multiple model (IMM) estimator for tracking a maneuvering target is influenced by the target motion models and the filters applied. An improved IMM estimation algorithm combined with an intelligent input estimation technique is proposed in this study. The target motion models include the constant velocity (CV) model and the modified Singer acceleration model. Intelligent fuzzy weighted input estimation (IFWIE) is used to compute the acceleration input for the modified Singer acceleration model, in addition to the application of the standard Kalman filter (KF). The combination of KF and IFWIE can estimate the target motion state precisely, and the proposed method is compared with the common IMM estimator. The simulation results show that the improved IMM estimator has superior estimation performance to the common IMM estimator, especially when the target changes its acceleration violently. The utilization of IFWIE allows the improved IMM estimator to estimate the acceleration input effectively.
11

Ma, Wenjing, Jianguang Zhao, and Guangquan Zhu. "Estimation on Human Motion Posture using Improved Deep Reinforcement Learning." Journal of Computers (電腦學刊) 34, no. 4 (August 2023): 097–110. http://dx.doi.org/10.53106/199115992023083404008.

Abstract:
Estimating human motion posture can provide important data for intelligent monitoring systems, human-computer interaction, motion capture, and other fields. However, traditional human motion posture estimation algorithms struggle to achieve fast estimation of human motion posture. To address the problems of traditional algorithms, in this paper we propose an estimation algorithm for human motion posture using improved deep reinforcement learning. First, a double deep Q network is constructed to improve the deep reinforcement learning algorithm. The improved deep reinforcement learning algorithm is used to locate the human motion posture coordinates and improve the effectiveness of bone point calibration. Second, generative adversarial networks for human motion posture analysis are constructed to realize automatic recognition and analysis of human motion posture. Finally, using preset human motion posture labels combined with an undirected graph model of the human body, the human motion posture estimation is completed, and a precise estimation algorithm for human motion posture is realized. Experiments are performed on the MPII Human Pose data set and the HiEve data set. The results show that the proposed algorithm has higher positioning accuracy of joint nodes. The recognition of bone joint points is better, by about 1.45% on average. The average posture accuracy is up to 98.2%, and the average joint point similarity is high. Therefore, it is proved that the proposed method has high application value in human-computer interaction, human motion capture, and other fields.
12

Chauvet, Antoine, Yoshihiro Sugaya, Tomo Miyazaki, and Shinichiro Omachi. "Optical Flow-Based Fast Motion Parameters Estimation for Affine Motion Compensation." Applied Sciences 10, no. 2 (January 20, 2020): 729. http://dx.doi.org/10.3390/app10020729.

Abstract:
This study proposes a lightweight solution for estimating affine parameters in affine motion compensation. Most current approaches start with an initial approximation based on standard motion estimation, which only estimates the translation parameters. From there, iterative methods are used to find the best parameters, but they require a significant amount of time. The proposed method aims to speed up the process in two ways: first, by skipping the evaluation of affine prediction when it is likely to bring no encoding efficiency benefit, and second, by estimating better initial values for the iteration process. We use the optical flow between the reference picture and the current picture to quickly estimate the best encoding mode and obtain a better initial estimate. We achieve a reduction of half in encoding time over the reference when compared to the state of the art, with a loss in efficiency below 1%.
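A common way to obtain such an initial affine estimate from optical flow is a least-squares fit of the six affine parameters to the flow vectors; the sketch below is a generic illustration under that assumption, not the authors' encoder integration, and the sample flow values are hypothetical.

```python
import numpy as np

def fit_affine_from_flow(xs, ys, us, vs):
    """Least-squares fit of a 6-parameter affine motion model to optical flow samples.

    The model is u = a*x + b*y + e and v = c*x + d*y + f, where (u, v) is the flow
    at pixel (x, y). Returns (a, b, e, c, d, f).
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    px, *_ = np.linalg.lstsq(A, us, rcond=None)   # a, b, e
    py, *_ = np.linalg.lstsq(A, vs, rcond=None)   # c, d, f
    return np.concatenate([px, py])

# Hypothetical flow samples consistent with a small rotation plus a translation
xs = np.array([0.0, 10.0, 0.0, 10.0]); ys = np.array([0.0, 0.0, 10.0, 10.0])
us = -0.01 * ys + 1.0
vs = 0.01 * xs + 0.5
print(fit_affine_from_flow(xs, ys, us, vs))       # ~[0, -0.01, 1.0, 0.01, 0, 0.5]
```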
13

Yaacob, M. S., H. Jamaluddin, and K. C. Wong. "Fuzzy logic approach for estimation of longitudinal aircraft parameters." Aeronautical Journal 106, no. 1065 (November 2002): 585–94. http://dx.doi.org/10.1017/s0001924000011593.

Abstract:
The use of a rule-based fuzzy logic system for estimating the stability and control derivatives for longitudinal aircraft motion is proposed. The capabilities of the fuzzy logic system in estimating both the short-period and the phugoid modes of motion are explored. The flight data used in the estimation process were generated using the three nonlinear longitudinal equations of motion for a small remotely piloted vehicle, with all the aerodynamic coefficients obtained from wind-tunnel tests. The preferred method of perturbation of the aircraft elevator for data collection is also highlighted. The stability and control derivatives are estimated as the change in the aerodynamic force or moment due to a small variation in one of the motion or control variables about its nominal value while the rest of the variables are held constant at their respective nominal values. The changes in the aerodynamic force and moment are predicted using the fuzzy logic system. The results show that the fuzzy logic system has good potential as an alternative tool for parameter estimation from flight data. The proposed method does not require any guesses of the initial values of the flight parameters.
14

Yaacob, M. S., H. Jamaluddin, and K. C. Wong. "Fuzzy logic approach for estimation of longitudinal aircraft parameters." Aeronautical Journal 106, no. 1065 (November 2002): 585–94. http://dx.doi.org/10.1017/s0001924000018248.

Abstract:
The use of a rule-based fuzzy logic system for estimating the stability and control derivatives for longitudinal aircraft motion is proposed. The capabilities of the fuzzy logic system in estimating both the short-period and the phugoid modes of motion are explored. The flight data used in the estimation process were generated using the three nonlinear longitudinal equations of motion for a small remotely piloted vehicle, with all the aerodynamic coefficients obtained from wind-tunnel tests. The preferred method of perturbation of the aircraft elevator for data collection is also highlighted. The stability and control derivatives are estimated as the change in the aerodynamic force or moment due to a small variation in one of the motion or control variables about its nominal value while the rest of the variables are held constant at their respective nominal values. The changes in the aerodynamic force and moment are predicted using the fuzzy logic system. The results show that the fuzzy logic system has good potential as an alternative tool for parameter estimation from flight data. The proposed method does not require any guesses of the initial values of the flight parameters.
15

Chaitanya E., B., and A. Mohan E. "An EDDR Architecture for Motion Estimation Testing Applications." International Journal of Scientific Research 1, no. 7 (June 1, 2012): 73–75. http://dx.doi.org/10.15373/22778179/dec2012/29.

16

Bellers, E. B., and G. de Haan. "Advanced Motion Estimation and Motion Compensated Deinterlacing." SMPTE Journal 106, no. 11 (November 1997): 777–86. http://dx.doi.org/10.5594/j17643.

17

Hosur, P. I. "Motion adaptive search for fast motion estimation." IEEE Transactions on Consumer Electronics 49, no. 4 (November 2003): 1330–40. http://dx.doi.org/10.1109/tce.2003.1261237.

18

Gupta, Naresh C., and Laveen N. Kanal. "3-D motion estimation from motion field." Artificial Intelligence 78, no. 1-2 (October 1995): 45–86. http://dx.doi.org/10.1016/0004-3702(95)00031-3.

19

Deesomsuk, Teerachai, and Tospol Pinkaew. "Effectiveness of Vehicle Weight Estimation from Bridge Weigh-in-Motion." Advances in Civil Engineering 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/312034.

Abstract:
The effectiveness of vehicle weight estimation from a bridge weigh-in-motion system is studied. The measured bending moments of the instrumented bridge under the passage of a vehicle are numerically simulated and used as the input for the vehicle weight estimations. Two weight estimation methods, assuming constant magnitudes and time-varying magnitudes of vehicle axle loads, are investigated. The appropriate number of bridge elements and sampling frequency are considered. The effectiveness, in terms of estimation accuracy, is evaluated and compared under various parameters of the vehicle-bridge system. The effects of vehicle speed, vehicle configuration, vehicle weight, and bridge surface roughness on the accuracy of the estimated vehicle weights are intensively investigated. Based on the obtained results, vehicle speed, surface roughness level, and measurement error seem to have stronger effects on the weight estimation accuracy than other parameters. In general, both methods can provide quite accurate weight estimation of the vehicle. Comparing the two, although the weight estimation method assuming constant magnitudes of axle loads is faster, the method assuming time-varying magnitudes of axle loads can provide axle load histories and exhibits more accurate weight estimations of the vehicle for almost all of the considered cases.
20

Ou, Gwo-Bin, and Robert B. Herrmann. "Estimation Theory for Peak Ground Motion." Seismological Research Letters 61, no. 2 (April 1, 1990): 99–107. http://dx.doi.org/10.1785/gssrl.61.2.99.

Abstract:
The application of estimation theory for predicting peak ground motion is critically examined in order to be more precise in its application. Estimation theory relates peak ground motion to the duration and spectrum of the signal. Using vertical-component data from the Eastern Canada Telemetered Network, at distances of 100–1000 km, we find that the duration must be defined by the interval over which the cumulative energy of the main signal increases linearly, here between 5% and 75% of the cumulative power. This duration, when used with the spectra within this window, adequately replicates observed peak motions. It differs significantly from the duration used by Herrmann (1985) and Toro and McGuire (1987) beyond 500 km. The estimation theory is extended to estimate confidence limits on the peak motion. Finally, the relation between various spectral level estimators (linear, logarithmic, and RMS) is considered to point out the need for consistency in spectral level estimation using smooth models.
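The 5%-75% cumulative-energy definition of duration is straightforward to compute from a digitized record; the following is a small illustrative sketch (the sampling rate and thresholds are parameters, and the normalization convention is an assumption consistent with the abstract).

```python
import numpy as np

def energy_duration(signal, fs, lo=0.05, hi=0.75):
    """Duration (seconds) over which the normalized cumulative energy of the signal
    rises from `lo` to `hi` of its total, i.e. the roughly linear growth interval."""
    energy = np.cumsum(np.asarray(signal, dtype=np.float64) ** 2)
    energy /= energy[-1]                      # normalized cumulative energy
    i_lo = np.searchsorted(energy, lo)        # first sample reaching the lower threshold
    i_hi = np.searchsorted(energy, hi)        # first sample reaching the upper threshold
    return (i_hi - i_lo) / fs
```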
21

R, Vinothkanna. "A Survey on Novel Estimation Approach of Motion Controllers for Self-Driving Cars." December 2020 2, no. 4 (January 13, 2021): 211–19. http://dx.doi.org/10.36548/jei.2020.4.003.

Abstract:
Motion planning is one of the challenging tasks in autonomous driving. During motion planning, trajectory prediction is computed by Gaussian propagation. Recently, localization uncertainty has been estimated within a Gaussian framework; this estimation suffers from real-time constraints on the distribution of Global Positioning System (GPS) error. This research article compares novel motion planning methods and identifies the suitable estimation algorithm for two different real-time traffic conditions: realistic unusual traffic on the one hand, and a complex target on the other. A real-time platform is used to evaluate the several estimation methods for motion planning. The contribution of this article is to compare novel estimation methods in these two real-time environments and to identify the better estimation method for each. The suggested idea is that the autonomous vehicle's uncertainty is estimated by a modified version of action-based coarse trajectory planning. The suggested framework permits the planner to avoid complex and unusual traffic (uncertainty conditions) efficiently. The proposed case studies help in choosing an effective framework for complex surrounding environments.
22

Weber, Daniel, Clemens Gühmann, and Thomas Seel. "RIANN—A Robust Neural Network Outperforms Attitude Estimation Filters." AI 2, no. 3 (September 17, 2021): 444–63. http://dx.doi.org/10.3390/ai2030028.

Abstract:
Inertial-sensor-based attitude estimation is a crucial technology in various applications, from human motion tracking to autonomous aerial and ground vehicles. Application scenarios differ in characteristics of the performed motion, presence of disturbances, and environmental conditions. Since state-of-the-art attitude estimators do not generalize well over these characteristics, their parameters must be tuned for the individual motion characteristics and circumstances. We propose RIANN, a ready-to-use, neural network-based, parameter-free, real-time-capable inertial attitude estimator, which generalizes well across different motion dynamics, environments, and sampling rates, without the need for application-specific adaptations. We gather six publicly available datasets of which we exploit two datasets for the method development and the training, and we use four datasets for evaluation of the trained estimator in three different test scenarios with varying practical relevance. Results show that RIANN outperforms state-of-the-art attitude estimation filters in the sense that it generalizes much better across a variety of motions and conditions in different applications, with different sensor hardware and different sampling frequencies. This is true even if the filters are tuned on each individual test dataset, whereas RIANN was trained on completely separate data and has never seen any of these test datasets. RIANN can be applied directly without adaptations or training and is therefore expected to enable plug-and-play solutions in numerous applications, especially when accuracy is crucial but no ground-truth data is available for tuning or when motion and disturbance characteristics are uncertain. We made RIANN publicly available.
23

Orchard, Garrick, and Ralph Etienne-Cummings. "Bioinspired Visual Motion Estimation." Proceedings of the IEEE 102, no. 10 (October 2014): 1520–36. http://dx.doi.org/10.1109/jproc.2014.2346763.

24

Pérez-Rúa, Juan-Manuel, Tomas Crivelli, and Patrick Pérez. "Object-guided motion estimation." Computer Vision and Image Understanding 153 (December 2016): 88–99. http://dx.doi.org/10.1016/j.cviu.2016.05.005.

25

Tian, Tina Yu, and Mubarak Shah. "Motion estimation and segmentation." Machine Vision and Applications 9, no. 1 (January 1996): 32–42. http://dx.doi.org/10.1007/bf01246637.

26

Koc, U. V., and K. J. Ray Liu. "DCT-based motion estimation." IEEE Transactions on Image Processing 7, no. 7 (July 1998): 948–65. http://dx.doi.org/10.1109/83.701146.

27

Sworder, D. D., and J. E. Boyd. "Improved motion-mode estimation." IEEE Transactions on Aerospace and Electronic Systems 41, no. 3 (July 2005): 1052–56. http://dx.doi.org/10.1109/taes.2005.1541449.

28

Choi, Sun-Young, Soo-Ik Chae, and Young Serk Shim. "Rate-optimised motion estimation." Electronics Letters 36, no. 14 (2000): 1196. http://dx.doi.org/10.1049/el:20000874.

29

Tian, Tina Yu, and Mubarak Shah. "Motion estimation and segmentation." Machine Vision and Applications 9, no. 1 (July 1, 1996): 32–42. http://dx.doi.org/10.1007/s001380050026.

30

Ding, Jia-rui, Zhong-jian Li, and Jin-wen An. "Robust global motion estimation." Optoelectronics Letters 3, no. 4 (July 2007): 303–7. http://dx.doi.org/10.1007/s11801-007-6171-7.

31

Pan, Zhaoqing, Jianjun Lei, Yajuan Zhang, and Fu Lee Wang. "Adaptive Fractional-Pixel Motion Estimation Skipped Algorithm for Efficient HEVC Motion Estimation." ACM Transactions on Multimedia Computing, Communications, and Applications 14, no. 1 (January 16, 2018): 1–19. http://dx.doi.org/10.1145/3159170.

32

Park, Chun-Su. "Level-set-based motion estimation algorithm for multiple reference frame motion estimation." Journal of Visual Communication and Image Representation 24, no. 8 (November 2013): 1269–75. http://dx.doi.org/10.1016/j.jvcir.2013.08.008.

33

Chen, Haiwen, Jin Chen, Zhuohuai Guan, Yaoming Li, Kai Cheng, and Zhihong Cui. "Stereovision-Based Ego-Motion Estimation for Combine Harvesters." Sensors 22, no. 17 (August 25, 2022): 6394. http://dx.doi.org/10.3390/s22176394.

Abstract:
Ego-motion estimation is a foundational capability for autonomous combine harvesters, supporting high-level functions such as navigation and harvesting. This paper presents a novel approach for estimating the motion of a combine harvester from a sequence of stereo images. The proposed method starts by tracking a set of 3D landmarks that are triangulated from stereo-matched features. Six-degree-of-freedom (DoF) ego-motion is obtained by minimizing the reprojection error of those landmarks on the current frame. Then, local bundle adjustment is performed to refine structure (i.e., landmark positions) and motion (i.e., keyframe poses) jointly in a sliding window. Both processes are encapsulated in a two-threaded architecture to achieve real-time performance. Our method utilizes a stereo camera, which enables estimation at true scale and easy startup of the system. Quantitative tests were performed on real agricultural scene data, comprising several different working paths, in terms of estimation accuracy and real-time performance. The experimental results demonstrated that our proposed perception system achieved favorable accuracy, outputting the pose at 10 Hz, which is sufficient for online ego-motion estimation for combine harvesters.
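The reprojection error minimized in such stereo visual odometry pipelines has a simple pinhole-camera form; the sketch below computes those residuals for a known pose. It is a generic illustration assuming a calibrated camera, not the authors' implementation.

```python
import numpy as np

def reprojection_residuals(landmarks, observations, K, R, t):
    """Pixel residuals of 3D landmarks reprojected into the current frame.

    landmarks:    (N, 3) triangulated points in the reference frame
    observations: (N, 2) matched pixel coordinates in the current image
    K: (3, 3) intrinsics; R, t: reference-to-camera rotation and translation
    """
    cam = (R @ landmarks.T).T + t           # points expressed in the camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]       # perspective division to pixel coordinates
    return observations - proj              # the 6-DoF pose minimizes the squared residuals
```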
34

Ma, Chao, Chun Jie Qiao, Yue Ke Wang, and Shen Zhao. "A New Method for Target Motion Analysis." Applied Mechanics and Materials 336-338 (July 2013): 2354–58. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.2354.

Abstract:
The paper proposes a new method for estimating a target's trajectory, assumed to be linear and uniform, based on observations of its speed and bearings. After introducing the new method based on the assumed model, the paper analyzes the relative error of range estimation caused by the relative errors of the speed and bearing estimates. The result of target motion analysis (TMA) is optimized by linearizing the model and using a Kalman filter. A pond test shows that the relative error of range estimation calculated by this method is less than 10⁻¹.
35

Lehtonen, Eero, Jarmo Teuho, Juho Koskinen, Mojtaba Jafari Tadi, Riku Klén, Reetta Siekkinen, Joaquin Rives Gambin, Tuija Vasankari, and Antti Saraste. "A Respiratory Motion Estimation Method Based on Inertial Measurement Units for Gated Positron Emission Tomography." Sensors 21, no. 12 (June 9, 2021): 3983. http://dx.doi.org/10.3390/s21123983.

Abstract:
We present a novel method for estimating respiratory motion using inertial measurement units (IMUs) based on microelectromechanical systems (MEMS) technology. As an application of the method we consider the amplitude gating of positron emission tomography (PET) imaging, and compare the method against a clinically used respiration motion estimation technique. The presented method can be used to detect respiratory cycles and estimate their lengths with state-of-the-art accuracy when compared to other IMU-based methods, and is the first based on commercial MEMS devices, which can estimate quantitatively both the magnitude and the phase of respiratory motion from the abdomen and chest regions. For the considered test group consisting of eight subjects with acute myocardial infarction, our method achieved the absolute breathing rate error per minute of 0.44 ± 0.23 1/min, and the absolute amplitude error of 0.24 ± 0.09 cm, when compared to the clinically used respiratory motion estimation technique. The presented method could be used to simplify the logistics related to respiratory motion estimation in PET imaging studies, and also to enable multi-position motion measurements for advanced organ motion estimation.
36

Roda-Sales, Alba, Joaquín L. Sancho-Bru, and Margarita Vergara. "Studying kinematic linkage of finger joints: estimation of kinematics of distal interphalangeal joints during manipulation." PeerJ 10 (October 4, 2022): e14051. http://dx.doi.org/10.7717/peerj.14051.

Abstract:
The recording of hand kinematics during product manipulation is challenging, and certain degrees of freedom such as distal interphalangeal (DIP) joints are difficult to record owing to limitations of the motion capture systems used. DIP joint kinematics could be estimated by taking advantage of its kinematic linkage with proximal interphalangeal (PIP) and metacarpophalangeal joints. This work analyses this linkage both in free motion conditions and during the performance of 26 activities of daily living. We have studied the appropriateness of different types of linear regressions (several combinations of independent variables and constant coefficients) and sets of data (free motion and manipulation data) to obtain equations to estimate DIP joints kinematics both in free motion and manipulation conditions. Errors that arise when estimating DIP joint angles assuming linear relationships using the equations obtained both from free motion data and from manipulation data are compared for each activity of daily living performed. Estimation using manipulation condition equations implies a lower mean absolute error per task (from 5.87° to 13.67°) than using the free motion ones (from 9° to 17.87°), but it fails to provide accurate estimations when passive extension of DIP joints occurs while PIP is flexed. This work provides evidence showing that estimating DIP joint angles is only recommended when studying free motion or grasps where both joints are highly flexed and when using linear relationships that consider only PIP joint angles.
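The PIP-only linear relationship the study evaluates can be illustrated with a few lines of ordinary least squares; the calibration angles below are hypothetical and the helper names are not from the paper.

```python
import numpy as np

def fit_dip_from_pip(pip_deg, dip_deg):
    """Fit DIP = k * PIP + c by ordinary least squares (PIP-only linear predictor)."""
    A = np.column_stack([pip_deg, np.ones_like(pip_deg)])
    (k, c), *_ = np.linalg.lstsq(A, dip_deg, rcond=None)
    return k, c

# Hypothetical calibration data: flexion angles in degrees
pip = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
dip = np.array([1.0, 10.0, 20.0, 29.0, 41.0, 50.0, 61.0])
k, c = fit_dip_from_pip(pip, dip)
print(k * 50.0 + c)   # estimated DIP angle for a PIP flexion of 50 degrees
```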
37

Otsuka, Taku, and Yuko Yotsumoto. "Partially Separable Aspects of Spatial and Temporal Estimations in Virtual Navigation as Revealed by Adaptation." i-Perception 13, no. 1 (January 2022): 204166952210788. http://dx.doi.org/10.1177/20416695221078878.

Abstract:
Recent studies claim that estimating the magnitude of the spatial and temporal aspects of one's self-motion shows similar characteristics, suggesting shared processing mechanisms between these two dimensions. While the estimation of other magnitude dimensions, such as size, number, and duration, exhibits negative aftereffects after prolonged exposure to the stimulus, it remains to be elucidated whether this could occur similarly in the estimation of the distance travelled and time elapsed during one's self-motion. We sought to fill this gap by examining the effects of adaptation on distance and time estimation using a virtual navigation task. We found that a negative aftereffect occurred in the distance reproduction task after repeated exposure to self-motion with a fixed travel distance. No such aftereffect occurred in the time reproduction task after repeated exposure to self-motion with a fixed elapsed time. Further, the aftereffect in distance reproduction occurred only when the distance of the adapting stimulus was fixed, suggesting that it did not reflect adaptation to time, which varied with distance. The estimation of spatial and temporal aspects of self-motion is thus processed by partially separable mechanisms, with the distance estimation being similar to the estimation of other magnitude dimensions.
38

Zhao, Jijun, Lishuang Liu, Zhongcheng Wei, Chunhua Zhang, Wei Wang, and Yongjian Fan. "R-DEHM: CSI-Based Robust Duration Estimation of Human Motion with WiFi." Sensors 19, no. 6 (March 22, 2019): 1421. http://dx.doi.org/10.3390/s19061421.

Abstract:
As wireless sensing has developed, wireless behavior recognition has become a promising research area, in which human motion duration is one of the basic and significant parameters for measuring human behavior. At present, however, the duration estimation of human motion leveraging wireless signals has not been considered. In this paper, we propose a novel system for robust duration estimation of human motion (R-DEHM) with WiFi in the area of interest. To achieve this, we first collect channel state information (CSI) measurements on commodity WiFi devices and extract robust features from the CSI amplitude. Then, the back propagation neural network (BPNN) algorithm is introduced for detection by seeking a cutting line of the features for the different states, i.e., moving human presence and absence. Instead of directly estimating the duration of human motion, we transform the complex and continuous duration estimation problem into simple and discrete human motion detection by segmenting the CSI sequences. Furthermore, R-DEHM is implemented and evaluated in detail. The results of our experiments show that R-DEHM achieves human motion detection and duration estimation with an average detection rate for human motion of more than 94% and an average error rate for duration estimation of less than 8%, respectively.
39

Zhang, Qinghui, Junqiu Li, Zhenping Qiang, and Libo He. "A Preprocess Method of External Disturbance Suppression for Carotid Wall Motion Estimation Using Local Phase and Orientation of B-Mode Ultrasound Sequences." BioMed Research International 2019 (November 21, 2019): 1–15. http://dx.doi.org/10.1155/2019/6547982.

Abstract:
Estimating the motions of the common carotid artery wall plays a very important role in early diagnosis of carotid atherosclerotic disease. However, the disturbances caused by either the instability of the probe operator or the breathing of subjects degrade the estimation accuracy of arterial wall motion when performing speckle tracking on B-mode ultrasound images. In this paper, we propose a global registration method to suppress external disturbances before motion estimation. The local vector images, transformed from B-mode images, were used for registration. To take advantage of both the structural information from the local phase and the geometric information from the local orientation, we propose a confidence coefficient to combine the two. Furthermore, we alter the speckle reducing anisotropic diffusion filter to improve the performance of disturbance suppression. We compared this method with schemes that extract wall displacement directly from B-mode or phase images. The results show that this scheme can effectively suppress the disturbances and significantly improve the estimation accuracy.
40

Hu, Juqi, Subhash Rakheja, and Youmin Zhang. "Tire–road friction coefficient estimation based on designed braking pressure pulse." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 7 (January 5, 2021): 1876–91. http://dx.doi.org/10.1177/0954407020983580.

Abstract:
Knowledge of the tire–road friction coefficient (TRFC) is valuable for autonomous vehicle control and the design of active safety systems. This paper investigates TRFC estimation on the basis of longitudinal vehicle dynamics. A two-stage TRFC estimation scheme is proposed that limits the disturbances to the vehicle motion. A sequence of braking pressure pulses is designed in the first stage to identify the minimal pulse pressure required for reliable estimation of TRFC with minimal interference with the vehicle motion. This stage also provides a qualitative estimate of TRFC. In the second stage, the tire normal force and slip ratio are directly calculated from the measured signals, and a modified force observer based on the wheel rotational dynamics is developed for estimating the tire braking force. A constrained unscented Kalman filter (CUKF) algorithm is subsequently proposed to identify the TRFC for achieving rapid convergence and enhanced estimation accuracy. The effectiveness of the proposed methodology is evaluated through CarSim™-MATLAB/Simulink™ co-simulations considering vehicle motions on high-, medium-, and low-friction roads at different speeds. The results suggest that the proposed two-stage methodology can yield an accurate estimation of the road friction with a relatively low effect on the vehicle speed.
41

Tai, Shen-Chen. "Fast motion estimation algorithm using motion adaptive search." Optical Engineering 47, no. 3 (March 1, 2008): 037007. http://dx.doi.org/10.1117/1.2899019.

42

Su, J. K., and R. M. Mersereau. "Motion estimation methods for overlapped block motion compensation." IEEE Transactions on Image Processing 9, no. 9 (2000): 1509–21. http://dx.doi.org/10.1109/83.862628.

43

Choi, K. S., and S. J. Ko. "Hierarchical motion estimation algorithm using reliable motion adoption." Electronics Letters 46, no. 12 (2010): 835. http://dx.doi.org/10.1049/el.2010.0889.

44

Fejes, Sándor, and Larry S. Davis. "Detection of Independent Motion Using Directional Motion Estimation." Computer Vision and Image Understanding 74, no. 2 (May 1999): 101–20. http://dx.doi.org/10.1006/cviu.1999.0751.

45

Karunakar, A. K., and M. M. Manohara Pai. "Motion-compensated temporal filtering with optimized motion estimation." Journal of Real-Time Image Processing 4, no. 4 (September 1, 2009): 329–38. http://dx.doi.org/10.1007/s11554-009-0129-x.

46

Choi, Young-Ju, Dong-San Jun, Won-Sik Cheong, and Byung-Gyu Kim. "Design of Efficient Perspective Affine Motion Estimation/Compensation for Versatile Video Coding (VVC) Standard." Electronics 8, no. 9 (September 5, 2019): 993. http://dx.doi.org/10.3390/electronics8090993.

Abstract:
The fundamental motion model of conventional block-based motion compensation in High Efficiency Video Coding (HEVC) is a translational motion model. However, in the real world, the motion of an object is a combination of many kinds of motion. In Versatile Video Coding (VVC), block-based 4-parameter and 6-parameter affine motion compensation (AMC) is applied. In natural videos, in the majority of cases, a rigid object moves without any regularity rather than maintaining its shape or transforming at a constant rate. For this reason, AMC is still limited in representing complex motions, and a more flexible motion model is desired for a new video coding tool. In this paper, we design a perspective affine motion compensation (PAMC) method which can cope with more complex motions such as shear and shape distortion. The proposed PAMC utilizes perspective and affine motion models. The perspective motion model-based method uses four control-point motion vectors (CPMVs) to give degrees of freedom to all four corner vertices. Moreover, the proposed algorithm is integrated into the AMC structure so that the existing affine mode and the proposed perspective mode can be executed adaptively. Because the block under the perspective motion model is a rectangle without specific features, the proposed PAMC shows effective encoding performance, in particular for test sequences containing irregular object distortions or dynamic rapid motions. Our proposed algorithm is implemented on VTM 2.0. The experimental results show that the BD-rate reduction of the proposed technique reaches up to 0.45% and 0.30% on the Y component for the random access (RA) and low delay P (LDP) configurations, respectively.
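For context, the 4-parameter affine mode that PAMC extends derives a per-sample motion vector field from two control-point motion vectors. Here is a small generic sketch of that derivation (following the commonly published VVC-style formulation, with hypothetical inputs), not the proposed perspective mode itself.

```python
import numpy as np

def affine_mv_field(cpmv0, cpmv1, width, height):
    """Per-sample motion vectors of a 4-parameter affine model.

    cpmv0: (v0x, v0y) control-point MV at the block's top-left corner
    cpmv1: (v1x, v1y) control-point MV at the block's top-right corner
    """
    v0x, v0y = cpmv0
    v1x, v1y = cpmv1
    a = (v1x - v0x) / width                  # combined zoom/rotation coefficients
    b = (v1y - v0y) / width
    ys, xs = np.mgrid[0:height, 0:width]
    mvx = a * xs - b * ys + v0x
    mvy = b * xs + a * ys + v0y
    return np.stack([mvx, mvy], axis=-1)     # (height, width, 2) motion field

mv = affine_mv_field((1.0, 0.0), (2.0, 0.5), width=16, height=16)
print(mv[0, 0], mv[0, 15])                   # MVs near the two controlled corners
```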
47

Alvarez, Juan, Diego Álvarez, and Antonio López. "Accelerometry-Based Distance Estimation for Ambulatory Human Motion Analysis." Sensors 18, no. 12 (December 15, 2018): 4441. http://dx.doi.org/10.3390/s18124441.

Abstract:
In human motion science, accelerometers are used as linear distance sensors by attaching them to moving body parts, with their measurement axes aligned in the direction of motion. When double integrating the raw sensor data, multiple error sources are integrated as well, producing inaccuracies in the final position estimation that grow quickly with the integration time. In this paper, we make a systematic and experimental comparison of different methods for position estimation, with different sensors and in different motion conditions. The objective is to correlate practical factors that appear in real applications, such as mean motion velocity, path length, calibration method, or accelerometer noise level, with the quality of the estimation. The results confirm that it is possible to use accelerometers to estimate short linear displacements of the body with a typical error of around 4.5% in the general conditions tested in this study. However, they also show that the kinematic conditions of the motion can be a key factor in the performance of this estimation, as the dynamic response of the accelerometer can affect the final results. The study lays out the basis for a better design of distance estimations, which are useful in a wide range of ambulatory human motion monitoring applications.
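The double-integration step at the heart of such accelerometry-based distance estimation is simple to sketch; the example below uses trapezoidal integration and a crude static-bias removal (both are assumptions for illustration, not the calibration methods compared in the paper).

```python
import numpy as np

def displacement_from_accel(accel, dt, bias_window=10):
    """Double-integrate axis-aligned acceleration (m/s^2) to net displacement (m).

    Sensor bias and noise are integrated too, so the error grows quickly with time;
    the first bias_window samples, taken at rest, give a crude zero-offset estimate.
    """
    accel = np.asarray(accel, dtype=np.float64)
    accel = accel - accel[:bias_window].mean()                          # remove static bias
    vel = np.concatenate([[0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)])
    pos = np.concatenate([[0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)])
    return pos[-1]
```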
48

Tian, Ling Tong, Zhi Pei Huang, Yi Sun, Lian Ying Ji, and Guan Hong Tao. "Orientation Estimation for Motion Capture Unit with Significant Motion Interference." Applied Mechanics and Materials 530-531 (February 2014): 155–59. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.155.

Abstract:
Orientation estimation is a critical technique in inertial-sensor-based motion capture systems. One challenge of orientation estimation is that it suffers from acceleration interference due to body segment motion, especially when the acceleration interference is significant. In this paper, we propose a quaternion-based orientation estimation algorithm using an unscented Kalman filter. In the algorithm, the acceleration interference is taken as an element of the state vector and estimated together with the orientation quaternion, exploiting the fact that the acceleration interference can be predicted based on the rotational angular velocity. Experiments were conducted both in computer simulation and in real-world motion scenarios. Both sets of experimental results have shown the effectiveness of the proposed orientation estimation algorithm.
49

KRAMER, KATHLEEN A., and STEPHEN C. STUBBERUD. "ANALYSIS AND IMPLEMENTATION OF A NEURAL EXTENDED KALMAN FILTER FOR TARGET TRACKING." International Journal of Neural Systems 16, no. 01 (February 2006): 1–13. http://dx.doi.org/10.1142/s0129065706000457.

Abstract:
Having a better motion model in the state estimator is one way to improve target tracking performance. Since the motion model of the target is not known a priori, either robust modeling techniques or adaptive modeling techniques are required. The neural extended Kalman filter is a technique that learns unmodeled dynamics while performing state estimation in the feedback loop of a control system. This coupled system performs the standard estimation of the states of the plant while estimating a function to approximate the difference between the given state-coupling function model and the behavior of the true plant dynamics. At each sample step, this new model is added to the existing model to improve the state estimate. The neural extended Kalman filter has also been investigated as a target tracking estimation routine. Implementation issues for this adaptive modeling technique, including neural network training parameters, were investigated and an analysis was made of the quality of performance that the technique can have for tracking maneuvering targets.
50

Chen, Shengyang. "Motion estimation algorithm and architecture survey." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 505–13. http://dx.doi.org/10.54254/2755-2721/6/20230847.

Abstract:
Motion estimation is a key part of the analysis of video temporal characteristics, including video filtering and compression. Increasing requirements for high-quality video image processing have confirmed the necessity of researching motion estimation. Current motion estimation methods, including highly efficient fast motion estimation algorithms and hardware architecture designs, achieve different performance from different perspectives. In this paper, we focus on analyzing typical fast algorithms and hardware architecture designs for motion estimation, comparing the methods used, and pointing out the basic ideas of motion estimation research schemes. In terms of theoretical implications, the research contributes a more comprehensive reference for fast motion estimation algorithms and their hardware architecture designs.
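Among the fast algorithms such surveys cover, the classic three-step search is a representative example: it probes a shrinking 3x3 pattern of candidate motion vectors instead of the full search window. Below is a minimal illustrative sketch of one common greedy formulation, assuming grayscale NumPy frames; it is not code from the surveyed works.

```python
import numpy as np

def sad(a, b):
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def three_step_search(cur_block, ref_frame, top, left, start_step=4):
    """Fast block matching: refine the best motion vector over halving step sizes."""
    bh, bw = cur_block.shape
    h, w = ref_frame.shape
    best_dy = best_dx = 0
    best_cost = sad(cur_block, ref_frame[top:top + bh, left:left + bw])
    step = start_step
    while step >= 1:
        center_dy, center_dx = best_dy, best_dx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + center_dy + dy, left + center_dx + dx
                if y < 0 or x < 0 or y + bh > h or x + bw > w:
                    continue                      # skip candidates outside the frame
                cost = sad(cur_block, ref_frame[y:y + bh, x:x + bw])
                if cost < best_cost:
                    best_cost = cost
                    best_dy, best_dx = center_dy + dy, center_dx + dx
        step //= 2                                # shrink the search pattern
    return (best_dy, best_dx), best_cost
```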