Academic literature on the topic 'Motion Estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Motion Estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Motion Estimation"

1

Qin, Yongming, Makoto Kumon, and Tomonari Furukawa. "Estimation of a Human-Maneuvered Target Incorporating Human Intention." Sensors 21, no. 16 (August 6, 2021): 5316. http://dx.doi.org/10.3390/s21165316.

Full text
Abstract:
This paper presents a new approach for estimating, from observations, the motion state of a target that is maneuvered by an unknown human. To improve the estimation accuracy, the proposed approach associates the recurring motion behaviors with human intentions and models the association as an intention-pattern model. The human intentions relate to labels of continuous states; the motion patterns characterize the change of continuous states. In the preprocessing, an Interacting Multiple Model (IMM) estimation technique is used to infer the intentions and extract motions, which eventually construct the intention-pattern model. Once the intention-pattern model has been constructed, the proposed approach incorporates it into estimation with any state estimator, including the Kalman filter. The proposed approach not only estimates the mean using the human intention more accurately but also updates the covariance using the human intention more precisely. The performance of the proposed approach was investigated through the estimation of a human-maneuvered multirotor. The results first indicate the effectiveness of the proposed approach for constructing the intention-pattern model, and then demonstrate its advantage in state estimation over the conventional technique without intention incorporation.
APA, Harvard, Vancouver, ISO, and other styles
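
The entry above describes conditioning a state estimator on an inferred human intention. As a rough, hypothetical illustration (not code from the cited paper), the Python sketch below runs a constant-velocity Kalman filter whose process-noise covariance is switched by an intention label; the labels, models, and numbers are all invented.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One Kalman filter predict/update cycle."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model in 1D: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                   # only position is observed
R = np.array([[0.05]])

# Hypothetical intention-dependent process noise: an "aggressive" intention is
# modelled with a much larger acceleration uncertainty than a "hover" intention.
Q_cv = np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
Q_by_intention = {"hover": 0.01 * Q_cv, "aggressive": 1.0 * Q_cv}

x, P = np.array([0.0, 0.0]), np.eye(2)
for z, intention in [(0.02, "hover"), (0.15, "aggressive"), (0.35, "aggressive")]:
    x, P = kalman_step(x, P, np.array([z]), F, Q_by_intention[intention], H, R)
print(x)  # filtered position and velocity
```
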
2

Choi, Hee-Eun, and Jung-Il Jun. "Development of an Estimation Formula to Evaluate Mission Motion Suitability of Military Jackets." Applied Sciences 11, no. 19 (September 30, 2021): 9129. http://dx.doi.org/10.3390/app11199129.

Full text
Abstract:
We developed an estimation formula for mission motion suitability evaluation based on the general motion protocol to evaluate the motion suitability of a tracked vehicle crew jacket. Motion suitability evaluation was conducted for the 9 general motions and 12 mission motions among 27 tracked vehicle crew members who wore a tracked vehicle crew jacket. We conducted correlation and factor analyses on motions to extract the main mission motions, and a multiple regression analysis was performed on major mission motions using general motions as independent variables. As a result, two mission behavior factors related to ammunition stowing and boarding/entry were extracted. We selected ammunition stowing I and the boarding motion, which have the highest factor loading in each factor and the highest explanatory power (R²) of the estimation formula. Regression equations were obtained for ammunition stowing, consisting of five general motions (p < 0.001), and for the boarding motion, consisting of one general motion (p < 0.01). In conclusion, the estimation formula for mission motion suitability using general motions is beneficial for enhancing the effectiveness of the evaluation of military jackets for tracked vehicle crews.
APA, Harvard, Vancouver, ISO, and other styles
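
The estimation formula above is built by multiple regression of mission-motion scores on general-motion scores. The snippet below is only a generic ordinary-least-squares sketch on invented synthetic data, showing the shape of such a formula and its R²; it uses none of the paper's motions or measurements.

```python
import numpy as np

# Synthetic illustration only: ratings of five hypothetical "general motions"
# used as predictors of a "mission motion" suitability score.
rng = np.random.default_rng(0)
n_subjects = 27
general = rng.uniform(1, 5, size=(n_subjects, 5))       # five general-motion scores
true_coef = np.array([0.4, 0.2, 0.1, 0.15, 0.1])         # invented ground truth
mission = general @ true_coef + 0.3 + rng.normal(0, 0.2, n_subjects)

# Fit mission = b0 + b1*g1 + ... + b5*g5 by ordinary least squares.
X = np.column_stack([np.ones(n_subjects), general])
coef, *_ = np.linalg.lstsq(X, mission, rcond=None)

pred = X @ coef
ss_res = np.sum((mission - pred) ** 2)
ss_tot = np.sum((mission - mission.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("intercept and coefficients:", np.round(coef, 3))
print("R^2 of the estimation formula:", round(r2, 3))
```
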
3

Li, Jiaman, C. Karen Liu, and Jiajun Wu. "Ego-Body Pose Estimation via Ego-Head Pose Estimation." AI Matters 9, no. 2 (June 2023): 20–23. http://dx.doi.org/10.1145/3609468.3609473.

Full text
Abstract:
Estimating 3D human motion from an egocentric video, which records the environment viewed from the first-person perspective with a front-facing monocular camera, is critical to applications in VR/AR. However, naively learning a mapping between egocentric videos and full-body human motions is challenging for two reasons. First, modeling this complex relationship is difficult; unlike reconstructing motion from third-person videos, the human body is often out of view of an egocentric video. Second, learning this mapping requires a large-scale, diverse dataset containing paired egocentric videos and the corresponding 3D human poses. Creating such a dataset requires meticulous instrumentation for data acquisition, and unfortunately, such a dataset does not currently exist. As a result, existing works have only worked on small-scale datasets with limited motion and scene diversity (yuan20183d; yuan2019ego; luo2021dynamics).
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Kaihong, Kumar Akash, and Teruhisa Misu. "Learning Temporally and Semantically Consistent Unpaired Video-to-Video Translation through Pseudo-Supervision from Synthetic Optical Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2477–86. http://dx.doi.org/10.1609/aaai.v36i3.20148.

Full text
Abstract:
Unpaired video-to-video translation aims to translate videos between a source and a target domain without the need for paired training data, making it more feasible for real applications. Unfortunately, the translated videos generally suffer from temporal and semantic inconsistency. To address this, many existing works adopt spatiotemporal consistency constraints that incorporate temporal information based on motion estimation. However, inaccuracies in the estimated motion degrade the quality of the guidance towards spatiotemporal consistency, which leads to unstable translation. In this work, we propose a novel paradigm that regularizes the spatiotemporal consistency by synthesizing motions in input videos with the generated optical flow instead of estimating them. Therefore, the synthetic motion can be applied in the regularization paradigm to keep motions consistent across domains without the risk of errors in motion estimation. Thereafter, we utilize our unsupervised recycle loss and unsupervised spatial loss, guided by the pseudo-supervision provided by the synthetic optical flow, to accurately enforce spatiotemporal consistency in both domains. Experiments show that our method is versatile in various scenarios and achieves state-of-the-art performance in generating temporally and semantically consistent videos. Code is available at: https://github.com/wangkaihong/Unsup_Recycle_GAN/.
APA, Harvard, Vancouver, ISO, and other styles
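
The key idea above is to synthesize motion with a known optical flow and use it as pseudo-supervision instead of estimating motion from the video. The sketch below shows, under assumed conventions, how a frame can be warped by a synthetic flow field and how a translator's outputs on the two frames can be compared through the same warp; the `translate` function is a pointwise stand-in, not the paper's generator.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_backward(image, flow):
    """Backward-warp `image` by a dense flow field.
    flow[..., 0] is the horizontal (x) displacement and flow[..., 1] the vertical (y)
    displacement, so warped(y, x) = image(y + flow_y, x + flow_x)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def translate(img):
    # Stand-in for an image-to-image translator (e.g., a GAN generator);
    # a simple pointwise nonlinearity keeps the example self-contained.
    return np.sqrt(img)

h, w = 64, 64
frame_t = np.random.default_rng(0).random((h, w))
flow = np.zeros((h, w, 2))
flow[..., 0] = 2.0                            # constant synthetic horizontal flow

# Synthesize the "next" frame with the known flow, then measure whether
# translating-then-warping agrees with warping-then-translating.
frame_t1 = warp_backward(frame_t, flow)
loss = np.mean(np.abs(warp_backward(translate(frame_t), flow) - translate(frame_t1)))
# Essentially zero here, because the toy translator is pointwise and commutes with the warp.
print("spatiotemporal consistency loss:", round(float(loss), 6))
```
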
5

Schutten, R. J., A. Pelagotti, and G. De Haan. "Layered motion estimation." Philips Journal of Research 51, no. 2 (January 1998): 253–67. http://dx.doi.org/10.1016/s0165-5817(98)00010-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Popescu, Mihaela, Dennis Mronga, Ivan Bergonzani, Shivesh Kumar, and Frank Kirchner. "Experimental Investigations into Using Motion Capture State Feedback for Real-Time Control of a Humanoid Robot." Sensors 22, no. 24 (December 15, 2022): 9853. http://dx.doi.org/10.3390/s22249853.

Full text
Abstract:
Despite recent advances, humanoid robots still face significant difficulties in performing locomotion tasks. Among the key challenges that must be addressed to achieve robust bipedal locomotion are dynamically consistent motion planning, feedback control, and state estimation of such complex systems. In this paper, we investigate the use of an external motion capture system to provide state feedback to an online whole-body controller. We present experimental results with the humanoid robot RH5 performing two different whole-body motions: squatting with both feet in contact with the ground and balancing on one leg. We compare the execution of these motions using state feedback from (i) an external motion tracking system and (ii) an internal state estimator based on an inertial measurement unit (IMU), forward kinematics, and contact sensing. It is shown that state-of-the-art motion capture systems can be successfully used in the high-frequency feedback control loop of humanoid robots, providing an alternative in cases where state estimation is not reliable.
APA, Harvard, Vancouver, ISO, and other styles
7

Phan, Gia-Hoang, Clint Hansen, Paolo Tommasino, Asif Hussain, Domenico Formica, and Domenico Campolo. "A Complementary Filter Design on SE(3) to Identify Micro-Motions during 3D Motion Tracking." Sensors 20, no. 20 (October 16, 2020): 5864. http://dx.doi.org/10.3390/s20205864.

Full text
Abstract:
In 3D motion capture, multiple methods have been developed to optimize the quality of the captured data. While certain technologies, such as inertial measurement units (IMUs), are mostly suitable for 3D orientation estimation at relatively high frequencies, other technologies, such as marker-based motion capture, are more suitable for 3D position estimation over a lower frequency range. In this work, we introduce a complementary filter that combines 3D motion capture data with high-frequency acceleration signals from an IMU. While the local optimization reduces the error of the motion tracking, the additional accelerations can help to detect micro-motions that are useful when dealing with high-frequency human motions or robotic applications. The combination with high-frequency accelerometers improves the accuracy of the data and helps to overcome limitations of motion capture when micro-motions are not traceable with a 3D motion tracking system. In our experimental evaluation, we demonstrate the improvements of the motion capture results during translational, rotational, and combined movements.
APA, Harvard, Vancouver, ISO, and other styles
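
The filter above fuses low-frequency marker-based positions with high-frequency IMU accelerations. Below is a minimal scalar complementary filter along one axis, included only as an illustration of the frequency-splitting idea; it is not the SE(3) formulation of the paper, and the gains and signals are invented.

```python
import numpy as np

def complementary_fuse(mocap_pos, accel, dt, k1=20.0, k2=100.0):
    """Fuse low-frequency position measurements (e.g., marker-based mocap) with
    high-frequency acceleration (IMU) along a single axis.

    The acceleration drives the prediction (trusted at high frequency); the
    position error feeds back with gains k1, k2 so mocap anchors the low frequency."""
    pos, vel = mocap_pos[0], 0.0
    fused = []
    for z, a in zip(mocap_pos, accel):
        err = z - pos                      # innovation from the mocap branch
        vel += (a + k2 * err) * dt         # IMU branch, corrected at low frequency
        pos += (vel + k1 * err) * dt
        fused.append(pos)
    return np.array(fused)

# Toy signal: slow sinusoid plus a small 40 Hz "micro-motion" component.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
true_pos = 0.1 * np.sin(2 * np.pi * 1.0 * t) + 0.002 * np.sin(2 * np.pi * 40.0 * t)
true_acc = np.gradient(np.gradient(true_pos, dt), dt)
mocap = true_pos + np.random.default_rng(1).normal(0.0, 0.0005, t.size)

est = complementary_fuse(mocap, true_acc, dt)
steady = slice(t.size // 2, None)          # ignore the initial transient
print("fused RMS error:", float(np.sqrt(np.mean((est[steady] - true_pos[steady]) ** 2))))
```
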
8

Ozawa, Takehiro, Yusuke Sekikawa, and Hideo Saito. "Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation." Sensors 22, no. 3 (January 20, 2022): 773. http://dx.doi.org/10.3390/s22030773.

Full text
Abstract:
Event cameras are bio-inspired sensors that have a high dynamic range and temporal resolution. This property enables motion estimation from textures with repeating patterns, which is difficult to achieve with RGB cameras. Therefore, event camera motion estimation is expected to be applied to vehicle position estimation. Contrast maximization is an existing method that can be used for event camera motion estimation by capturing road surfaces. However, contrast maximization tends to fall into a local solution when estimating three-dimensional motion, which makes correct estimation difficult. To solve this problem, we propose a method for motion estimation by optimizing contrast in the bird's-eye view space. Instead of performing three-dimensional motion estimation, we reduce the dimensionality to two-dimensional motion estimation by transforming the event data to a bird's-eye view using a homography calculated from the event camera position. This transformation mitigates the problem of the loss function becoming non-convex, which occurs in conventional methods. As a quantitative experiment, we created event data using a car simulator and evaluated our motion estimation method, showing an improvement in accuracy and speed. In addition, we conducted estimation from real event data and evaluated the results qualitatively, again showing an improvement in accuracy.
APA, Harvard, Vancouver, ISO, and other styles
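
Contrast maximization, as used above, warps events along a candidate motion and scores the sharpness (variance) of the resulting event image. The toy sketch below does this for a purely two-dimensional translation on synthetic events, standing in for the bird's-eye-view setting after the homography; the grid search, event data, and parameters are invented.

```python
import numpy as np

def contrast(events_xy, events_t, velocity, img_size=64):
    """Variance ("contrast") of the image of events warped back to t = 0
    along a candidate 2D velocity (vx, vy) in pixels per second."""
    vx, vy = velocity
    x = events_xy[:, 0] - vx * events_t
    y = events_xy[:, 1] - vy * events_t
    img, _, _ = np.histogram2d(y, x, bins=img_size, range=[[0, img_size], [0, img_size]])
    return img.var()

# Synthetic events: 30 scene points, each firing 50 events while translating
# with a true velocity of (8, -3) pixels per second.
rng = np.random.default_rng(0)
true_v = np.array([8.0, -3.0])
base = rng.uniform(10.0, 40.0, size=(30, 2))
ts = np.tile(np.linspace(0.0, 1.0, 50), 30)
events_xy = np.repeat(base, 50, axis=0) + ts[:, None] * true_v
events_t = ts

# Brute-force contrast maximization over a small grid of candidate velocities.
candidates = [(vx, vy) for vx in range(0, 16) for vy in range(-8, 8)]
best = max(candidates, key=lambda v: contrast(events_xy, events_t, v))
print("estimated velocity:", best)   # should recover (8, -3)
```
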
9

Klomp, Sven, Marco Munderloh, and Jörn Ostermann. "Decoder-Side Motion Estimation Assuming Temporally or Spatially Constant Motion." ISRN Signal Processing 2011 (June 20, 2011): 1–10. http://dx.doi.org/10.5402/2011/956372.

Full text
Abstract:
In current video coding standards, the encoder exploits temporal redundancies within the video sequence by performing block-based motion compensated prediction. However, the motion estimation is performed only at the encoder, and the motion vectors have to be coded explicitly into the bit stream. Recent research has shown that the compression efficiency can be improved by also estimating the motion at the decoder. This paper gives a detailed description of a decoder-side motion estimation architecture which assumes temporally constant motion and compares the proposed motion compensation algorithm with an alternative interpolation method. The overall rate reduction for this approach is almost 8% compared to H.264/MPEG-4 Part 10 (AVC). Furthermore, an extensive comparison with the assumption of spatially constant motion, as used in decoder-side motion vector derivation, is given. A new approach combining both algorithms is proposed that leads to a 13% bit rate reduction on average.
APA, Harvard, Vancouver, ISO, and other styles
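
The work above builds on block-based motion compensated prediction, where motion vectors are found by searching a reference frame; in decoder-side motion estimation the same search is run on already-decoded frames. A minimal full-search block-matching sketch using the sum of absolute differences (SAD) is given below; block size, search range, and data are illustrative only, not tied to any coding standard.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block matching: for each block in `cur`, find the motion
    vector into `ref` that minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

# Toy test: the current frame is the reference with content moved by (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
# Each interior block's vector points back to where its content sits in the reference,
# so the expected output is (-2, -3).
print(block_match(ref, cur)[1, 1])
```
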
10

Chen, Yi Wei, and Yung Lung Lee. "An Improved IMM Estimator Combined with Intelligent Input Estimation Technique for Tracking Maneuvering Target." Applied Mechanics and Materials 764-765 (May 2015): 664–70. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.664.

Full text
Abstract:
The estimation performance of an interacting multiple model (IMM) estimator for tracking a maneuvering target is influenced by the target motion models and the filters applied. An improved IMM estimation algorithm combined with an intelligent input estimation technique is proposed in this study. The target motion models include the constant velocity (CV) model and the modified Singer acceleration model. Intelligent fuzzy weighted input estimation (IFWIE) is used to compute the acceleration input for the modified Singer acceleration model, in addition to the standard Kalman filter (KF). The combination of KF and IFWIE can estimate the target motion state precisely, and the proposed method is compared with the common IMM estimator. The simulation results show that the improved IMM estimator has superior estimation performance compared with the common IMM estimator, especially when the target changes its acceleration violently. The use of IFWIE in the improved IMM estimator enables the acceleration input to be estimated effectively.
APA, Harvard, Vancouver, ISO, and other styles
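
The estimator above is an interacting multiple model (IMM) filter that mixes several motion models. The sketch below is a generic textbook two-model IMM with constant-velocity models of differing process noise, not the paper's modified Singer model or fuzzy input estimator; the transition probabilities and measurements are invented.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """Kalman predict/update; returns new state, covariance and measurement likelihood."""
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    lik = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(np.linalg.det(2 * np.pi * S))
    return x_new, P_new, lik

def imm_step(xs, Ps, mu, z, models, trans):
    """One cycle of an IMM estimator: mixing, model-matched filtering, probability update."""
    n = len(models)
    c = trans.T @ mu                                   # predicted model probabilities
    x_mix, P_mix = [], []
    for j in range(n):                                 # 1. mix the model-conditioned estimates
        w = trans[:, j] * mu / c[j]
        xm = sum(w[i] * xs[i] for i in range(n))
        Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm)) for i in range(n))
        x_mix.append(xm); P_mix.append(Pm)
    liks = np.zeros(n)
    for j, (F, Q, H, R) in enumerate(models):          # 2. model-matched Kalman filtering
        xs[j], Ps[j], liks[j] = kf_step(x_mix[j], P_mix[j], z, F, Q, H, R)
    mu = c * liks                                      # 3. update model probabilities
    mu /= mu.sum()
    x = sum(mu[j] * xs[j] for j in range(n))           # combined output estimate
    return xs, Ps, mu, x

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Qs = [q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]]) for q in (0.01, 1.0)]
models = [(F, Qs[0], H, R), (F, Qs[1], H, R)]          # "quiet" vs "maneuvering" CV model
trans = np.array([[0.95, 0.05], [0.05, 0.95]])         # model transition probabilities

xs, Ps, mu = [np.zeros(2), np.zeros(2)], [np.eye(2), np.eye(2)], np.array([0.5, 0.5])
for z in [0.0, 0.1, 0.3, 1.0, 2.2, 3.9]:               # the target begins to maneuver
    xs, Ps, mu, x = imm_step(xs, Ps, mu, np.array([z]), models, trans)
print("model probabilities:", np.round(mu, 3), "state:", np.round(x, 3))
```
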

Dissertations / Theses on the topic "Motion Estimation"

1

Cheng, Xin. "Feature-based motion estimation and motion segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0016/MQ55493.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jaganathan, Venkata Krishnan. "Robust motion estimation techniques." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/6032.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 15, 2008). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
3

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ44103.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20154.

Full text
Abstract:
When the relative velocity between the different objects in a scene and the camera is relatively large compared with the camera's exposure time, the resulting image exhibits a distortion called motion blur. In the past, many algorithms have been proposed for estimating the relative velocity from one or, more often, several images. The motion blur is generally considered an extra source of noise and is eliminated, or assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the optical flow map using only the information encoded in the motion blur. This thesis presents an algorithm that estimates the velocity vector of an image patch using the motion blur only, in two steps. The information used for the estimation of the velocity vectors is extracted from the frequency domain, and the most computationally expensive operation is the Fast Fourier Transform that takes the image from the spatial to the frequency domain. Consequently, the complexity of the algorithm is bounded by this operation at O(n log(n)). The first step uses the response of a family of steerable filters applied to the log of the power spectrum in order to calculate the orientation of the velocity vector. The second step uses a technique called cepstral analysis: the log power spectrum is treated as another signal, and its inverse Fourier transform is examined in order to estimate the magnitude of the velocity vector. Experiments have been conducted on artificially blurred images and on real-world data, and an error analysis of these results is also presented.
APA, Harvard, Vancouver, ISO, and other styles
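
The thesis above estimates blur, and hence velocity, in the frequency domain using cepstral analysis. The one-dimensional toy below shows the core signal-processing step under simplifying assumptions: a box blur of length L leaves periodic dips in the log power spectrum, so the cepstrum (inverse FFT of the log power spectrum) exhibits a negative peak near lag L. It is not the thesis's 2-D steerable-filter pipeline, and the signal is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 256, 8
signal = rng.random(n)
kernel = np.zeros(n)
kernel[:L] = 1.0 / L
# Circular box blur of length L via multiplication in the frequency domain.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

power = np.abs(np.fft.fft(blurred)) ** 2
cepstrum = np.real(np.fft.ifft(np.log(power + 1e-8)))   # small offset avoids log(0)

lags = np.arange(4, n // 4)                              # skip the low-quefrency region
estimated_length = lags[np.argmin(cepstrum[lags])]       # most negative cepstral peak
print("true blur length:", L, "estimated:", int(estimated_length))
```
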
5

Ay, Emre. "Ego-Motion Estimation of Drones." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.

Full text
Abstract:
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on board. Visual positioning systems have been studied for quite some time, and the literature in the area is extensive. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones that is capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated and a visual odometry system is implemented based on the resulting design decisions. The resulting system is shown to give acceptable estimates.
To remove the need for external infrastructure such as GPS, which moreover is not available in many environments, it is desirable to estimate a drone's motion with on-board sensors. Visual positioning systems have been studied for a long time, and the literature in the area is abundant. The aim of this project is to investigate the currently available methods and design a visually based positioning system for drones. The resulting system is evaluated and shown to give acceptable position estimates.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Siu Fan. "General motion estimation and segmentation." Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/843155/.

Full text
Abstract:
In this thesis, estimation of motion from an image sequence is investigated. The emphasis is on the novel use of a motion model for describing two-dimensional motion. Special attention is directed towards general motion models which are not restricted to translational motion. In contrast to translational motion, the 2-D motion is described by the model using motion parameters. There are two major areas which can benefit from the study of general motion models. The first is image sequence processing and compression. In this context, the use of a motion model provides a more compact description of the motion information because the model can be applied to a larger area. The second area is computer vision. The general motion parameters provide clues to the understanding of the environment. This offers a simpler alternative to techniques such as optical flow analysis. A direct approach is adopted here to estimate the motion parameters directly from an image sequence. This has the advantage of avoiding the error caused by the estimation of optical flow. A differential method has been developed for the purpose. This is applied in conjunction with a multi-resolution scheme. An initial estimate is obtained by applying the algorithm to a low-resolution image. The initial estimate is then refined by applying the algorithm to images of higher resolutions. In this way, even severe motion can be estimated with high resolution. However, the algorithm is unable to cope with the situation of multiple moving objects, mainly because of the least-squares estimator used. A second algorithm, inspired by the Hough transform, is therefore developed to estimate the motion parameters of multiple objects. By formulating the problem as an optimization problem, the Hough transform is computed only implicitly. This drastically reduces the computational requirement as compared with the Hough transform. The criterion used in optimization is a measure of the degree of match between two images. It has been shown that the measure is a well-behaved function in the vicinity of the motion parameter vectors describing the motion of the objects, depending on the smoothness of the images. Therefore, smoothing an image has the effect of allowing longer-range motion to be estimated. Segmentation of the image according to motion is achieved at the same time. The ability to estimate general motion in the situation of multiple moving objects represents a major step forward in 2-D motion estimation. Finally, the application of motion compensation to the problem of frame rate conversion is considered. The handling of the covered and uncovered background has been investigated. A new algorithm to obtain a pixel value for the pixels in those areas is introduced. Unlike published algorithms, the background is not assumed stationary. This presents a major obstacle which requires the study of occlusion in the image. During the research, the art of motion estimation has been advanced from simple motion vector estimation to a more descriptive level: the ability to point out that a certain area in an image is undergoing a zooming operation is one example. Only low-level information such as image gradient and intensity is used. In many different situations, problems are caused by the lack of higher-level information. This seems to suggest that general motion estimation is much more than using a general motion model and developing an algorithm to estimate the parameters.
To further advance the state of the art of general motion estimation, it is believed that future research effort should focus on higher-level aspects of motion understanding.
APA, Harvard, Vancouver, ISO, and other styles
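
The thesis above estimates the parameters of a general (non-translational) motion model directly from image intensities and gradients. As a minimal, generic illustration of such a direct method, the sketch below solves one linearized least-squares step for a six-parameter affine motion model from the brightness-constancy constraint. It assumes small motion and is not the thesis's multi-resolution or Hough-based algorithm; the test images are synthetic.

```python
import numpy as np

def estimate_affine_motion(I0, I1):
    """One linearized least-squares step for a 6-parameter affine motion model:
    u(x, y) = a1 + a2*x + a3*y,  v(x, y) = a4 + a5*x + a6*y,
    using the brightness-constancy constraint Ix*u + Iy*v + It = 0.
    Valid only for small displacements; in practice applied coarse-to-fine."""
    Iy, Ix = np.gradient(I1.astype(float))
    It = I1.astype(float) - I0.astype(float)
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones((h, w))
    # Each pixel contributes one equation in the six affine parameters.
    A = np.column_stack([
        (Ix * ones).ravel(), (Ix * xs).ravel(), (Ix * ys).ravel(),
        (Iy * ones).ravel(), (Iy * xs).ravel(), (Iy * ys).ravel(),
    ])
    b = -It.ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params   # [a1, a2, a3, a4, a5, a6]

# Toy check with a pure translation of (0.5, 0.3) pixels on a smooth image.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
I0 = np.sin(xs / 6.0) + np.cos(ys / 7.0)
I1 = np.sin((xs - 0.5) / 6.0) + np.cos((ys - 0.3) / 7.0)   # content shifted by (0.5, 0.3)
print(np.round(estimate_affine_motion(I0, I1), 3))          # a1 ~ 0.5, a4 ~ 0.3, others ~ 0
```
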
7

Fakhouri, Elie Michel. "Variable block-size motion estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37260.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mok, Wai-hung Toby, and 莫偉雄. "Motion estimation in feature domain." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zayas-Cedeño, Gricelis 1974. "Motion estimation of cloud images." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50035.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (leaves 63-64).
by Gricelis Zayas-Cedeño.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
10

Weiss, Yair. "Bayesian motion estimation and segmentation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9354.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1998.
Includes bibliographical references (leaves 195-204).
Estimating motion in scenes containing multiple moving objects remains a difficult problem in computer vision, yet it is solved effortlessly by humans. In this thesis we present a computational investigation of this astonishing performance in human vision. The method we use throughout is to formulate a small number of assumptions and see the extent to which the optimal interpretation given these assumptions corresponds to the human percept. For scenes containing a single motion we show that a wide range of previously published results are predicted by a Bayesian model that finds the most probable velocity field assuming that (1) images may be noisy and (2) velocity fields are likely to be slow and smooth. The predictions agree qualitatively, and are often in remarkable agreement quantitatively. For scenes containing multiple motions we introduce the notion of "smoothness in layers". The scene is assumed to be composed of a small number of surfaces or layers, and the motion of each layer is assumed to be slow and smooth. We again formalize these assumptions in a Bayesian framework and use the statistical technique of mixture estimation to find the predicted interpretation; a surprisingly wide range of previously published results are predicted with these simple assumptions. We discuss the shortcomings of these assumptions and show how additional assumptions can be incorporated into the same framework. Taken together, the first two parts of the thesis suggest that a seemingly complex set of illusions in human motion perception may arise from a single computational strategy that is optimal under reasonable assumptions.
The third part of the thesis presents a computer vision algorithm that is based on the same assumptions. We compare the approach to recent developments in motion segmentation and illustrate its performance on real and synthetic image sequences.
by Yair Weiss.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
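
The thesis above formalizes a "slow and smooth" prior on velocity fields in a Bayesian framework. The sketch below is only a one-dimensional caricature of such a prior: noisy velocity observations are combined with penalties on velocity magnitude and on velocity differences by solving a small regularized least-squares problem. The weights and data are invented and this is not the thesis's mixture-estimation model.

```python
import numpy as np

# Minimize  sum_i (v_i - v_obs_i)^2 + lam_slow * sum_i v_i^2
#         + lam_smooth * sum_i (v_{i+1} - v_i)^2,
# which reduces to a linear system (MAP estimate under Gaussian assumptions).
rng = np.random.default_rng(0)
n = 50
true_v = np.concatenate([np.full(25, 1.0), np.full(25, 1.5)])   # piecewise-constant velocity
v_obs = true_v + rng.normal(0.0, 0.4, n)                        # noisy local measurements

lam_slow, lam_smooth = 0.01, 5.0
D = np.diff(np.eye(n), axis=0)                                   # finite-difference operator
A = (1.0 + lam_slow) * np.eye(n) + lam_smooth * (D.T @ D)
v_map = np.linalg.solve(A, v_obs)

print("raw observation RMS error:", round(float(np.std(v_obs - true_v)), 3))
print("MAP estimate RMS error:   ", round(float(np.std(v_map - true_v)), 3))
```
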

Books on the topic "Motion Estimation"

1

Goh, Wooi Boon. Image motion estimation. [s.l.]: typescript, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chakrabarti, Indrajit, Kota Naga Srinivasarao Batta, and Sumit Kumar Chatterjee. Motion Estimation for Video Coding. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14376-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ong, Ee Ping. Robust motion estimation and segmentation. Birmingham: University of Birmingham, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Furht, Borko, Joshua Greenberg, and Raymond Westwater. Motion Estimation Algorithms for Video Compression. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6241-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Furht, Borivoje. Motion estimation algorithms for video compression. Boston: Kluwer Academic Publishers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Furht, Borko. Motion estimation algorithms for video compression. Boston, Mass: Kluwer, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Furht, Borivoje. Motion Estimation Algorithms for Video Compression. Boston, MA: Springer US, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Metkar, Shilpa, and Sanjay Talbar. Motion Estimation Techniques for Digital Video Coding. India: Springer India, 2013. http://dx.doi.org/10.1007/978-81-322-1097-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ildiz, Faith. Estimation of motion parameters from image sequences. Monterey, Calif: Naval Postgraduate School, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Metkar, Shilpa. Motion Estimation Techniques for Digital Video Coding. India: Springer India, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Motion Estimation"

1

Gigengack, Fabian, Xiaoyi Jiang, Mohammad Dawood, and Klaus P. Schäfers. "Motion Estimation." In SpringerBriefs in Electrical and Computer Engineering, 21–63. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08392-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cuevas, Erik, Valentín Osuna, and Diego Oliva. "Motion Estimation." In Evolutionary Computation Techniques: A Comparative Perspective, 95–116. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-51109-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mitchell, Joan L., William B. Pennebaker, Chad E. Fogg, and Didier J. LeGall. "Motion estimation." In MPEG Video Compression Standard, 283–312. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4899-4587-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

De Oliveira, Jauvane C. "Motion Estimation." In Encyclopedia of Multimedia, 435–40. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78414-4_115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mitchell, Joan L., William B. Pennebaker, Chad E. Fogg, and Didier J. LeGall. "Motion estimation." In MPEG Video Compression Standard, 283–312. New York, NY: Springer US, 1996. http://dx.doi.org/10.1007/0-306-46983-9_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Szeliski, Richard. "Motion Estimation." In Texts in Computer Science, 443–82. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-34372-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tschumperlé, David, Christophe Tilmant, and Vincent Barra. "Motion Estimation." In Digital Image Processing with C++, 183–208. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003323693-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lin, Youn-Long Steve, Chao-Yang Kao, Huang-Chih Kuo, and Jian-Wen Chen. "Integer Motion Estimation." In VLSI Design for Video Coding, 31–55. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-1-4419-0959-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Youn-Long Steve, Chao-Yang Kao, Huang-Chih Kuo, and Jian-Wen Chen. "Fractional Motion Estimation." In VLSI Design for Video Coding, 57–72. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-1-4419-0959-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Szeliski, Richard. "Dense motion estimation." In Texts in Computer Science, 335–74. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84882-935-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Motion Estimation"

1

Rajala, S. A., I. M. Abdelqadar, G. L. Bilbro, and W. E. Snyder. "Motion estimation optimization." In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Boltz, Sylvain, and Frank Nielsen. "Randomized motion estimation." In 2010 17th IEEE International Conference on Image Processing (ICIP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icip.2010.5652514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Illgner, Klaus, Werner Praefcke, and Frank Mueller. "Multiresolution motion estimation." In Electronic Imaging: Science & Technology, edited by Robert L. Stevenson and M. Ibrahim Sezan. SPIE, 1996. http://dx.doi.org/10.1117/12.234738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nalasani, Mahesh, W. David Pan, and Seong-Moo Yoo. "Motion estimation with integrated motion models." In the 42nd annual Southeast regional conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/986537.986653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Jin, Xiaofeng Ai, Feng Zhao, and Qihua Wu. "Motion estimation of micro-motion targets with translational motion." In IET International Radar Conference 2015. Institution of Engineering and Technology, 2015. http://dx.doi.org/10.1049/cp.2015.1435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lei, Meng, and Li Hang. "Motion Estimation Algorithm Based on Motion Characteristics." In 2009 WASE International Conference on Information Engineering (ICIE). IEEE, 2009. http://dx.doi.org/10.1109/icie.2009.40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Baytas, Inci M., and Bilge Gunsel. "Head motion classification with 2D motion estimation." In 2014 22nd Signal Processing and Communications Applications Conference (SIU). IEEE, 2014. http://dx.doi.org/10.1109/siu.2014.6830231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Shijun, and Shawmin Lei. "Predictive motion estimation with global motion predictor." In Electronic Imaging 2004, edited by Sethuraman Panchanathan and Bhaskaran Vasudev. SPIE, 2004. http://dx.doi.org/10.1117/12.525114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Yaxiang, Xiaoping Fan, and Shaoqiang Liu. "Global motion estimation combining with motion segmentation." In Fifth International Conference on Digital Image Processing, edited by Yulin Wang and Xie Yi. SPIE, 2013. http://dx.doi.org/10.1117/12.2030893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lei, Liu, Wang Zhiliang, Liu Jiwei, and Cui Zhaohui. "Fast Global Motion Estimation." In Multimedia Technology (IC-BNMT). IEEE, 2009. http://dx.doi.org/10.1109/icbnmt.2009.5348470.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Motion Estimation"

1

Koc, Ut-Va, and K. J. Liu. DCT-Based Motion Estimation. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada452980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gharavi, H., and H. Reza-Alikhani. Pel-recursive motion estimation algorithm. Gaithersburg, MD: National Institute of Standards and Technology, 2007. http://dx.doi.org/10.6028/nist.ir.6822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alon, Jonathan, and Stan Sclaroff. Recursive Estimation of Motion and Planer Structure. Fort Belvoir, VA: Defense Technical Information Center, March 2000. http://dx.doi.org/10.21236/ada451473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Koc, Ut-Va, and K. J. Liu. Exact Subpixel Motion Estimation in DCT Domain. Fort Belvoir, VA: Defense Technical Information Center, January 1996. http://dx.doi.org/10.21236/ada455590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

McCallen, D., and S. Larsen. Nevada - A Simulation Environment for Regional Estimation of Ground Motion and Structural Response. Office of Scientific and Technical Information (OSTI), March 2003. http://dx.doi.org/10.2172/15004876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Turner, Daniel Z. An overview of the gradient-based local DIC formulation for motion estimation in DICe. Office of Scientific and Technical Information (OSTI), July 2016. http://dx.doi.org/10.2172/1561808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mojidra, Rushil, and Keri Ryan. Influence of Vertical Ground Motion on Bridges Isolated with Spherical Sliding Bearings. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, December 2019. http://dx.doi.org/10.55461/rynq3624.

Full text
Abstract:
The motivation for this project developed from testing of a full-scale building isolated with triple friction pendulum bearings on the E-defense shake table in Japan. The test demonstrated experimentally that the vertical component of ground motion can amplify both the base shear and the story acceleration in the isolated building. Vertical shaking introduced high-frequency variation in the axial force of the bearings, and, consequently, a high-frequency component in the bearing lateral force, which excited higher structural modes in the building. Since bridges are flexible in the vertical direction because of long spans, similar effects may be observed in bridges. The objectives of this study are to develop a physical understanding of the amplification of responses and develop a simplified method to predict amplification of base shear in three-dimensional (3D) shaking relative to two-dimensional (2D) shaking, for bridges isolated with spherical sliding bearings. A series of ground motions with a wide range of vertical shaking intensity were applied to 3D models of bridges isolated with triple pendulum bearings (TPBs), both excluding the vertical component (2D motion) and including the vertical component (3D motion). This enabled the comparison of the bridge response under 2D and 3D shaking such that the direct effect of vertical shaking could be investigated. The selected ground motions were fit to target spectra in the horizontal and vertical directions, and divided into three groups based on vertical peak ground acceleration (PGAV). Multi-span concrete box girder bridges were selected for this study, as they are a prominent bridge type in California, and are suitable for seismic isolation. Models were developed for a 3-span, 45-ft wide, multi-column Base Model bridge; various superstructure and isolation-system parameter variations were implemented to evaluate the effect of these variations on the amplification of base shear. Response histories were compared for a representative motion from each ground-motion group under 2D and 3D shaking. Modal and spectral analyses were conducted to understand dynamic properties and behavior of the bridge under vertical motion. Based on simplified theory, a method to estimate the amplification of base shear due to vertical shaking was developed. The accuracy of the simplified method was assessed through a base shear normalized error metric, and different amplification factors were considered. Response history analysis showed significant amplification of base shear under 3D motion, implying that exclusion of the vertical component could lead to underestimation of demand shear forces on bridge piers. Deck acceleration spectral response at different locations revealed that a transverse-vertical modal coupling response was present in the Base Model bridge, which led to amplification of deck accelerations in addition to base shear due to excitation of the superstructure transverse mode. The simplified method predicted that, in addition to the peak vertical ground acceleration, base shear amplification depended on the isolation-system period (radius of curvature) and friction coefficient. The error in the simplified method was approximately constant across the range of isolation-system parameters. Variations in the bridge superstructure or substructure modeling parameters had only a minor effect on the base shear since the deck acts as a single mass sliding on isolators; therefore, the simplified method can be applied to a range of bridge models.
The simplified method includes an amplification factor that indirectly represents the dynamic amplification of vertical acceleration from the ground to the isolation system. An amplification factor of 1.0 was found to be sufficiently conservative to estimate the base shear due to 3D shaking. The lack of apparent dynamic amplification could mean that the peak vertical acceleration is out-of-phase with the base shear. The simplified method is more likely to be unconservative for high-intensity vertical ground motions due to the complexities associated with uplift and pounding. Further investigation is recommended to determine the threshold shaking intensity limit for the simplified method.
APA, Harvard, Vancouver, ISO, and other styles
8

Koschmann, Anthony, and Yi Qian. Latent Estimation of Piracy Quality and its Effect on Revenues and Distribution: The Case of Motion Pictures. Cambridge, MA: National Bureau of Economic Research, August 2020. http://dx.doi.org/10.3386/w27649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gray, A. L., K. E. Mattar, P. W. Vachon, R. Bindschadler, K. C. Jezek, R. Forster, and J P Crawford. InSAR Results from the RADARSAT Antarctic Mapping Mission Data: Estimation of Glacier Motion Using a Simple Registration Procedure. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1998. http://dx.doi.org/10.4095/219342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Soumekh, Mehrdad. Moving Target Detection and Motion Estimation in Foliage Using along Track Monopulse Synthetic Aperture Radar Imaging and Signal Subspace Processing of Uncalibrated MTD-SARs. Fort Belvoir, VA: Defense Technical Information Center, September 1997. http://dx.doi.org/10.21236/ada329234.

Full text
APA, Harvard, Vancouver, ISO, and other styles