Dissertations / Theses on the topic 'Motion Estimation'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Motion Estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cheng, Xin. "Feature-based motion estimation and motion segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0016/MQ55493.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jaganathan, Venkata Krishnan. "Robust motion estimation techniques." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/6032.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 15, 2008). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
3

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ44103.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20154.

Full text
Abstract:
When the relative velocity between the different objects in a scene and the camera is relatively large---compared with the camera's exposure time---the resulting image exhibits a distortion called motion blur. In the past, many algorithms have been proposed for estimating the relative velocity from one or, more often, several images. The motion blur is generally considered an extra source of noise and is eliminated, or assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the optical flow map using only the information encoded in the motion blur. This thesis presents an algorithm that estimates the velocity vector of an image patch using the motion blur only, in two steps. The information used for the estimation of the velocity vectors is extracted from the frequency domain, and the most computationally expensive operation is the Fast Fourier Transform that transforms the image from the spatial to the frequency domain. Consequently, the complexity of the algorithm is bounded by this operation at O(n log(n)). The first step consists of using the response of a family of steerable filters applied on the log of the power spectrum in order to calculate the orientation of the velocity vector. The second step uses a technique called cepstral analysis. More precisely, the log power spectrum is treated as another signal and we examine its inverse Fourier transform in order to estimate the magnitude of the velocity vector. Experiments have been conducted on artificially blurred images and with real-world data, and an error analysis of these results is also presented.
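The cepstral step described in this abstract lends itself to a compact numerical illustration. The sketch below is not from the thesis (which works on 2-D patches after an orientation step, omitted here); the signal length, blur extent and random seed are all illustrative. It recovers the length of a synthetic one-dimensional box blur from the inverse Fourier transform of the log power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D "image row" blurred by horizontal motion over L pixels (box kernel).
N, L = 256, 8
row = rng.standard_normal(N)
kernel = np.zeros(N)
kernel[:L] = 1.0 / L
blurred = np.real(np.fft.ifft(np.fft.fft(row) * np.fft.fft(kernel)))

# Cepstral analysis: treat the log power spectrum as a signal and examine its
# inverse Fourier transform; the blur leaves a strong negative peak at a
# quefrency (lag) equal to the blur length.
log_ps = np.log(np.abs(np.fft.fft(blurred)) ** 2 + 1e-12)
cepstrum = np.real(np.fft.ifft(log_ps))
lag = int(np.argmin(cepstrum[2 : N // 2])) + 2
print(lag)   # the most negative cepstral peak sits at the blur length
```

The box kernel puts periodic nulls in the spectrum; taking the log turns that multiplicative structure into an additive one, which the inverse transform concentrates at the blur length.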
APA, Harvard, Vancouver, ISO, and other styles
5

Ay, Emre. "Ego-Motion Estimation of Drones." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.

Full text
Abstract:
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on-board. Visual positioning systems have been studied for quite some time and the literature in the area is abundant. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones that is capable of giving continuous estimates with a lightweight solution. To that end, state of the art systems are investigated and a visual odometry system is implemented based on the design decisions made. The resulting system is shown to give acceptable estimates.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Siu Fan. "General motion estimation and segmentation." Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/843155/.

Full text
Abstract:
In this thesis, estimation of motion from an image sequence is investigated. The emphasis is on the novel use of motion models for describing two-dimensional motion. Special attention is directed towards general motion models which are not restricted to translational motion. In contrast to translational motion, the 2-D motion is described by the model using motion parameters. There are two major areas which can benefit from the study of general motion models. The first is image sequence processing and compression. In this context, the use of a motion model provides a more compact description of the motion information because the model can be applied to a larger area. The second area is computer vision. The general motion parameters provide clues to the understanding of the environment. This offers a simpler alternative to techniques such as optical flow analysis. A direct approach is adopted here to estimate the motion parameters directly from an image sequence. This has the advantage of avoiding the error caused by the estimation of optical flow. A differential method has been developed for the purpose. This is applied in conjunction with a multi-resolution scheme. An initial estimate is obtained by applying the algorithm to a low resolution image. The initial estimate is then refined by applying the algorithm to images of higher resolution. In this way, even severe motion can be estimated with high resolution. However, the algorithm is unable to cope with the situation of multiple moving objects, mainly because of the least-squares estimator used. A second algorithm, inspired by the Hough transform, is therefore developed to estimate the motion parameters of multiple objects. By formulating the problem as an optimization problem, the Hough transform is computed only implicitly. This drastically reduces the computational requirement as compared with the Hough transform. The criterion used in optimization is a measure of the degree of match between two images.
It has been shown that the measure is a well-behaved function in the vicinity of the motion parameter vectors describing the motion of the objects, depending on the smoothness of the images. Therefore, smoothing an image has the effect of allowing longer-range motion to be estimated. Segmentation of the image according to motion is achieved at the same time. The ability to estimate general motion in the situation of multiple moving objects represents a major step forward in 2-D motion estimation. Finally, the application of motion compensation to the problem of frame rate conversion is considered. The handling of the covered and uncovered background has been investigated. A new algorithm to obtain a pixel value for the pixels in those areas is introduced. Unlike published algorithms, the background is not assumed stationary. This presents a major obstacle which requires the study of occlusion in the image. During the research, the art of motion estimation has been advanced from simple motion vector estimation to a more descriptive level: the ability to point out that a certain area in an image is undergoing a zooming operation is one example. Only low level information such as image gradient and intensity function is used. In many different situations, problems are caused by the lack of higher level information. This seems to suggest that general motion estimation is much more than using a general motion model and developing an algorithm to estimate the parameters. To advance further the state of the art of general motion estimation, it is believed that future research effort should focus on higher level aspects of motion understanding.
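The multi-resolution scheme this abstract describes (a coarse estimate refined at successively higher resolutions) can be sketched in a toy form. The code below is not the thesis algorithm: it estimates only a global translation rather than general motion parameters, with one differential least-squares step per pyramid level and illustrative sizes and shifts:

```python
import numpy as np

def smooth(img, n=2):
    # cheap separable box smoothing (circular boundary)
    for _ in range(n):
        img = (np.roll(img, 1, 0) + img + np.roll(img, -1, 0)) / 3.0
        img = (np.roll(img, 1, 1) + img + np.roll(img, -1, 1)) / 3.0
    return img

def estimate_shift(a, b):
    # one differential least-squares step for b(p) = a(p - d):
    # a - b ~ gx*dx + gy*dy, solved for d = (dx, dy)
    gy, gx = np.gradient(a)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, (a - b).ravel(), rcond=None)
    return d

def coarse_to_fine(a, b, levels=3):
    # initial estimate at low resolution, refined at each higher resolution
    if levels == 0:
        return estimate_shift(a, b)
    d = 2.0 * coarse_to_fine(smooth(a)[::2, ::2], smooth(b)[::2, ::2], levels - 1)
    step = np.round(d).astype(int)
    a_w = np.roll(a, (step[1], step[0]), axis=(0, 1))  # warp by current estimate
    return step + estimate_shift(a_w, b)

rng = np.random.default_rng(1)
a = smooth(rng.standard_normal((128, 128)), n=12)
b = np.roll(a, (5, -3), axis=(0, 1))   # content moves 5 px down, 3 px left
d_est = coarse_to_fine(a, b)
print(d_est)                           # close to (dx, dy) = (-3, 5)
```

The 5-pixel shift is far too large for a single differential step, but at the coarsest pyramid level it is sub-pixel, which is exactly the point of the scheme.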
APA, Harvard, Vancouver, ISO, and other styles
7

Fakhouri, Elie Michel. "Variable block-size motion estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37260.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mok, Wai-hung Toby, and 莫偉雄. "Motion estimation in feature domain." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zayas-Cedeño, Gricelis 1974. "Motion estimation of cloud images." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50035.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (leaves 63-64).
by Gricelis Zayas-Cedeño.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
10

Weiss, Yair. "Bayesian motion estimation and segmentation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9354.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1998.
Includes bibliographical references (leaves 195-204).
Estimating motion in scenes containing multiple moving objects remains a difficult problem in computer vision, yet is solved effortlessly by humans. In this thesis we present a computational investigation of this astonishing performance in human vision. The method we use throughout is to formulate a small number of assumptions and see the extent to which the optimal interpretation given these assumptions corresponds to the human percept. For scenes containing a single motion we show that a wide range of previously published results are predicted by a Bayesian model that finds the most probable velocity field assuming that (1) images may be noisy and (2) velocity fields are likely to be slow and smooth. The predictions agree qualitatively, and are often in remarkable agreement quantitatively. For scenes containing multiple motions we introduce the notion of "smoothness in layers". The scene is assumed to be composed of a small number of surfaces or layers, and the motion of each layer is assumed to be slow and smooth. We again formalize these assumptions in a Bayesian framework and use the statistical technique of mixture estimation to find the most probable interpretation; a surprisingly wide range of previously published results is predicted with these simple assumptions. We discuss the shortcomings of these assumptions and show how additional assumptions can be incorporated into the same framework. Taken together, the first two parts of the thesis suggest that a seemingly complex set of illusions in human motion perception may arise from a single computational strategy that is optimal under reasonable assumptions.
The third part of the thesis presents a computer vision algorithm that is based on the same assumptions. We compare the approach to recent developments in motion segmentation and illustrate its performance on real and synthetic image sequences.
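The "slow and smooth" prior can be made concrete in one dimension: finding the most probable velocity field under a Gaussian noise model with slowness and smoothness penalties reduces to a single linear solve. The sketch below is a simplified stand-in for the thesis's Bayesian formulation, with illustrative noise levels and prior weights:

```python
import numpy as np

n = 50
rng = np.random.default_rng(2)
true_v = np.sin(np.linspace(0.0, np.pi, n))     # smooth, slow velocity field
y = true_v + 0.3 * rng.standard_normal(n)       # noisy local motion measurements

# MAP estimate under "slow and smooth": minimize
#   ||v - y||^2 + lam_smooth * ||D v||^2 + lam_slow * ||v||^2
# which is a symmetric positive-definite linear system in v.
D = np.diff(np.eye(n), axis=0)                  # first-difference operator
lam_smooth, lam_slow = 25.0, 1e-3
A = (1.0 + lam_slow) * np.eye(n) + lam_smooth * D.T @ D
v_map = np.linalg.solve(A, y)

# the regularized field is closer to the truth than the raw measurements
print(np.abs(v_map - true_v).mean(), np.abs(y - true_v).mean())
```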
by Yair Weiss.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Magarey, Julian. "Motion estimation using complex wavelets." Thesis, University of Cambridge, 1997. https://www.repository.cam.ac.uk/handle/1810/273016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Xie, Liguang. "Realistic Motion Estimation Using Accelerometers." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/43368.

Full text
Abstract:
A challenging goal for both the game industry and the computer graphics research community is the generation of 3D virtual avatars that automatically perform realistic human motions at high speed and low monetary cost. So far, full-body motion estimation at human levels of complexity remains an important open problem. We propose a realistic motion estimation framework to control the animation of 3D avatars. Instead of relying on a motion capture device as the control signal, we use low-cost and ubiquitously available 3D accelerometer sensors. The framework is developed in a data-driven fashion and includes two phases: model learning from an existing high quality motion database, and motion synthesis from the control signal. In the model-learning phase, we built a motion model of reduced complexity, learned from a large high-quality motion capture database. Then, taking the 3D accelerometer signal as input, we were able to synthesize high-quality motion from the learned model. In this thesis, we present two different techniques for model learning and motion synthesis, respectively. Linear and nonlinear dimensionality reduction techniques are applied to search for a proper low-dimensional representation of the motion data. Two motion synthesis methods, interpolation and optimization, are compared using 3D acceleration signals with high noise. We evaluate the result visually against real video and quantitatively against the ground truth motion. The system performs well, which makes it suitable for a wide range of interactive applications, such as character control in 3D virtual environments and occupational training.
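The model-learning phase described here rests on dimensionality reduction of the motion database. A minimal sketch of the linear case (PCA via the SVD), with a synthetic stand-in for the motion capture data and an illustrative 95% variance threshold, might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a motion capture database: 500 "poses" of 60 joint angles that
# actually lie near a 5-dimensional subspace, plus measurement noise.
basis = rng.standard_normal((5, 60))
poses = rng.standard_normal((500, 5)) @ basis + 0.05 * rng.standard_normal((500, 60))

# Linear dimensionality reduction: PCA via the SVD of the centred data.
mean = poses.mean(axis=0)
U, S, Vt = np.linalg.svd(poses - mean, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(explained, 0.95)) + 1   # components for 95% variance

# Low-dimensional representation and reconstruction of the poses.
coords = (poses - mean) @ Vt[:k].T
recon = coords @ Vt[:k] + mean
err = np.abs(recon - poses).mean()
print(k, err)
```

A handful of components suffices because the synthetic poses were built from a low-dimensional subspace; real motion capture data behaves similarly, which is what makes the reduced model tractable.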
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
13

Dérian, Pierre. "Wavelets and Fluid Motion Estimation." Rennes 1, 2012. https://ecm.univ-rennes1.fr/nuxeo/site/esupversions/5cdcde8f-715b-4725-a6d4-4eea6474aead.

Full text
Abstract:
This work falls within the general problem of designing measurement tools adapted to the specificities of fluid flows. The development of digital imaging, combined with visualization techniques commonly employed in experimental fluid dynamics, makes it possible to extract the apparent flow motion from image sequences using computer vision methods. The objective is to propose a novel "optical flow" algorithm dedicated to the multiscale motion estimation of fluid flows, using a wavelet representation of the unknown motion field. This wavelet formulation introduces a multiscale framework, conveniently adapted both to optical flow estimation and to the representation of turbulent motion fields. It also enables the design of divergence-free bases, thereby respecting a constraint given by fluid dynamics. Several regularization schemes are proposed; the simplest consists in truncating the basis at fine scales, while the most complex builds high-order schemes from the connection coefficients of the wavelet basis. The proposed methods are evaluated on synthetic images in the first place, then on actual experimental images of characteristic fluid flows. Results are compared to those given by the usual "cross-correlation" method, highlighting the advantages and limits of the wavelet-based estimator.
APA, Harvard, Vancouver, ISO, and other styles
14

Fu, Ming Fai. "Motion estimation and compensation in wavelet domain and fast global motion estimation for video coding /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20FU.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 98-102). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
15

Reza-Alikhani, Hamid-Reza. "Motion compensation for image compression : pel-recursive motion estimation algorithm." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/33721.

Full text
Abstract:
In motion pictures there is a certain amount of redundancy between consecutive frames. These redundancies can be exploited by using interframe prediction techniques. To further enhance the efficiency of interframe prediction, various motion estimation and compensation techniques can be used. There are two distinct classes of motion estimation technique: block matching and pel-recursive. Block matching has been widely used as it produces a better signal-to-noise ratio, or a lower bit rate for transmission, than the pel-recursive method. In this thesis, various pel-recursive motion estimation techniques, such as the steepest-descent gradient algorithm, have been considered and simulated.
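A minimal version of the steepest-descent pel-recursive idea (in the spirit of Netravali and Robbins) can be sketched as follows. This is not the thesis's implementation: it tracks a single global displacement rather than a per-pel field, and the step size, image size and test displacement are illustrative:

```python
import numpy as np

def bilinear(img, y, x):
    # bilinear interpolation of img at the non-integer location (y, x)
    y0, x0 = int(y), int(x)
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1]
            + fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

def pel_recursive(prev, curr, eps=0.05):
    # Steepest-descent pel-recursive update: at each pel p, the displaced
    # frame difference DFD = curr(p) - prev(p - d) nudges the displacement
    # estimate d = (dy, dx) down the gradient of DFD^2.
    gy, gx = np.gradient(prev)
    d = np.zeros(2)
    H, W = curr.shape
    for y in range(4, H - 4):
        for x in range(4, W - 4):
            py, px = y - d[0], x - d[1]
            dfd = curr[y, x] - bilinear(prev, py, px)
            grad = np.array([bilinear(gy, py, px), bilinear(gx, py, px)])
            d -= eps * dfd * grad
    return d

# smooth synthetic frame pair with a known integer displacement
rng = np.random.default_rng(4)
img = rng.standard_normal((96, 96))
for _ in range(8):                     # smooth so the linearization holds over ~2 px
    img = (np.roll(img, 1, 0) + img + np.roll(img, -1, 0)) / 3.0
    img = (np.roll(img, 1, 1) + img + np.roll(img, -1, 1)) / 3.0
prev = img / img.std()
curr = np.roll(prev, (2, -1), axis=(0, 1))   # true displacement (dy, dx) = (2, -1)
d_est = pel_recursive(prev, curr)
print(d_est)                                 # close to (2, -1)
```

The displacement is refined pel by pel in scan order with no matching search, which is what distinguishes the pel-recursive family from block matching.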
APA, Harvard, Vancouver, ISO, and other styles
16

Balfour, Daniel Robert Malcolm. "Reduced-parameter motion estimation in PET using respiratory motion models." Thesis, King's College London (University of London), 2017. https://kclpure.kcl.ac.uk/portal/en/theses/reducedparameter-motion-estimation-in-pet-using-respiratory-motion-models(78899c82-9fb3-4fe7-b319-70c3be304a0e).html.

Full text
Abstract:
Positron emission tomography (PET) is a modality with high temporal resolution but long acquisition times. This can result in blurred images due to subject motion. Respiratory motion in particular is an unavoidable source of degradation, which can cause issues with quantification and clinical interpretation. An important characteristic of respiratory motion is its pseudo-cyclic nature, which has previously been exploited to form mathematical models which describe the motion, driven by a small number of parameters. The aim of this project is to use this form of motion modelling to estimate motion using information acquired from both dynamic magnetic resonance (MR) scans and from the acquired PET data itself to correct for the effects of motion. The use of motion models in this way can overcome the high levels of noise which otherwise characterise the estimation problem. First the feasibility of using motion models is investigated using synthetic data consisting of individual PET gates simulated using real motion information. The PET gates are registered using constraints provided by a motion model derived from MR images. A novelty of this approach is that this is the first time PET data have been used to indirectly drive a parameterised motion model. The next part of the project attempts to formalise the motion estimation process by incorporating the reduced-parameter motion model into the PET image reconstruction. An analytical gradient for a single motion parameter that drives the model is derived from the same objective function used to estimate the image. This results in significant noise averaging, providing robustness to the high level of noise typically found in PET data acquired over short time frames. This is shown to improve robustness to noise well enough that the number of gates can be increased and overall motion correction performance improved. Finally, the formulation is extended further to also model photon attenuation effects. 
This is shown to improve the performance of the algorithm when dealing with synthetic data that includes attenuation.
APA, Harvard, Vancouver, ISO, and other styles
17

Hemmendorff, Magnus. "Single and Multiple Motion Field Estimation." Licentiate thesis, Linköping University, Linköping University, Computer Vision, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54343.

Full text
Abstract:

This thesis presents a framework for estimation of motion fields both for single and multiple layers. All the methods have in common that they generate or use constraints on the local motion. Motion constraints are represented by vectors whose directions describe one component of the local motion and whose magnitudes indicate confidence.

Two novel methods for estimating these motion constraints are presented. Both methods take two images as input and apply orientation sensitive quadrature filters. One method is similar to a gradient method applied on the phase from the complex filter outputs. The other method is based on novel results using canonical correlation presented in this thesis.

Parametric models, e.g. affine or FEM, are used to estimate motion from constraints on local motion. In order to estimate smooth fields for models with many parameters, cost functions on deformations are introduced.

Motions of transparent multiple layers are estimated by implicit or explicit clustering of motion constraints into groups. General issues and difficulties in analysis of multiple motions are described. An extension of the known EM algorithm is presented together with experimental results on multiple transparent layers with affine motions. Good accuracy in estimation allows reconstruction of layers using a backprojection algorithm. As an alternative to the EM algorithm, this thesis also introduces a method based on higher order tensors.

A result with potential applications in a number of different research fields is the extension of canonical correlation to handle complex variables. Correlation is maximized using a novel method that can handle singular covariance matrices.

APA, Harvard, Vancouver, ISO, and other styles
18

Lundgren, Pär. "Using Homographies for Vehicle Motion Estimation." Thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-120000.

Full text
Abstract:
This master’s thesis describes a way to represent vehicles when tracking them through an image sequence. Vehicles are described with a state containing their position, velocity, size, etc. The thesis highlights the properties of homographies due to their suitability for estimating projective transformations. The idea is to approximately represent vehicles with planes based on feature points found on the vehicles. The purpose of this approach is to estimate the displacement of a vehicle by estimating the transformation of these planes. Thus, when a vehicle is observed from behind, one plane approximates features found on the back and one plane approximates features found on the side, if the side of the vehicle is visible. The projective transformations of the planes are obtained by measuring the displacement of feature points. The approach presented in this thesis builds on the prerequisite that a camera placed on a vehicle provides an image of its field of view. It does not cover how to find vehicles in an image, and thus it requires that the patch which contains the vehicle is provided. Even though this thesis covers large parts of image processing functionality, the focus is on how to represent vehicles and how to design an appropriate filter for improving estimates of vehicle displacement. Due to noisy feature points, the approximation of planes, and estimated homographies, the obtained measurements are likely to be noisy. This requires a filter that can handle corrupt measurements while still using those that are not. An unscented Kalman filter (UKF) is utilized in this implementation. The UKF is an approximate solution to nonlinear filtering problems and is here used to update the vehicle's states using measurements obtained from homographies. The unscented Kalman filter was chosen because of its ease of implementation and its potentially good performance.
The result is not a finished implementation for tracking vehicles, but rather a first attempt at this approach. It does not outperform the existing approach, which might be due to one or several factors such as poorly estimated homographies, unreliable feature points, or poor performance of the UKF.
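The plane-transformation measurements at the heart of this approach come from homography estimation between tracked feature points. A standard way to compute such a homography is the direct linear transform (DLT); the sketch below uses illustrative data (not from the thesis) and recovers a known plane-induced transform from four correspondences:

```python
import numpy as np

def homography_dlt(src, dst):
    # Estimate H (3x3, up to scale) with dst ~ proj(H @ [src, 1]) via the DLT:
    # each correspondence contributes two rows of a homogeneous linear system,
    # and the null vector of that system (last right-singular vector) is H.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# A known plane-induced transform and four tracked "feature points".
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[10.0, 10.0], [200.0, 15.0], [190.0, 180.0], [12.0, 170.0]])
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]

H_est = homography_dlt(src, dst)
```

With noisy feature points, as the abstract anticipates, the same routine is run on more than four correspondences and yields a least-squares homography, whose errors the filter must then absorb.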
APA, Harvard, Vancouver, ISO, and other styles
19

El-Hameid, Hossam Hassan Abd. "Motion parameter estimation for autonomous vehicles." Thesis, Cranfield University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Krause, Edward A. "Motion estimation for frame-rate conversion." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/58490.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science.
Videocartridge only available in MIT Institute Archives.
Bibliography: leaves 177-180.
by Edward A. Krause.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
21

Hedborg, Johan. "Motion and Structure Estimation From Video." Doctoral thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-76904.

Full text
Abstract:
Digital camera equipped cell phones were introduced in Japan in 2001; they quickly became popular and by 2003 outsold the entire stand-alone digital camera market. In 2010 sales passed one billion units and the market is still growing. Another trend is the rising popularity of smartphones, which has led to a rapid development of the processing power on a phone, and many units sold today bear close resemblance to a personal computer. The combination of a powerful processor and a camera which is easily carried in your pocket opens up a large field of interesting computer vision applications. The core contribution of this thesis is the development of methods that allow an imaging device such as the cell phone camera to estimate its own motion and to capture the observed scene structure. One of the main focuses of this thesis is real-time performance, where a real-time constraint does not only result in shorter processing times, but also allows for user interaction. In computer vision, structure from motion refers to the process of estimating camera motion and 3D structure by exploring the motion in the image plane caused by the moving camera. This thesis presents several methods for estimating camera motion. Given the assumption that a set of images has known camera poses associated with them, we train a system to solve the camera pose very fast for a new image. For the cases where no a priori information is available, a fast minimal-case solver is developed. The solver uses five points in two camera views to estimate the cameras' relative position and orientation. This type of minimal-case solver is usually used within a RANSAC framework. In order to increase accuracy and performance, a refinement to the random sampling strategy of RANSAC is proposed. It is shown that the new scheme doubles the performance for the five-point solver used on video data. For larger systems of cameras a new bundle adjustment method is developed which is able to handle video from cell phones.
Demands for reduction in size, power consumption and price have led to a redesign of the image sensor. As a consequence the sensors have changed from a global shutter to a rolling shutter, where a rolling-shutter image is acquired row by row. Classical structure from motion methods are modeled on the assumption of a global shutter, and a rolling shutter can severely degrade their performance. One of the main contributions of this thesis is a new bundle adjustment method for cameras with a rolling shutter. The method accurately models the camera motion during image exposure with an interpolation scheme for both position and orientation. The developed methods are not restricted to cell phones, but are rather applicable to any type of mobile platform that is equipped with cameras, such as an autonomous car or a robot. The domestic robot comes in many flavors, everything from vacuum cleaners to service and pet robots. A robot equipped with a camera that is capable of estimating its own motion while sensing its environment, like the human eye, can provide an effective means of navigation for the robot. Many of the presented methods are well suited to robots, where low latency and real-time constraints are crucial in order to allow them to interact with their environment.
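The RANSAC framework around a minimal-case solver can be illustrated with a much simpler model than the five-point solver: here the minimal sample is a single correspondence and the model is a 2-D translation. This is a hedged sketch of the general hypothesize-and-verify scheme with illustrative data, not the thesis's solver or its refined sampling strategy:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic correspondences: 70% follow one translation, 30% are gross outliers.
t_true = np.array([4.0, -2.0])
src = rng.uniform(0, 100, (100, 2))
dst = src + t_true + 0.1 * rng.standard_normal((100, 2))
dst[70:] = rng.uniform(0, 100, (30, 2))           # mismatched features

def ransac_translation(src, dst, iters=200, thresh=0.5):
    best_count, best_mask = 0, None
    for _ in range(iters):
        i = rng.integers(len(src))                # minimal sample: 1 correspondence
        t = dst[i] - src[i]                       # hypothesized model
        inliers = np.linalg.norm(dst - (src + t), axis=1) < thresh
        if inliers.sum() > best_count:            # verify against all data
            best_count, best_mask = int(inliers.sum()), inliers
    # refit on the largest consensus set
    return (dst[best_mask] - src[best_mask]).mean(axis=0)

t_est = ransac_translation(src, dst)
print(t_est)   # close to t_true despite 30% outliers
```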
APA, Harvard, Vancouver, ISO, and other styles
22

Peacock, Andrew M. "Information fusion for improved motion estimation." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/428.

Full text
Abstract:
Motion Estimation is an important research field with many commercial applications including surveillance, navigation, robotics, and image compression. As a result, the field has received a great deal of attention and there exist a wide variety of Motion Estimation techniques which are often specialised for particular problems. The relative performance of these techniques, in terms of both accuracy and of computational requirements, is often found to be data dependent, and no single technique is known to outperform all others for all applications under all conditions. Information Fusion strategies seek to combine the results of different classifiers or sensors to give results of a better quality for a given problem than can be achieved by any single technique alone. Information Fusion has been shown to be of benefit to a number of applications including remote sensing, personal identity recognition, target detection, forecasting, and medical diagnosis. This thesis proposes and demonstrates that Information Fusion strategies may also be applied to combine the results of different Motion Estimation techniques in order to give more robust, more accurate and more timely motion estimates than are provided by any of the individual techniques alone. Information Fusion strategies for combining motion estimates are investigated and developed. Their usefulness is first demonstrated by combining scalar motion estimates of the frequency of rotation of spinning biological cells. Then the strategies are used to combine the results from three popular 2D Motion Estimation techniques, chosen to be representative of the main approaches in the field. Results are presented, from both real and synthetic test image sequences, which illustrate the potential benefits of Information Fusion to Motion Estimation applications. There is often a trade-off between accuracy of Motion Estimation techniques and their computational requirements. 
An architecture for Information Fusion that allows faster, less accurate techniques to be effectively combined with slower, more accurate techniques is described. This thesis describes a number of novel techniques for both Information Fusion and Motion Estimation which have potential scope beyond that examined here. The investigations presented in this thesis have also been reported in a number of workshop, conference and journal papers, which are listed at the end of the document.
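As a concrete illustration of the fusion idea, inverse-variance weighting is one standard way to combine several estimates of the same quantity into a single lower-variance estimate. A minimal sketch (this particular rule is our illustrative assumption, not necessarily the strategy developed in the thesis):

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Combine independent estimates of one quantity by weighting each
    with the inverse of its variance; the fused estimate has lower
    variance than any single input."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float((w * np.asarray(estimates, dtype=float)).sum() / w.sum())
```

Fusing estimates 2.0 and 4.0 with equal variances returns their mean, 3.0, while a more reliable first estimate pulls the fused result toward 2.0.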
APA, Harvard, Vancouver, ISO, and other styles
23

Cook, Daniel A. "Synthetic Aperture Sonar Motion Estimation and Compensation." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14538.

Full text
Abstract:
Synthetic aperture sonar (SAS) is the underwater acoustic counterpart to stripmap-mode synthetic aperture radar (SAR). Current seagoing SAS systems are deployed on unmanned robotic vehicles, commonly referred to as autonomous underwater vehicles (AUVs). As with SAR, SAS imaging is ideally done with a straight-line collection trajectory. However, SAS is far more susceptible to image degradation caused by the actual sensor trajectory deviating from a perfectly straight line. Unwanted motion is virtually unavoidable in the sea due to the influence of currents and wave action. In order to construct a perfectly focused SAS image, the motion must either be constrained to within one-eighth of a wavelength over the synthetic aperture, or it must be measured with the same degree of accuracy and then accounted for in the processing software. Since the former is not possible, the latter approach must be taken. The technique known as redundant phase centers (RPC) has proven to be instrumental in solving the problem of SAS motion compensation. In essence, RPC simply refers to the practice of overlapping a portion of the receiver array from one ping (transmission and reception) to the next. The signals observed by this overlapping portion will be identical except for a time shift proportional to the relative motion between pings. The time shifts observed by the RPC channels of the receiver array are scalars representing the projection of the array receiver locations onto the image slant plane, and these time shifts can be used to compensate for the unwanted platform motion. This thesis presents several extensions to the standard RPC technique in which the RPC time delays are combined with the AUV's on-board navigation data. The scalar time delays are decomposed into the components induced by the six degrees of freedom of the motion: i.e., the linear and angular velocities. 
Thus, the time delays observed in the image slant plane can be used to refine the motion estimate in an absolute frame of reference external to the AUV. Creating a high-resolution SAS image of the sea floor in an automatic fashion demands accurate and robust motion estimation. The performance of the motion estimation schemes presented is demonstrated using actual field data collected from an assortment of current research SAS systems.
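The core RPC measurement described above, a time shift between near-identical signals on successive pings, can be sketched as a cross-correlation peak search. This is a simplified stand-in for real SAS processing, which interpolates the peak to sub-sample accuracy; the pulse shape and sample rate below are illustrative assumptions:

```python
import numpy as np

def rpc_time_delay(ping_a, ping_b, fs):
    """Estimate the time shift between the signals seen by overlapping
    (redundant) phase centers on two successive pings.  The peak of the
    cross-correlation gives the lag in whole samples; dividing by the
    sample rate converts it to seconds."""
    xcorr = np.correlate(ping_b, ping_a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(ping_a) - 1)  # lag in samples
    return lag / fs


# Illustrative use: a pulse delayed by 5 samples at fs = 1 kHz.
fs = 1000.0
n = np.arange(200)
pulse = np.exp(-0.5 * ((n - 100) / 5.0) ** 2)   # Gaussian pulse
delayed = np.roll(pulse, 5)                      # 5-sample delay
delay = rpc_time_delay(pulse, delayed, fs)       # expected 0.005 s
```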
APA, Harvard, Vancouver, ISO, and other styles
24

Sari, Huseyin. "Motion Estimation Using Complex Discrete Wavelet Transform." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1223205/index.pdf.

Full text
Abstract:
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many fields such as stereo optics, video compression, robotics and computer vision. In this thesis, the complex wavelet based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image through blocks of filtering with a set of Gabor-like kernels with different scales and orientations. The output is a hierarchy of scaled and subsampled orientation-tuned subimages. The motion estimation algorithm is based on the relationship between translations in the image domain and phase shifts in the CDWT domain, which is satisfied by the shiftability and interpolability properties of the CDWT. Optical flow is estimated by using this relationship at each scale, in a coarse-to-fine (hierarchical) manner, where information from finer scales is used to refine the estimates from coarser scales. The performance of the motion estimation algorithm is investigated with various image sequences as input, and the effects of options in the algorithm, such as curvature correction and the interpolation kernel between levels, and of parameter values, such as the confidence threshold, the maximum number of CDWT levels and the minimum finest level of detail, are also examined and discussed. The test results show that the method is superior to other well-known algorithms in estimation accuracy, especially under high illuminance variations and additive noise.
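The phase-shift relationship at the heart of the algorithm can be illustrated in one dimension: a translation d shifts the phase of a complex band-pass (Gabor-like) filter response by omega*d, where omega is the filter's centre frequency. A hedged sketch, in which the window shape and omega are our illustrative choices rather than the thesis's exact CDWT kernels:

```python
import numpy as np

def phase_displacement(frame1, frame2, omega=0.5):
    """Estimate a 1-D translation from the phase difference of complex
    Gabor-like filter responses: frame2[n] = frame1[n - d] shifts the
    response phase by -omega*d, so d = -delta_phi / omega."""
    n = np.arange(len(frame1), dtype=float)
    window = np.exp(-0.5 * ((n - n.mean()) / (len(n) / 6.0)) ** 2)
    kernel = window * np.exp(1j * omega * n)
    r1 = np.vdot(kernel, frame1)           # vdot conjugates the kernel
    r2 = np.vdot(kernel, frame2)
    dphi = np.angle(r2 * np.conj(r1))      # phase difference in (-pi, pi]
    return -dphi / omega                   # displacement in samples


n = np.arange(64, dtype=float)
f1 = np.cos(0.5 * n)
f2 = np.cos(0.5 * (n - 2.0))              # f1 translated by d = 2
d_hat = phase_displacement(f1, f2)        # close to 2.0
```

Note the unwrapped phase limits the measurable shift to |d| < pi/omega, which is why the full algorithm works coarse-to-fine across scales.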
APA, Harvard, Vancouver, ISO, and other styles
25

Bretzner, Lars. "Multi-scale feature tracking and motion estimation." Doctoral thesis, KTH, Numerisk analys och datalogi, NADA, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2856.

Full text
Abstract:
This thesis studies the problems of feature tracking and motion estimation and presents an application of these concepts to human-computer interaction. The presentation is divided into three parts. The first part addresses feature tracking in a multi-scale context. Features in an image appear at different scales, and these scales can be expected to change over time due to the size variations that occur when objects move relative to the camera. A scheme for feature tracking is presented, which incorporates a mechanism for automatic scale selection and it is argued that such a mechanism is necessary to handle size variations over time. Experiments demonstrate how the proposed scheme is robust to size variations in situations where a traditional fixed scale tracker fails. This leads to extended feature trajectories, which are valuable for motion and structure estimation. It is also shown how an object representation suitable for tracking can be built in a conceptually simple way as a multi-scale feature hierarchy with qualitative relations between features at different scales. Experiments illustrate the capability of the proposed hierarchy to handle occlusions and semirigid objects. The second part of the thesis develops a geometric framework for computing estimates of 3D structure and motion from sparse feature correspondences in monocular sequences. A tool is presented, called the centered affine trifocal tensor, for motion estimation from three affine views. Moreover, a factorization approach is developed which simultaneously handles point and line correspondences in multiple affine views. Experiments show the influence of several factors on the accuracy of the structure and motion estimates, including noise in the feature localization, perspective effects and the number of feature correspondences. This motion estimation framework is also applied to feature correspondences obtained from the abovementioned feature tracker. 
The last part integrates the functionalities from the first two parts into a pre-prototype system which explores new principles for human-computer interaction. The idea is to transfer 3D orientation to a computer using no other equipment than the operator’s hand.
QC 20100519
APA, Harvard, Vancouver, ISO, and other styles
26

Farnebäck, Gunnar. "Polynomial expansion for orientation and motion estimation." Doctoral thesis, Linköpings universitet, Bildbehandling, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54282.

Full text
Abstract:
This thesis introduces a new signal transform, called polynomial expansion, and based on this develops novel methods for estimation of orientation and motion. The methods are designed exclusively in the spatial domain and can be used for signals of any dimensionality. Two important concepts in the use of the spatial domain for signal processing are projections into subspaces, e.g. the subspace of second degree polynomials, and representations by frames, e.g. wavelets. It is shown how these concepts can be unified in a least squares framework for representation of finite dimensional vectors by bases, frames, subspace bases, and subspace frames. This framework is used to give a new derivation of normalized convolution, a method for signal analysis that takes uncertainty in signal values into account and also allows for spatial localization of the analysis functions. Polynomial expansion is a transformation which at each point transforms the signal into a set of expansion coefficients with respect to a polynomial local signal model. The expansion coefficients are computed using normalized convolution. As a consequence, polynomial expansion inherits the mechanism for handling uncertain signals, and the spatial localization feature allows good control of the properties of the transform. It is shown how polynomial expansion can be computed very efficiently. As an application of polynomial expansion, a novel method for estimation of orientation tensors is developed. A new concept for orientation representation, orientation functionals, is introduced and it is shown that orientation tensors can be considered a special case of this representation. By evaluation on a test sequence it is demonstrated that the method performs excellently. Considering an image sequence as a spatiotemporal volume, velocity can be estimated from the orientations present in the volume. 
Two novel methods for velocity estimation are presented, with the common idea to combine the orientation tensors over some region for estimation of the velocity field according to a parametric motion model, e.g. affine motion. The first method involves a simultaneous segmentation and velocity estimation algorithm to obtain appropriate regions. The second method is designed for computational efficiency and uses local neighborhoods instead of trying to obtain regions with coherent motion. By evaluation on the Yosemite sequence, it is shown that both methods give substantially more accurate results than previously published methods. Another application of polynomial expansion is a novel displacement estimation algorithm, i.e. an algorithm which estimates motion from only two consecutive frames rather than from a whole spatiotemporal volume. This approach is necessary when the motion is not temporally coherent, e.g. because the camera is affected by vibrations. It is shown how moving objects can robustly be detected in such image sequences by using the plane+parallax approach to separate out the background motion. To demonstrate the power of being able to handle uncertain signals, it is shown how normalized convolution and polynomial expansion can be computed for interlaced video signals. Together with the displacement estimation algorithm, this gives a method to estimate motion from a single interlaced frame.
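The displacement idea can be sketched in one dimension: if each frame is modelled locally as a quadratic polynomial, a pure translation d changes the linear coefficient from b to b - 2*a*d. The least-squares toy version below uses a single global polynomial fit rather than the thesis's normalized convolution, so it is an illustration of the principle only:

```python
import numpy as np

def poly_displacement(sig1, sig2):
    """Fit f(x) = a*x**2 + b*x + c to each signal; under a translation
    sig2(x) = sig1(x - d) the linear coefficients satisfy
    b2 = b1 - 2*a*d, giving d = (b1 - b2) / (2*a)."""
    x = np.arange(len(sig1), dtype=float)
    X = np.vstack([x**2, x, np.ones_like(x)]).T
    a1, b1, _ = np.linalg.lstsq(X, sig1, rcond=None)[0]
    a2, b2, _ = np.linalg.lstsq(X, sig2, rcond=None)[0]
    a = 0.5 * (a1 + a2)                    # shared curvature estimate
    return (b1 - b2) / (2.0 * a)


x = np.arange(32, dtype=float)
d_hat = poly_displacement((x - 10.0) ** 2, (x - 12.0) ** 2)  # true d = 2
```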
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. The use of many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are a practically implementable example of general camera models, and they are now widely used in many applications because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras that can be solved using our proposed modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using the proposed linear method. Third, we present a linear six-point method (five points from the same camera and one point from another camera) that estimates the relative motion of multi-camera systems in which the cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations in which the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP.

We tested the proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution under L∞ for the relative motion of multi-camera systems can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
28

Hemmendorff, Magnus. "Motion estimation and compensation in medical imaging /." Linköping : Univ, 2001. http://www.bibl.liu.se/liupubl/disp/disp2001/tek703s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Andersson, Kenneth. "Motion estimation for perceptual image sequence coding /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek794s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Mitikiri, Praveen Kumar. "Rate distortion analysis for conditional motion estimation." Thesis, Wichita State University, 2008. http://hdl.handle.net/10057/2010.

Full text
Abstract:
Rate distortion analysis is a branch of information theory that predicts the tradeoffs between rate and distortion in source coding. In this thesis, we present a rate distortion analysis for conditional motion estimation, a process that estimates motion based on a criterion that affects the coding rate, the complexity of the coding scheme, and the quality of the reconstructed video. To guide the rate distortion analysis, we use a conditional motion estimation scheme that estimates motion only for blocks selected on the basis of significant changes. We begin by explaining the conditional motion estimation technique and the effect of the decision criterion on the technique. We then model the motion vectors as a Gaussian-Markov process and study the rate distortion tradeoffs in the video encoding scheme. The rate distortion bound derived in this manner is also validated with a practical approach.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering
Includes bibliographic references (leaves 28-31)
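The selection step of conditional motion estimation, running the (costly) motion search only for blocks that show significant change, can be sketched as follows. The block size and threshold are illustrative assumptions; the thesis studies how such a decision criterion shapes the rate-distortion tradeoff:

```python
import numpy as np

def blocks_to_estimate(prev, curr, bsize=8, thresh=10.0):
    """Return (top, left) coordinates of the blocks whose mean absolute
    frame difference exceeds the threshold; only these blocks undergo
    motion estimation."""
    selected = []
    for top in range(0, curr.shape[0] - bsize + 1, bsize):
        for left in range(0, curr.shape[1] - bsize + 1, bsize):
            diff = np.abs(curr[top:top + bsize, left:left + bsize].astype(float)
                          - prev[top:top + bsize, left:left + bsize])
            if diff.mean() > thresh:
                selected.append((top, left))
    return selected


prev = np.zeros((16, 16))
curr = prev.copy()
curr[:8, :8] = 100.0                       # only the top-left block changes
changed = blocks_to_estimate(prev, curr)   # [(0, 0)]
```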
APA, Harvard, Vancouver, ISO, and other styles
31

Armitano, Robert M. "Efficient motion-estimation algorithms for video coding." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chung, Hing-yip Ronald, and 鍾興業. "Fast motion estimation with search center prediction." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31220721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ildiz, Faith. "Estimation of motion parameters from image sequences." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28176.

Full text
Abstract:
Image motion analysis algorithms that generate the two-dimensional velocity of objects in a sequence of images are developed. The algorithms considered consist of: the parallel extended Kalman filter method; the spatiotemporal gradient methods; the spatiotemporal frequency methods; and the one-dimensional FFT methods. These algorithms are designed to perform on low signal-to-noise-ratio images. Each of these algorithms is applied to a sequence of computer generated images with varying signal-to-noise ratios. Simulations are used to evaluate the performance of each algorithm. (Author)
APA, Harvard, Vancouver, ISO, and other styles
34

Guo, Yizhou. "Motion estimation and object-based video coding." Thesis, University of Bath, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yan, Hongshi. "Robust optical flow estimation and motion segmentation." Thesis, University of Warwick, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Porter, Sarah Victoria. "Video segmentation and indexing using motion estimation." Thesis, University of Bristol, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

GONZALES, JOSE ANTONIO CASTINEIRA. "EVALUATING MOTION ESTIMATION ALGORITHMS FOR VIDEO COMPRESSION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1996. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8711@1.

Full text
Abstract:
This work studied motion estimation algorithms based on block matching in order to evaluate how the choice of motion estimation algorithm affects the design of a coder for image sequence compression. Four block-matching algorithms were studied, and the interdependence of their parameters, such as search area size, block distortion measures, and block sizes, was evaluated with respect to the quality of the reconstructed image.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Peng. "Joint Estimation and Calibration for Motion Sensor." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286839.

Full text
Abstract:
In this thesis, a method is designed and implemented for calibrating the position of each accelerometer in an inertial measurement unit (IMU) sensor array. To model the motion of the sensor array in the real world, we build a state space model; the problem is then to estimate the parameters of this model. The problem is solved within the Maximum Likelihood (ML) framework, and two methods are implemented and analyzed: one based on Expectation Maximization (EM), and one that optimizes the cost function directly using Gradient Descent (GD). In the EM algorithm, an ill-conditioned problem arises in the M step, which degrades the performance of the algorithm, especially when the initial error is small; in that case the final Mean Square Error (MSE) curve diverges. With enough data samples, the EM algorithm works well when the initial error is large. In the Gradient Descent method, a reformulation of the problem avoids the ill-conditioned problem. After the parameter estimation, we analyze the MSE curves of the parameters through Monte Carlo simulation. The final MSE curves show that the Gradient Descent based method is more robust in handling the numerical issues of the parameter estimation, and the simulations also show that it is robust to the noise level.
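The direct-optimisation route can be illustrated on a deliberately tiny problem: estimating a sensor's gain and bias by gradient descent on the squared-error cost, which is the ML criterion under Gaussian noise. This two-parameter toy stands in for the thesis's much larger state-space estimation problem; all names and values are our assumptions:

```python
import numpy as np

def fit_gain_bias(measured, truth, lr=0.1, steps=2000):
    """Gradient descent on the mean squared error between a linear
    sensor model g*truth + b and the measurements."""
    g, b = 1.0, 0.0                        # initial guess
    n = float(len(truth))
    for _ in range(steps):
        resid = g * truth + b - measured   # current model error
        g -= lr * 2.0 * (resid * truth).sum() / n
        b -= lr * 2.0 * resid.sum() / n
    return g, b


truth = np.linspace(0.0, 1.0, 50)
measured = 1.5 * truth + 0.3               # noiseless data: gain 1.5, bias 0.3
g_hat, b_hat = fit_gain_bias(measured, truth)
```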
APA, Harvard, Vancouver, ISO, and other styles
39

Ruhnau, Paul. "Variational fluid motion estimation with physical priors." [S.l. : s.n.], 2006. http://madoc.bib.uni-mannheim.de/madoc/volltexte/2007/1432/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mitikiri, Praveen Kumar Namuduri Kamiswara. "Rate distortion analysis for conditional motion estimation." A link to full text of this thesis in SOAR, 2008. http://hdl.handle.net/10057/2010.

Full text
Abstract:
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
Copyright 2008 by Praveen Kumar Mitikiri. All Rights Reserved. Includes bibliographical references (leaves 28-31).
APA, Harvard, Vancouver, ISO, and other styles
41

Chan, Wing Cheong. "Fast global motion estimation and color interpolation /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20CHANW.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 74-76). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
42

Jiang, Hai. "Quantitative deformable motion estimation for biophysical analysis /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

"Motion estimation and segmentation." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074551.

Full text
Abstract:
Motion estimation is an important part of many video processing applications, such as video compression, object segmentation, and scene analysis. In all video compression applications, motion information is used to reduce temporal redundancy between frames and thus significantly reduce the bitrate required for transmission and storage of compressed video. In addition, in object-based video coding, a video object can be automatically identified by its motion against the background.
In the first part of our research, we proposed a block matching algorithm called Fast Walsh Search (FWS) for video motion estimation. FWS employs two new error measures defined in the Walsh Hadamard domain: the partial sum-of-absolute-difference (PSAD) and the sum-of-absolute-difference of DC coefficients (SADDCC). The algorithm first rejects most mismatched candidates using PSAD, a coarse measure requiring little computation. Because of the energy packing ability of the Walsh Hadamard transform (WHT) and the use of a fast WHT computation algorithm, mismatched candidates are identified and rejected efficiently. The algorithm then identifies the matched candidate from the remaining candidates using SADDCC, a more accurate measure that can reuse the computation performed for PSAD. Experimental results show that FWS gives good visual quality for most video scenes with a reasonable amount of computation.
Based on the fixed block size FWS algorithm, we further proposed a fast full-pel variable block size motion estimation algorithm called Fast Walsh Search in Variable Block Size (FWS-VBS). As in FWS, FWS-VBS employs PSAD as the error measure to identify likely mismatches. Mismatches are rejected by thresholding, and the thresholds are determined adaptively to cater for different activity levels in each block. Early termination techniques further reduce the number of candidates and modes searched for each block. FWS-VBS performs as well as the exhaustive full search algorithm in the reference H.264/AVC encoder and requires only about 10% of its computation time.
In the second part of our research, we developed a real-time video object segmentation algorithm. The motion information is obtained by FWS-VBS to minimize computation time while maintaining adequate accuracy. The algorithm uses the motion information to identify the background motion model and the moving objects. To preserve the spatial and temporal continuity of objects, a Markov random field (MRF) is used to model the foreground field, and the block-based foreground object mask is obtained by minimizing the energy function of the MRF. The resulting object mask is then post-processed to generate a smooth object mask. Experimental results show that the proposed algorithm can effectively extract moving objects from different kinds of sequences, at a speed of less than 100 ms per frame for CIF frame size video.
Furthermore, we modified the proposed segmentation algorithm to handle video sequences already encoded in the H.264 format. Since the video is compressed, no spatial information is directly available; instead, quantized transform coefficients of the residual frame are used to approximate spatial information and improve the segmentation result. The computation time of the segmentation process is merely about 16 ms per frame for CIF frame size video, allowing the algorithm to be applied in real-time applications such as video surveillance and conferencing.
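The mismatch-rejection principle behind FWS, abandoning a candidate as soon as a cheap partial error measure already exceeds the best match found so far, can be sketched with plain SAD. FWS itself computes its measures in the Walsh-Hadamard domain; this simplified spatial-domain version is our illustration of the early-termination idea only:

```python
import numpy as np

def block_match_early_exit(block, frame, top, left, search=4):
    """Full search over a +/-search window, accumulating the SAD row by
    row and abandoning a candidate once the partial sum exceeds the
    best SAD found so far."""
    bh, bw = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > frame.shape[0] or x + bw > frame.shape[1]:
                continue
            sad = 0.0
            for row in range(bh):          # partial accumulation
                sad += np.abs(block[row].astype(float)
                              - frame[y + row, x:x + bw]).sum()
                if sad >= best_sad:        # early termination
                    break
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad


rng = np.random.default_rng(0)
frame = rng.random((32, 32))
block = frame[12:20, 11:19]                # true motion (dy, dx) = (2, 1)
mv, sad = block_match_early_exit(block, frame, top=10, left=10)
```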
Mak, Chun Man.
"June 2008."
Adviser: Wai-Kuen Cham.
Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1849.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references.
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Pei-Yao, and 陳沛堯. "Human Motion Extrapolation via Motion Analysis and 2D-3D Motion Estimation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81683950888361420836.

Full text
Abstract:
Master's
Tamkang University
Department of Computer Science and Information Engineering (Master's Program)
99
3D virtual reality has been a very popular research area over the last decade, and 3D products are common in the video game industry. However, most 3D products and research do not generate content from real images or videos, so we propose a method that uses real 2D images to render 3D-like views. Our goal is to display an object in the style of Google Maps: the loaded object can be rotated, and the view corresponding to the current angle is shown. In general, if more than one camera were available to shoot the object from every surrounding viewpoint, synthesizing the view when switching angles would be easy, because the matching frames could be obtained directly. Such preparatory work, however, is too expensive. Our main idea is instead to shoot a person from a few angles, observe the regularities of human motion, analyze that motion, and finally infer a rule relating the motion to changes of viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
45

Chuan-Yu, Cho. "Adaptive Motion Estimation Algorithm for Varied Motion Contents and VLSI Motion Estimation Architecture Design for H.264/AVC." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-0109200613403225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Cho, Chuan-Yu, and 卓傳育. "Adaptive Motion Estimation Algorithm for Varied Motion Contents and VLSI Motion Estimation Architecture Design for H.264/AVC." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03014446886289901505.

Full text
Abstract:
Doctorate
National Tsing Hua University
Department of Computer Science
94
Motion estimation (ME) plays an important role in H.264, not only because it has extremely high computational complexity, but also because it affects the subsequent block coding modes as well as the final coded bit-stream size. Therefore, studying advanced ME algorithms is one of the most efficient ways to improve the coding efficiency of a video codec. In this thesis, we present two ME schemes, with software- and hardware-based implementations, respectively. The software-based scheme starts with a study of fast block-matching algorithms (FBMAs), whose efficiencies are illustrated in terms of algorithm checking points. A priority list is introduced to help classify the motion-content types of real-world sequences. After statistical analyses of the proposed priority list and the FBMAs, we propose a motion-content-adaptive FBMA, which adaptively switches among three different FBMA search strategies to maximize coding efficiency under motion-content variations. The latest H.264/AVC video coding standard adopts variable block size (VBS) block partitions and multiple reference frames (MRF), which make the motion-compensation stage extremely complicated. To save intermediate memory and maximize hardware utilization, we propose an embedded merging scheme with a pipeline-based MRF extension. With this embedded design, only one copy of the intermediate memory is required, and full utilization is expected once the pipeline stages are filled.
APA, Harvard, Vancouver, ISO, and other styles
47

Demirdjian, D., and T. Darrell. "Motion Estimation from Disparity Images." 2001. http://hdl.handle.net/1721.1/6079.

Full text
Abstract:
A new method for 3D rigid motion estimation from stereo is proposed in this paper. The appealing feature of this method is that it directly uses the disparity images obtained from stereo matching. We assume that the stereo rig has parallel cameras and show, in that case, the geometric and topological properties of the disparity images. Then we introduce a rigid transformation (called d-motion) that maps two disparity images of a rigidly moving object. We show how it is related to the Euclidean rigid motion and a motion estimation algorithm is derived. We show with experiments that our approach is simple and more accurate than standard approaches.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhong, Kun-Zhe, and 鍾昆哲. "Backward Motion Estimation For Interpolation." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/62855169049956264029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Chong-Shou, and 余宗修. "Adaptive Fast Motion-Estimation Algorithms." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/06188544678702312191.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
96
Videos, music, and various other multimedia objects are everywhere in modern digital life. Multimedia communication relies on data compression to reduce the number of bits transmitted, so video data must be compressed before storage or transmission. Video compression exploits the temporal redundancy between frames to achieve higher compression efficiency. In most video coding standards, motion compensation is the key function that exploits this redundancy; it requires a motion vector derived by motion estimation. Because of the heavy computational cost of motion estimation, many fast algorithms have been proposed in recent years. Some use a regional search that refines the motion vector gradually; others use logarithm-based fast-search strategies. However, algorithms based on small search patterns usually spend large amounts of computation estimating large motion vectors, while mid-sized search-pattern algorithms spend extra search steps confirming near-zero motion vectors. Divide-and-conquer methods, on the other hand, are not optimized for the probability model of motion vectors, so these logarithm-based algorithms waste additional time estimating static motions that small-search-pattern algorithms find easily. In this dissertation, three adaptive fast motion-estimation algorithms are proposed. The first employs a search pattern derived from data clustering of motion vectors: their statistics are used to generate a pair of complementary double-layered (inner- and outer-layer) initial search patterns. The inner-layer search is applied first and tests for small motion; the outer-layer search, based on a logarithmic search, serves as a guard line to catch large motion.
Simulation results show that the necessary computation is reduced to 9-12 block matchings per macroblock, much lower than the 14-16 of the well-known efficient three-step search, while the visual quality of the motion-compensated result is better than that of most other algorithms. The second algorithm, the adaptive double-layered initial search pattern (ADLISP), is an improved version of the first. ADLISP employs motion prediction to improve search speed, and the positions of its search points adapt to recent motions. By using the pseudo-median points of the motion-vector probability distribution, ADLISP achieves even better search efficiency than the logarithm-based fast motion-estimation algorithms: the average number of search points per macroblock drops to 6.96, only half of what the efficient three-step search requires. On average, ADLISP increases the mean square error (MSE) of the motion-compensated video by only 11.2%, nearly the same level as diamond search, which is efficient at estimating small motions; the peak signal-to-noise ratio (PSNR) loss is only 0.408 dB compared to full search. The third fast motion-estimation algorithm references the motion and distortion information of neighboring macroblocks to decide the best search mode. Different modes use different numbers of search points, whose positions adapt to recently estimated motion vectors, and an early-termination mechanism saves computation in some situations. The algorithm aims to keep a suitable number of search points in suitable positions to catch motion at any scale. Experimental results show that it estimates motion vectors extremely quickly without conspicuous visual-quality loss.
Simulations on 23 standard video test sequences show that the average number of search points is reduced to 4.573, while the peak signal-to-noise ratio loss is merely 0.336 dB compared to full search.
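For reference, the efficient three-step search that the abstract uses as its baseline can be sketched as follows (a generic textbook version, not the thesis's adaptive algorithms; the smooth test image is illustrative): the step size halves from 4 to 1, and at each step the centre moves to the lowest-SAD point among itself and its eight neighbours.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def three_step_search(ref, cur, top, left, block=8):
    # Classic three-step search: step sizes 4, 2, 1. At each step the SAD of
    # the current centre and its eight neighbours is evaluated, and the centre
    # moves to the best of those (at most) nine points.
    target = cur[top:top + block, left:left + block]
    cy, cx = 0, 0
    for step in (4, 2, 1):
        best, best_cost = (cy, cx), None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + cy + dy, left + cx + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue  # candidate falls outside the frame
                cost = sad(ref[y:y + block, x:x + block], target)
                if best_cost is None or cost < best_cost:
                    best_cost, best = cost, (cy + dy, cx + dx)
        cy, cx = best
    return cy, cx

# A smooth blob gives the greedy search a well-behaved cost surface to descend.
yy, xx = np.mgrid[0:40, 0:40]
ref = np.exp(-((yy - 20.0) ** 2 + (xx - 20.0) ** 2) / 60.0) * 255.0
cur = np.zeros_like(ref)
cur[16:24, 16:24] = ref[20:28, 14:22]   # true motion is (4, -2)
print(three_step_search(ref, cur, 16, 16))  # -> (4, -2)
```

With 3 × 9 = 27 checking points at most (25 distinct ones), this is the 14-16 block-matchings-per-macroblock baseline the abstract compares against once duplicate and out-of-frame points are skipped; the thesis's algorithms cut that further to about 7 and 4.6 points.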
APA, Harvard, Vancouver, ISO, and other styles
50

SHI, SHYH-YUE, and 施世聿. "Motion Estimation Using Wavelet Transform." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/10714463771594927502.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Institute of Applied Mathematics
82
In this report, the moving tracks of objects are derived by applying wavelet transforms. Conventionally, two major categories of motion-estimation methods are used to find the motion vector. The first is block matching, which divides an image into blocks of the same size and compares each block in the current frame with the corresponding blocks in the previous frame to find the best match; the motion vector is obtained from the displacement of the matching block. This method has the disadvantage that the result is sensitive to the choice of block size. The second category transforms the image using some transformation, most commonly the Fourier transform. The Fourier-transform method has the disadvantage that two objects with the same velocity cannot be discriminated: if two objects move with the same velocity, they are falsely identified as one. Before applying the wavelet transform, the 2-dimensional motion function is first projected onto both the x-axis and the y-axis, and these two projections are then wavelet-transformed one at a time. By observing the transformed graph, the motion vector and the initial position can be estimated, and two objects with the same velocity can be discriminated. Tracks describable by smooth functions are approximated with degree-n polynomials. This report first gives an overview of various motion-estimation methods, then introduces wavelet theory and applies it to the estimation of motion vectors, and finally presents computer simulations whose performance is compared with that of the Fourier-transform methods.
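The projection step described above can be sketched in a few lines (an illustrative stand-in, not the thesis's method: where the thesis analyses the x- and y-projections with wavelet transforms, this sketch aligns the same 1-D projections with plain circular cross-correlation to recover a global shift):

```python
import numpy as np

def projection_shift(prev, cur):
    # Estimate a global translation by projecting each frame onto the x- and
    # y-axes (row/column sums) and aligning the two 1-D profiles. Alignment
    # here uses circular cross-correlation via the FFT; the thesis instead
    # analyses these projections with a wavelet transform.
    def best_shift(p, q):
        # Peak of the circular cross-correlation gives the displacement.
        corr = np.fft.ifft(np.fft.fft(q) * np.conj(np.fft.fft(p))).real
        s = int(np.argmax(corr))
        return s - len(p) if s > len(p) // 2 else s  # map to signed shift
    dy = best_shift(prev.sum(axis=1), cur.sum(axis=1))  # projection onto y
    dx = best_shift(prev.sum(axis=0), cur.sum(axis=0))  # projection onto x
    return dy, dx

frame = np.zeros((64, 64))
frame[20:30, 24:36] = 1.0                                # one bright object
moved = np.roll(np.roll(frame, 3, axis=0), -5, axis=1)   # shift by (3, -5)
print(projection_shift(frame, moved))                    # -> (3, -5)
```

Reducing the 2-D frames to two 1-D profiles is what makes the approach cheap; the wavelet analysis in the thesis additionally localises the signal in time, which is what lets it separate two objects sharing the same velocity.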
APA, Harvard, Vancouver, ISO, and other styles