To view the other types of publications on this topic, follow the link: Correspondence estimation.

Dissertations on the topic "Correspondence estimation"

Consult the top 32 dissertations for research on the topic "Correspondence estimation".

Next to every work in the bibliography an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are provided in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Qi, Zhen. „Pose estimation using points to regions correspondence“. Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1663060061&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

2

Hewa, Thondilege Akila Sachinthani Pemasiri. „Multimodal Image Correspondence“. Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235433/1/Akila%2BHewa%2BThondilege%2BThesis%281%29.pdf.

Annotation:
Multimodal images are used across many application areas, including medical imaging and surveillance. Due to the different characteristics of different imaging modalities, developing image processing algorithms for multimodal images is challenging. This thesis proposes effective solutions for the challenging problem of multimodal semantic correspondence, in which connections between similar components are established across images from different modalities. The proposed methods, which are based on deep learning techniques, have been applied to several applications including epilepsy type classification and 3D reconstruction of the human hand from visible and X-ray images. These proposed algorithms can be adapted to many other imaging modalities.
3

Kazemi, Vahid. „Correspondence Estimation in Human Face and Posture Images“. Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150115.

Annotation:
Many computer vision tasks such as object detection, pose estimation, and alignment are directly related to the estimation of correspondences over instances of an object class. Other tasks such as image classification and verification, if not completely solved, can largely benefit from correspondence estimation. This thesis presents practical approaches for tackling the correspondence estimation problem with an emphasis on deformable objects. The different methods presented in this thesis vary greatly in their details, but they all use a combination of generative and discriminative modeling to estimate the correspondences from input images in an efficient manner. While the methods described in this work are generic and can be applied to any object, two classes of objects of high importance, namely human bodies and faces, are the subjects of our experimentation. When dealing with the human body, we are mostly interested in estimating a sparse set of landmarks – specifically, we are interested in locating the body joints. We use pictorial structures to model the articulation of the body parts generatively and learn efficient discriminative models to localize the parts in the image. This is a common approach explored by many previous works. We further extend this hybrid approach by introducing higher order terms to deal with the double-counting problem and provide an algorithm for solving the resulting non-convex problem efficiently. In another work we explore the area of multi-view pose estimation, where we have multiple calibrated cameras and are interested in determining the pose of a person in 3D by aggregating 2D information. This is done efficiently by discretizing the 3D search space and using the 3D pictorial structures model to perform the inference. In contrast to the human body, faces have a much more rigid structure and it is relatively easy to detect the major parts of the face such as eyes, nose and mouth, but performing dense correspondence estimation on faces under various poses and lighting conditions is still challenging. In a first work we deal with this variation by partitioning the face into multiple parts and learning separate regressors for each part. In another work we take a fully discriminative approach and learn a global regressor from image to landmarks, but to deal with the insufficiency of training data we augment it with a large number of synthetic images. While we have shown great performance on the standard face datasets for performing correspondence estimation, in many scenarios the RGB signal gets distorted as a result of poor lighting conditions and becomes almost unusable. This problem is addressed in another work where we explore the use of the depth signal for dense correspondence estimation. Here again a hybrid generative/discriminative approach is used to perform accurate correspondence estimation in real time.
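The following minimal sketch (not from the thesis; all part scores, candidate locations and spring parameters are made up) shows the flavour of pictorial-structures inference mentioned above: discriminative unary scores for each part are combined with quadratic deformation costs between neighbouring parts on a chain, and the jointly best placement is found by dynamic programming.

import numpy as np

def chain_pictorial_inference(unary, locs, offsets, sigma=5.0):
    # unary: (P, K) part scores, locs: (K, 2) candidate pixels,
    # offsets: (P-1, 2) preferred displacement between consecutive parts.
    P, K = unary.shape
    cost = -unary[0].copy()                    # DP table for part 0
    back = np.zeros((P, K), dtype=int)
    for p in range(1, P):
        # deformation cost between every pair of candidate locations
        diff = locs[None, :, :] - locs[:, None, :] - offsets[p - 1]
        pair = (diff ** 2).sum(-1) / sigma ** 2
        total = cost[:, None] + pair           # (K_prev, K_cur)
        back[p] = total.argmin(axis=0)
        cost = total.min(axis=0) - unary[p]
    # backtrack the jointly optimal placement
    best = [int(cost.argmin())]
    for p in range(P - 1, 0, -1):
        best.append(int(back[p, best[-1]]))
    return locs[best[::-1]]                    # (P, 2) chosen location per part

# toy usage: 3 parts, 50 random candidate pixels
rng = np.random.default_rng(0)
locs = rng.uniform(0, 100, size=(50, 2))
unary = rng.normal(size=(3, 50))
offsets = np.array([[10.0, 0.0], [10.0, 0.0]])
print(chain_pictorial_inference(unary, locs, offsets))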


4

Bartosch, Nadine. „Correspondence-based pairwise depth estimation with parallel acceleration“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34372.

Annotation:
This report covers the implementation and evaluation of a stereo-vision, correspondence-based depth estimation algorithm on a GPU. The results and feedback are used for a multi-view camera system combined with Jetson TK1 devices for parallelized image processing; the aim of this system is to estimate the depth of the scenery in front of it, and the performance of the algorithm plays the key role. Alongside the implementation, the objective of this study is to investigate the advantages of parallel acceleration, among them the differences to execution on a CPU, which are significant for all the functions, the overheads particular to a GPU application, such as memory transfer from the CPU to the GPU and vice versa, as well as the challenges of real-time and concurrent execution. The study has been conducted with the aid of CUDA on three NVIDIA GPUs with different characteristics, and with the aid of knowledge gained through an extensive literature study of different depth estimation algorithms, stereo vision and correspondence, and CUDA in general. Using the full set of components of the algorithm and expecting (near) real-time execution is utopian in this setup and implementation; the slowing factors include, among others, the semi-global matching. Investigating alternatives shows that disparity maps of a certain accuracy are also achieved by local methods such as the Hamming distance alone, combined with a filter that refines the results. Furthermore, it is demonstrated that the kernel launch configuration and the usage of GPU memory types such as shared memory are crucial for GPU implementations and have an impact on the performance of the algorithm. Concurrency alone proves to be a more complicated task, especially in the desired way of realization. For future work and refinement of the algorithm it is therefore recommended to invest more time into further optimization possibilities regarding shared memory and into integrating the algorithm into the actual pipeline.
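As a rough illustration of the local alternative mentioned above (census-style bit strings compared with the Hamming distance, followed by winner-take-all), here is a CPU-side NumPy sketch; it is not the thesis's CUDA implementation, and the image sizes, window radius and disparity range are arbitrary.

import numpy as np

def census_transform(img, r=3):
    # bit-pack comparisons of each pixel with its (2r+1)x(2r+1) neighbourhood
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append(shifted < img)
    return np.stack(bits, axis=-1)            # (h, w, bits) boolean census strings

def wta_disparity(left, right, max_disp=32, r=3):
    cl, cr = census_transform(left, r), census_transform(right, r)
    h, w, _ = cl.shape
    cost = np.full((h, w, max_disp), np.iinfo(np.int32).max, dtype=np.int32)
    for d in range(max_disp):
        # Hamming distance between left census and right census shifted by d
        cost[:, d:, d] = (cl[:, d:] != cr[:, :w - d]).sum(-1)
    return cost.argmin(axis=-1)               # winner-take-all disparity map

left = np.random.default_rng(0).integers(0, 255, (64, 96)).astype(np.uint8)
right = np.roll(left, -4, axis=1)             # synthetic 4-pixel disparity
print(wta_disparity(left, right)[32, 40:50])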
5

Kurdila, Hannah Robertshaw. „Gappy POD and Temporal Correspondence for Lizard Motion Estimation“. Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83603.

Annotation:
With the maturity of conventional industrial robots, there has been increasing interest in designing robots that emulate realistic animal motions. This discipline requires careful and systematic investigation of a wide range of animal motions from biped, to quadruped, and even to serpentine motion of centipedes, millipedes, and snakes. Collecting optical motion capture data of such complex animal motions can be complicated for several reasons. Often there is the need to use many high-quality cameras for detailed subject tracking, and self-occlusion, loss of focus, and contrast variations challenge any imaging experiment. The problem of self-occlusion is especially pronounced for animals. In this thesis, we walk through the process of collecting motion capture data of a running lizard. In our collected raw video footage, it is difficult to make temporal correspondences using interpolation methods because of prolonged blurriness, occlusion, or the limited field of vision of our cameras. To work around this, we first make a model data set by making our best guess of the points' locations through these corruptions. Then, we randomly eclipse the data, use Gappy POD to repair the data and then see how closely it resembles the initial set, culminating in a test case where we simulate the actual corruptions we see in the raw video footage.
Master of Science
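A minimal sketch of the Gappy POD repair step described in the abstract (not the author's code; the data and mode count are invented, and NumPy is assumed): a POD basis is computed from complete snapshots, the basis coefficients of a gappy snapshot are fit by least squares on its observed entries only, and the missing entries are filled from the reconstruction.

import numpy as np

def gappy_pod_repair(snapshots, gappy, mask, n_modes=5):
    # snapshots: (n_samples, n_features) complete training data
    # gappy: (n_features,) vector with missing entries (ignored where mask is False)
    # mask:  (n_features,) boolean, True where the entry was observed
    mean = snapshots.mean(axis=0)
    _, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = vt[:n_modes]                              # POD basis (n_modes, n_features)
    # fit modal coefficients using the observed entries only
    coeffs, *_ = np.linalg.lstsq(basis[:, mask].T,
                                 (gappy - mean)[mask], rcond=None)
    repaired = gappy.copy()
    repaired[~mask] = (mean + coeffs @ basis)[~mask]  # fill the gaps
    return repaired

# toy usage: low-rank data, randomly eclipse about 30% of one snapshot
rng = np.random.default_rng(1)
base = rng.normal(size=(4, 40))
data = rng.normal(size=(200, 4)) @ base
test = rng.normal(size=4) @ base
mask = rng.uniform(size=40) > 0.3
print(np.abs(gappy_pod_repair(data, test, mask) - test).max())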
6

Besse, F. O. „PatchMatch belief propagation for correspondence field estimation and its applications“. Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1409029/.

Annotation:
Correspondence field estimation is an important process that lies at the core of many different applications. It is often seen as an energy minimisation problem, which is usually decomposed into the combined minimisation of two energy terms. The first is the unary energy, or data term, which reflects how well the solution agrees with the data. The second is the pairwise energy, or smoothness term, which ensures that the solution displays a certain level of smoothness, which is crucial for many applications. This thesis explores the possibility of combining two well-established algorithms for correspondence field estimation, PatchMatch and Belief Propagation, in order to benefit from the strengths of both and overcome some of their weaknesses. Belief Propagation is a common algorithm that can be used to optimise energies comprising both unary and pairwise terms. It is, however, computationally expensive and thus not adapted to the continuous spaces which are often needed in imaging applications. On the other hand, PatchMatch is a simple, yet very efficient method for optimising the unary energy of such problems on continuous and high dimensional spaces. The algorithm has two main components: the update of the solution space by sampling, and the use of the spatial neighbourhood to propagate samples. We show how these components are related to the components of a specific form of Belief Propagation, called Particle Belief Propagation (PBP). PatchMatch, however, suffers from the lack of an explicit smoothness term. We show that unifying the two approaches yields a new algorithm, PMBP, which has improved performance compared to PatchMatch and is orders of magnitude faster than PBP. We apply our new optimiser to two different applications: stereo matching and optical flow. We validate the benefits of PMBP through a series of experiments and show that we consistently obtain lower errors than both PatchMatch and Belief Propagation.
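To make the two PatchMatch ingredients named above concrete (propagation from spatial neighbours and random search), here is a toy one-dimensional sketch, not the authors' PMBP code; candidate offsets are scored with a combined unary data term and a pairwise smoothness term so that the effect of an explicit smoothness cost is visible. The signals and weights are invented.

import numpy as np

def pm_energy(a, b, i, o, prev_o, lam=0.5):
    # unary data term plus pairwise smoothness towards the neighbour's offset
    j = i + o
    if j < 0 or j >= len(b):
        return np.inf
    unary = (a[i] - b[j]) ** 2
    pair = 0.0 if prev_o is None else lam * abs(o - prev_o)
    return unary + pair

def patchmatch_1d(a, b, max_offset=8, iters=4, seed=0):
    rng = np.random.default_rng(seed)
    offsets = rng.integers(-max_offset, max_offset + 1, size=len(a))  # random init
    for it in range(iters):
        order = range(len(a)) if it % 2 == 0 else range(len(a) - 1, -1, -1)
        step = 1 if it % 2 == 0 else -1
        for i in order:
            prev = offsets[i - step] if 0 <= i - step < len(a) else None
            candidates = [offsets[i]]
            if prev is not None:
                candidates.append(prev)                                # propagation
            candidates += list(rng.integers(-max_offset, max_offset + 1, size=3))  # random search
            scores = [pm_energy(a, b, i, int(o), prev) for o in candidates]
            offsets[i] = candidates[int(np.argmin(scores))]
    return offsets

a = np.random.default_rng(1).normal(size=80)
b = np.roll(a, 3)                 # ground-truth offset is +3 away from the borders
print(patchmatch_1d(a, b)[20:30])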
7

Sellent, Anita [Verfasser], und Marcus [Akademischer Betreuer] Magnor. „Dense Correspondence Field Estimation from Multiple Images / Anita Sellent ; Betreuer: Marcus Magnor“. Braunschweig : Technische Universität Braunschweig, 2011. http://d-nb.info/1175825697/34.

8

Barsai, Gabor. „DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION“. The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340.

9

Johnson, Amanda R. „A pose estimation algorithm based on points to regions correspondence using multiple viewpoints“. Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1798480891&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

10

Linz, Christian [Verfasser], und Marcus [Akademischer Betreuer] Magnor. „Correspondence Estimation and Image Interpolation for Photo-Realistic Rendering / Christian Linz ; Betreuer: Marcus Magnor“. Braunschweig : Technische Universität Braunschweig, 2011. http://d-nb.info/1175825557/34.

11

Cetinkaya, Guven. „A Comparative Study On Pose Estimation Algorithms Using Visual Data“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614109/index.pdf.

Annotation:
Computation of the position and orientation of an object with respect to a camera from its images is called the pose estimation problem. Pose estimation is one of the major problems in computer vision, robotics and photogrammetry. Object tracking, object recognition and self-localization of robots are typical examples of the use of pose estimation. Determining the pose of an object from its projections requires the 3D model of the object in its own reference system, the camera parameters and the 2D image of the object. Most pose estimation algorithms require the correspondences between the 3D model points of the object and the 2D image points. In this study, four well-known pose estimation algorithms requiring the 2D-3D correspondences to be known a priori, namely Orthogonal Iterations, POSIT, DLT and Efficient PnP, are compared. Moreover, two other well-known algorithms that solve the correspondence and pose problems simultaneously, SoftPOSIT and Blind-PnP, are also compared in the scope of this thesis. In the first step of the simulations, synthetic data is formed using a realistic motion scenario and the algorithms are compared using this data. In the next step, real images captured by a calibrated camera for an object with a known 3D model are exploited. The simulation results indicate that the POSIT algorithm performs best among the algorithms requiring point correspondences. Another result obtained from the experiments is that the SoftPOSIT algorithm can be considered to perform better than the Blind-PnP algorithm.
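For readers unfamiliar with the setting, the following sketch (not from the thesis) estimates a pose from known 2D-3D point correspondences with OpenCV's solvePnP, using the default iterative solver and EPnP; POSIT, Orthogonal Iterations, SoftPOSIT and Blind-PnP are not part of OpenCV and are not shown. The intrinsics and ground-truth pose are arbitrary.

import numpy as np
import cv2

# synthetic object: 8 corners of a box, camera with assumed intrinsics
obj = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
               dtype=np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
rvec_gt = np.array([[0.1], [0.2], [0.3]])
tvec_gt = np.array([[0.5], [-0.2], [4.0]])

# project the model with the ground-truth pose to obtain 2D image points
img_pts, _ = cv2.projectPoints(obj, rvec_gt, tvec_gt, K, None)

for flag, name in [(cv2.SOLVEPNP_ITERATIVE, "iterative"),
                   (cv2.SOLVEPNP_EPNP, "EPnP")]:
    ok, rvec, tvec = cv2.solvePnP(obj, img_pts, K, None, flags=flag)
    err = np.linalg.norm(tvec - tvec_gt)
    print(f"{name}: converged={ok}, translation error={err:.2e}")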
12

Olofsson, Anders. „Modern Stereo Correspondence Algorithms : Investigation and Evaluation“. Thesis, Linköping University, Information Coding, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853.

Annotation:

Many different approaches have been taken towards solving the stereo correspondence problem and great progress has been made within the field during the last decade. This is mainly thanks to newly evolved global optimization techniques and better ways to compute pixel dissimilarity between views. The most successful algorithms are based on approaches that explicitly model smoothness assumptions made about the physical world, with image segmentation and plane fitting being two frequently used techniques.

Within the project, a survey of state of the art stereo algorithms was conducted and the theory behind them is explained. Techniques found interesting were implemented for experimental trials and an algorithm aiming to achieve state of the art performance was implemented and evaluated. For several cases, state of the art performance was reached.

To keep down the computational complexity, an algorithm relying on local winner-take-all optimization, image segmentation and plane fitting was compared against minimizing a global energy function formulated on pixel level. Experiments show that the local approach in several cases can match the global approach, but that problems sometimes arise – especially when large areas that lack texture are present. Such problematic areas are better handled by the explicit modeling of smoothness in global energy minimization.

Lastly, disparity estimation for image sequences was explored and some ideas on how to use temporal information were implemented and tried. The ideas mainly relied on motion detection to determine parts that are static in a sequence of frames. Stereo correspondence for sequences is a rather new research field, and there is still a lot of work to be done.
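A small sketch of the segmentation-plus-plane-fitting idea discussed above (not from the thesis): the raw winner-take-all disparities of one image segment are replaced by a least-squares plane d = ax + by + c, which is how local methods regularise textureless regions. The segment data below are synthetic.

import numpy as np

def fit_disparity_plane(xs, ys, ds):
    # least-squares fit of d = a*x + b*y + c over one image segment
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
    return a, b, c

# toy segment: true plane d = 0.02x - 0.01y + 5 plus noise and a few outliers
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 200, 300), rng.uniform(0, 100, 300)
ds = 0.02 * xs - 0.01 * ys + 5 + rng.normal(0, 0.2, 300)
ds[:10] += 15                        # spurious winner-take-all matches
# (a practical system would fit robustly, e.g. with RANSAC, to reject such outliers)
a, b, c = fit_disparity_plane(xs, ys, ds)
smoothed = a * xs + b * ys + c       # plane-regularised disparities for the segment
print(round(a, 3), round(b, 3), round(c, 2))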

13

Brancamp, Tami Urbani. „Correspondence between nasalance scores and nasality ratings with equal appearing intervals and direct magnitude estimation scaling methods“. abstract and full text PDF (free order & download UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3307133.

14

Chen, Daniel Chien Yu. „Image segmentation and pose estimation of humans in video“. Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/66230/1/Daniel_Chen_Thesis.pdf.

Annotation:
This thesis introduces improved techniques for automatically estimating the pose of humans from video. It examines a complete workflow for estimating pose, from segmenting the raw video stream to extract silhouettes, to using the silhouettes to determine the relative orientation of parts of the human body. The proposed segmentation algorithms have improved performance and reduced complexity, while the pose estimation shows superior accuracy in difficult cases of self-occlusion.
15

Dellagi, Hatem. „Estimations paramétrique et non paramétrique des données manquantes : application à l'agro-climatologie“. Paris 6, 1994. http://www.theses.fr/1994PA066546.

Annotation:
In this work we propose two methods for estimating missing data. In the parametric case, and in order to solve the problem by prediction, we exploit the shifted estimator of the autoregressive part of a scalar ARMA model to estimate the covariance matrix, whose strong consistency is proved under conditions that have the advantage of being expressed in terms of the trajectories, and to identify the coefficients of the moving-average part and the variance of the white noise. In correspondence analysis, and in order to estimate the missing entries of a correspondence table, the problem is solved completely in the case of a single missing entry. Existence is proved when there are several missing entries; uniqueness, however, is delicate, and a linear combination of the missing entries is obtained from the trace formula, whose minimization ensures the homogeneity of the correspondence table. Under the same criterion we establish the reconstruction of an original value from piecewise linear coding.
16

Brachmann, Eric. „Learning to Predict Dense Correspondences for 6D Pose Estimation“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-236564.

Annotation:
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines. In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independent of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specificities of following steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC which allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
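The differentiable selection idea behind DSAC can be summarised in a few lines; the sketch below is illustrative only (the scores and errors are made up) and is not the author's implementation: hypothesis scores are turned into a softmax distribution, and the training loss is the expected pose error under that distribution, which, unlike the argmax used in plain RANSAC, has a well-defined gradient with respect to the scores.

import numpy as np

def dsac_expected_loss(scores, pose_errors, alpha=1.0):
    # scores: (H,) hypothesis scores (e.g. soft inlier counts)
    # pose_errors: (H,) loss of each hypothesis w.r.t. the ground-truth pose
    z = alpha * scores - np.max(alpha * scores)     # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    return float((probs * pose_errors).sum())       # differentiable w.r.t. scores

# RANSAC would take argmax(scores); DSAC averages the loss over the softmax instead
scores = np.array([12.0, 30.0, 28.0, 5.0])
pose_errors = np.array([0.9, 0.1, 0.15, 1.2])
print(dsac_expected_loss(scores, pose_errors))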
17

Kumar, K. (Kushal). „Pose estimation using two line correspondences and gravity vector for image rectification“. Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201609142782.

Annotation:
Pose estimation is a well-studied problem in computer vision. Many solutions that provide high accuracy depend on nonlinear optimization. For real-time applications, linear or closed-form solutions are preferred. Some relatively new methods also fuse inertial sensor data with data from the visual sensor to achieve higher accuracy. We propose a closed-form solution to estimate camera pose using two lines and gravity information. The system is developed so that it can work in unprepared environments that satisfy the Manhattan world assumption. We first test the proposed method on a synthetic data set and compare it to other state-of-the-art point- and line-based pose estimation methods, comparing their mean rotation and mean translation errors. The effect of IMU noise on the overall performance of the system is also tested. We then proceed to test our algorithm in the real world by rectifying perspective-deformed images. The deviation of the calculated pose from the ground-truth pose is calculated for each image to test the real-world performance of the proposed algorithm. Also, the IMU noise is calculated, which corresponds to the 0.5% noise level expected in low-cost IMUs.
18

Brachmann, Eric [Verfasser], Stefan [Akademischer Betreuer] Gumhold, Carsten [Akademischer Betreuer] Rother, Stefan [Gutachter] Gumhold und Jiri [Gutachter] Matas. „Learning to Predict Dense Correspondences for 6D Pose Estimation / Eric Brachmann ; Gutachter: Stefan Gumhold, Jiri Matas ; Stefan Gumhold, Carsten Rother“. Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://d-nb.info/1160875588/34.

19

Rosengren, Scott Clark. „On the formulation of inertial attitude estimation using point correspondences and differential carrier phase GPS positioning an application in structure from motion /“. [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0004565.

20

彩香, 小野原, und Ayaka Onohara. „数理的アプローチからの言語変化と外言語的要素との関わりに関する研究“. Thesis, https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB12854565/?lang=0, 2014. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB12854565/?lang=0.

Annotation:
The causes of language change have been discussed extensively from the standpoints of both comparative linguistics and linguistic geography, but methods that capture the relationship between the causes of change and its outcomes in a multivariate way and derive insights from it have not been sufficiently discussed. Building on previous discussions of language change, this study therefore carries out multivariate analyses of concrete cases using mathematical methods such as phylogenetic estimation, multiple regression analysis and correspondence analysis, and clarifies the causes of language change and the characteristics of change by cause.
Doctor of Culture and Information Science
Doshisha University
21

Přibyl, Bronislav. „Odhad pózy kamery z přímek pomocí přímé lineární transformace“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-412595.

Annotation:
This dissertation addresses camera pose estimation from correspondences between 3D and 2D lines, i.e. the Perspective-n-Line (PnL) problem. Attention is focused on cases with a large number of lines, which can be solved efficiently by methods using a linear formulation of PnL. Until now, only methods working with correspondences between 3D points and 2D lines were known. Based on this observation, two new methods based on the Direct Linear Transformation (DLT) algorithm were proposed: the DLT-Plücker-Lines method, working with 3D-2D line correspondences, and the DLT-Combined-Lines method, working with both 3D point to 2D line and 3D line to 2D line correspondences. In the latter case, the redundant 3D information is used to reduce the minimal number of required line correspondences to 5 and to improve the accuracy of the method. The proposed methods were thoroughly tested under various conditions, including simulated and real data, and compared with the best existing PnL methods. The DLT-Combined-Lines method achieves results better than or comparable to the best existing methods and is at the same time considerably fast. This dissertation also introduces a unified framework for describing camera pose estimation methods based on the DLT algorithm. Both proposed methods are defined within this framework.
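The algebraic core shared by DLT-style methods such as the two proposed here is solving an overdetermined homogeneous system M p = 0 for the stacked projection parameters. A minimal sketch of that step (not the thesis code; the matrix is synthetic) using the SVD:

import numpy as np

def dlt_nullspace_solve(M):
    # least-squares solution of M p = 0 with ||p|| = 1:
    # the right singular vector belonging to the smallest singular value
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

# toy check: build M with a known approximate null vector and recover it
rng = np.random.default_rng(0)
p_true = rng.normal(size=12)
p_true /= np.linalg.norm(p_true)
B = rng.normal(size=(40, 12))
M = B - np.outer(B @ p_true, p_true)        # rows orthogonal to p_true
M += 1e-6 * rng.normal(size=M.shape)        # small measurement noise
p = dlt_nullspace_solve(M)
print(abs(abs(p @ p_true) - 1.0) < 1e-6)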
22

Shih, Minh-Chou, und 石明周. „Image Retrieval with Relevance Feedback Based on Region Correspondence Estimation“. Thesis, 2003. http://ndltd.ncl.edu.tw/handle/86661488763618009395.

Annotation:
Master's thesis
National Tsing Hua University
Department of Computer Science
91 (ROC calendar year)
This thesis presents a region-based image retrieval approach with relevance feedback that takes the region correspondence into consideration. Region representation in image retrieval has been a popular issue in recent works because global features are insufficient to describe the local variations within images. Region correspondence estimation is one of the critical problems in region-based image similarity comparison. Intuitively, we can estimate the region correspondence by minimizing the matching error of matched regions. Since human perception matches images depending not only on their local attributes but also on their interrelationships, we must take the relationships of region connectivity into consideration when estimating the region correspondence. To estimate the region correspondence by considering the region attributes and the relationships of region connectivity, we represent images by graphs in which the nodes represent the regions and the edges represent the relationships of region connectivity. We then solve the region correspondence estimation problem via a graph matching technique. However, the graph matching technique, which matches non-attributed graphs, does not completely solve our region correspondence problem. Thus, we modify the graph matching algorithm to satisfy our requirements. To further improve the retrieval results, we formulate the relevance feedback process as a maximum likelihood framework. We show that the maximization of the likelihood function has a closed-form solution. The ideal query image, the region weights, and the feature weights are updated by the feedback images and their region correspondences. A series of experiments shows that the proposed approach achieves good performance for various natural images with complex contents.
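A much simplified sketch of the region-correspondence step (attribute matching only; the thesis additionally enforces region-connectivity constraints via graph matching, which a plain assignment cannot capture), assuming SciPy is available; the region features below are invented:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# toy region features (e.g. mean colour + size) for a query and a database image
query_regions = np.array([[0.8, 0.1, 0.1, 300], [0.1, 0.7, 0.2, 120], [0.2, 0.2, 0.9, 60]])
db_regions = np.array([[0.15, 0.65, 0.25, 110], [0.75, 0.12, 0.1, 280],
                       [0.25, 0.18, 0.85, 70], [0.5, 0.5, 0.5, 500]])

cost = cdist(query_regions, db_regions)          # pairwise feature distances
rows, cols = linear_sum_assignment(cost)         # minimum-cost region correspondence
print(list(zip(rows, cols)), "matching error:", cost[rows, cols].sum())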
23

Chiu, Raymond. „Simultaneous Pose and Correspondence Problem for Visual Servoing“. Thesis, 2010. http://hdl.handle.net/10012/5286.

Annotation:
Pose estimation is a common problem in computer vision. The pose is the combination of the position and orientation of a particular object relative to some reference coordinate system. The pose estimation problem involves determining the pose of an object from one or multiple images of the object. This problem often arises in the area of robotics: it is necessary to determine the pose of an object before it can be manipulated by the robot. In particular, this research focuses on pose estimation for the initialization of position-based visual servoing. A closely related problem is the correspondence problem. This is the problem of finding a set of features from the image of an object that can be identified as the same features in a model of the object. Solving for pose without known correspondence is also referred to as the simultaneous pose and correspondence problem, and it is a lot more difficult than solving for pose with known correspondence. This thesis explores a number of methods to solve the simultaneous pose and correspondence problem, with a focus on a method called SoftPOSIT. It uses the idea that the pose is easily determined if the correspondence is known. It first produces an initial guess of the pose and uses it to determine a correspondence. With this correspondence, it determines a new pose. The new pose is assumed to be a better estimate, so a better correspondence can be determined. The process is repeated until the algorithm converges to a correspondence and pose estimate. If this pose estimate is not good enough, the algorithm is restarted with a new initial guess. An improvement is made to this algorithm: an early termination condition is added to detect conditions where the algorithm is unlikely to converge towards a good pose. This leads to a reduction in the runtime by as much as 50% and an improvement in the success rate of the algorithm by approximately 5%. The proposed solution is tested and compared with the RANSAC method and simulated annealing in a simulation environment. It is shown that the proposed solution has the potential for use in commercial environments for pose estimation.
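A deliberately naive sketch of the alternation SoftPOSIT is built on, using hard nearest-neighbour assignment and OpenCV's solvePnP instead of SoftPOSIT's soft assignments and POSIT update; like SoftPOSIT, it needs a reasonable initial pose guess and would be restarted from new guesses on failure. The function names and defaults are illustrative.

import numpy as np
import cv2

def alternate_pose_and_correspondence(obj_pts, img_pts, K, rvec0, tvec0, iters=10):
    # obj_pts: (N, 3) model points, img_pts: (M, 2) detected image points,
    # K: 3x3 camera matrix, (rvec0, tvec0): rough initial pose guess,
    # e.g. rvec0 = np.zeros((3, 1)), tvec0 = np.array([[0.], [0.], [5.]])
    rvec, tvec = rvec0.copy(), tvec0.copy()
    for _ in range(iters):
        proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
        proj = proj.reshape(-1, 2)
        # correspondence step: each image point grabs the nearest projected model point
        d = np.linalg.norm(img_pts[:, None, :] - proj[None, :, :], axis=2)
        match = d.argmin(axis=1)
        # pose step: re-estimate the pose from the current (possibly wrong) matches
        ok, rvec, tvec = cv2.solvePnP(obj_pts[match], img_pts, K, None,
                                      rvec, tvec, useExtrinsicGuess=True)
        if not ok:
            break
    return rvec, tvec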
24

CHEN, CHIUNG-HUI, und 陳瓊慧. „Correspondence between Correlation Coefficient Estimation and Chinese and English Probability Vocabulary“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/m6f6qt.

Annotation:
Master's thesis
National Taichung University of Education
In-service Master's Program, Graduate Institute of Educational Information and Measurement Statistics
107 (ROC calendar year)
According to the research of Hu, Peng, Shen, and Yang (1989), the correlation coefficient estimate that a Chinese probability word is taken to represent varies with the perception of the subject. In addition, professional fields require more precise expression, because an error can have a great impact. For example, in financial research on auditing and accounting personnel (Du & Huang, 2012), English is the main language in the study of international accounting standards, so auditors need to be cautious when using Chinese probability vocabulary. In medicine, doctors suggest that surgery or medication may be needed; in a military example, Beyth-Marom (1982) discusses statements such as that a country may invade Poland. Since such judgments are consequential for both sides, the use of probability vocabulary must be more precise. Therefore, this study conducted a paired test of Chinese probability vocabulary and statistical probability values for college students from different backgrounds, to further understand how students of different backgrounds use Chinese and English probability vocabulary. The study is based on the content of the paper by Hu et al. (1989). After extension, a paired test of Chinese probability vocabulary and correlation coefficient plots was prepared, and a time-limited, 20-second group testing method was adopted. The number of students surveyed at a university in central Taiwan was 79, including local and foreign students from departments such as the School of Management, the Faculty of Science, and the Faculty of Arts. The study collects and analyzes the correspondence between the subjects' interpretations of Pearson correlation coefficient plots, their correlation coefficient estimates, Chinese probability vocabulary, English probability vocabulary and correlation coefficient ranges. Using the Concatenate and Countif functions in Excel, the filled-in text is counted, and a contingency table, also known as a cross-tab, is generated. Keywords: Chinese probability vocabulary, English probability vocabulary, correlation coefficient, contingency table.
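The cross-tabulation step described above (done with Concatenate and Countif in Excel in the study) is a one-liner in pandas; the responses below are invented for illustration:

import pandas as pd

# hypothetical responses: probability word chosen vs. correlation-coefficient range picked
df = pd.DataFrame({
    "probability_word": ["very likely", "likely", "possible", "likely", "very likely", "unlikely"],
    "r_range": ["0.8-1.0", "0.6-0.8", "0.4-0.6", "0.8-1.0", "0.8-1.0", "0.0-0.2"],
})
table = pd.crosstab(df["probability_word"], df["r_range"])   # contingency table
print(table)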
25

Dengler, Joachim. „Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion“. 1990. http://hdl.handle.net/1721.1/5993.

Annotation:
A new noniterative approach to determine displacement vector fields with discontinuities is described. In order to overcome the limitations of current methods, the problem is regarded as a general modelling problem. Starting from a family of regularized estimates, the compatibility between different levels of regularization is determined by measuring the difference in description length. This gives local but noisy evidence of possible model boundaries at multiple scales. With the two constraints of continuous lines of discontinuities and the spatial coincidence assumption, consistent boundary evidence is found. Based on this combined evidence the model is updated, now describing homogeneous regions with sharp discontinuities.
26

Καπρινιώτης, Αχιλλέας. „Εκτίμηση βάθους σκηνής από κάμερα τοποθετημένη σε αυτοκίνητο που κινείται“. Thesis, 2014. http://hdl.handle.net/10889/7779.

Annotation:
This master's thesis analyzes the depth estimation of a rigid scene from a camera attached to a moving vehicle. The first chapter gives an introduction to the field of Computer Vision and provides some examples of its applications. The second chapter describes basic principles of projective geometry that are used as mathematical background for the following chapters. The third chapter covers the theoretical modeling of a camera, along with its parameters and the distortions that appear in this model. The fourth chapter deals with the camera calibration procedure, along with its implementation. Chapter five presents general categories of stereoscopic algorithms, along with their similarity measures. Chapter six discusses the Harris corner detector and its use both for detecting corners and in the matching process. Chapter seven analyzes the theory of the SIFT algorithm and gives an example of detecting and matching features. Chapter eight highlights basic principles of epipolar geometry and stresses the importance of image rectification. Chapter nine presents the procedure that has been followed, along with the description and implementation of the depth estimation methods that have been used.
27

Chiang, Yi, und 江懿. „A Prioritized Gauss-Seidel Method for Dense Correspondence Estimation and Motion Segmentation in Crowded Urban Areas with a Moving Depth Camera“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/40104611508341776709.

Annotation:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
102 (ROC calendar year)
Dense RGB-D video motion segmentation is an important preprocessing module in computer vision, image processing and robotics. A motion segmentation algorithm based on an optimization framework which utilizes depth information only is presented in this thesis. The proposed optimization framework segments and estimates the rigid motion parameters of each locally rigid moving object with coherent motion. The proposed method also calculates dense point correspondences while performing segmentation. An efficient numerical algorithm based on the Constrained Block Nonlinear Gauss-Seidel (CNLGS) algorithm [1] and Prioritized Step Search [2] is proposed to solve the optimization problem. It classifies variables, including point correspondences, into groups and determines the ordering of the variables to optimize. We prove that the proposed numerical algorithm converges to a theoretical bound. The proposed algorithm works well with a moving camera in highly dynamic urban scenes with non-rigid moving objects.
28

Tran, Quoc Huy. „Robust parameter estimation in computer vision: geometric fitting and deformable registration“. Thesis, 2014. http://hdl.handle.net/2440/86270.

Annotation:
Parameter estimation plays an important role in computer vision. Many computer vision problems can be reduced to estimating the parameters of a mathematical model of interest from the observed data. Parameter estimation in computer vision is challenging, since vision data unavoidably have small-scale measurement noise and large-scale measurement errors (outliers) due to imperfect data acquisition and preprocessing. Traditional parameter estimation methods developed in the statistics literature mainly deal with noise and are very sensitive to outliers. Robust parameter estimation techniques are thus crucial for effectively removing outliers and accurately estimating the model parameters with vision data. The research conducted in this thesis focuses on single structure parameter estimation and makes a direct contribution to two specific branches under that topic: geometric fitting and deformable registration. In geometric fitting problems, a geometric model is used to represent the information of interest, such as a homography matrix in image stitching, or a fundamental matrix in three-dimensional reconstruction. Many robust techniques for geometric fitting involve sampling and testing a number of model hypotheses, where each hypothesis consists of a minimal subset of data for yielding a model estimate. It is commonly known that, due to the noise added to the true data (inliers), drawing a single all-inlier minimal subset is not sufficient to guarantee a good model estimate that fits the data well; the inliers therein should also have a large spatial extent. This thesis investigates a theoretical reasoning behind this long-standing principle, and shows a clear correlation between the span of data points used for estimation and the quality of model estimate. Based on this finding, the thesis explains why naive distance-based sampling fails as a strategy to maximise the span of all-inlier minimal subsets produced, and develops a novel sampling algorithm which, unlike previous approaches, consciously targets all-inlier minimal subsets with large span for robust geometric fitting. The second major contribution of this thesis relates to another computer vision problem which also requires the knowledge of robust parameter estimation: deformable registration. The goal of deformable registration is to align regions in two or more images corresponding to a common object that can deform nonrigidly such as a bending piece of paper or a waving flag. The information of interest is the nonlinear transformation that maps points from one image to another, and is represented by a deformable model, for example, a thin plate spline warp. Most of the previous approaches to outlier rejection in deformable registration rely on optimising fully deformable models in the presence of outliers due to the assumption of the highly nonlinear correspondence manifold which contains the inliers. This thesis makes an interesting observation that, for many realistic physical deformations, the scale of errors of the outliers usually dwarfs the nonlinear effects of the correspondence manifold on which the inliers lie. The finding suggests that standard robust techniques for geometric fitting are applicable to model the approximately linear correspondence manifold for outlier rejection. Moreover, the thesis develops two novel outlier rejection methods for deformable registration, which are based entirely on fitting simple linear models and shown to be considerably faster but at least as accurate as previous approaches.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
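To make the minimal-subset terminology concrete, here is a generic RANSAC loop for robust line fitting, a standard textbook formulation rather than the thesis's span-aware sampler; note that it samples minimal subsets uniformly and therefore ignores the spatial-extent issue the thesis analyses. The data and thresholds are synthetic.

import numpy as np

def ransac_line(points, iters=500, thresh=0.05, seed=0):
    # robustly fit y = a*x + b: repeatedly fit a minimal subset (2 points),
    # count inliers within `thresh`, and keep the hypothesis with the most
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)   # minimal subset
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    # final least-squares refit on the inliers of the best hypothesis
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, y, rcond=None)[0], best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = 2.0 * x + 0.5 + rng.normal(0, 0.01, 200)
y[:60] = rng.uniform(0, 3, 60)                  # 30% gross outliers
model, inliers = ransac_line(np.column_stack([x, y]))
print(model, inliers.sum())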
29

Brachmann, Eric. „Learning to Predict Dense Correspondences for 6D Pose Estimation“. Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A31057.

Annotation:
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines. In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independent of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specificities of following steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC which allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
30

Chiou, Wan-Chien, und 邱莞茜. „Color Transfer Based on Intrinsic Component and Graph-Theoretic Region Correspondence Estimatin“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/36702937070082902680.

31

Ευαγγελίδης, Γεώργιος. „Ανάπτυξη αποδοτικών παραμετρικών τεχνικών αντιστοίχισης εικόνων με εφαρμογή στην υπολογιστική όραση“. Thesis, 2008. http://nemertes.lis.upatras.gr/jspui/handle/10889/1229.

Annotation:
Computer Vision has recently been one of the most active research areas in the computing community. Many modern computer vision applications require the solution of the well known image registration problem, which consists in finding correspondences between projections of the same scene. The majority of registration algorithms adopt a specific parametric transformation model which, applied to one image, provides an approximation of the other one. Towards the solution of the Stereo Correspondence problem, where the goal is the construction of the disparity map, a local differential algorithm is proposed which involves a new similarity criterion, the Enhanced Correlation Coefficient (ECC). This criterion is invariant to linear photometric distortions and results from the incorporation of a single-parameter model into the classical correlation coefficient, thus defining a continuous objective function. Although the objective function is non-linear in the translation parameter, its maximization results in a closed-form solution, saving much computational burden. The proposed algorithm provides accurate results even under non-linear photometric distortions, its performance is superior to well known conventional stereo correspondence techniques, and it does not seem to suffer from the pixel locking effect, outperforming even stereo techniques dedicated to the cancellation of this effect. For the image alignment problem, the maximization of a generalized version of the ECC function that incorporates any 2D warp transformation is proposed. Although this function is a highly non-linear function of the warp parameters, an efficient iterative scheme for its maximization is developed. In each iteration of the new scheme, an efficient approximation of the nonlinear objective function is used, leading to a closed-form solution of low computational complexity. Two different iterative schemes are proposed: the Forwards Additive ECC (FA-ECC) and the Inverse Compositional ECC (IC-ECC) algorithm. The proposed iterative schemes are compared, through a series of experiments, with the corresponding schemes (FA-LK and SIC) of the leading Lucas-Kanade algorithm, which is a point of reference in the relevant literature. The FA-ECC algorithm makes use of the known additive parameter update rule and its computational cost is similar to that of the most widely used FA-LK algorithm. The proposed iterative scheme exhibits increased learning ability, since it converges faster and with noticeably higher probability. This superiority is retained even in the presence of additive noise and photometric distortion, as well as in cases of over-modelling the geometric distortion of the images. On the other hand, the IC-ECC algorithm makes use of inverse logic by swapping the roles of the images, and adopts the transformation composition update rule. As a consequence of these two choices, the complexity per iteration is drastically reduced and the resulting algorithm constitutes the most computationally efficient scheme among the four algorithms mentioned above. However, empirical learning curves and probability-of-convergence scores indicate that it performs similarly to SIC. Though FA-ECC seems to be clearly the most robust of the above-mentioned alignment algorithms under realistic conditions, the choice between the two proposed schemes necessitates a trade-off between accuracy and speed.
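An implementation of the ECC alignment scheme described above is available in OpenCV as findTransformECC. A small usage sketch (assuming OpenCV is installed; depending on the build, the optional inputMask and gaussFiltSize arguments may have to be passed explicitly) that recovers a synthetic translation under the MOTION_TRANSLATION model:

import numpy as np
import cv2

# smooth synthetic texture, then a copy shifted by (5, 3) pixels
rng = np.random.default_rng(0)
img = cv2.GaussianBlur(rng.random((200, 300)).astype(np.float32), (0, 0), 5)
M = np.array([[1, 0, 5], [0, 1, 3]], dtype=np.float32)
shifted = cv2.warpAffine(img, M, (300, 200))

warp = np.eye(2, 3, dtype=np.float32)                       # initial guess: identity
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-7)
cc, warp = cv2.findTransformECC(img, shifted, warp, cv2.MOTION_TRANSLATION, criteria)
print(cc, warp[:, 2])   # enhanced correlation coefficient and recovered shift, roughly (5, 3)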
32

Taati, BABAK. „Generation and Optimization of Local Shape Descriptors for Point Matching in 3-D Surfaces“. Thesis, 2009. http://hdl.handle.net/1974/5107.

Annotation:
We formulate Local Shape Descriptor selection for model-based object recognition in range data as an optimization problem and offer a platform that facilitates a solution. The goal of object recognition is to identify and localize objects of interest in an image. Recognition is often performed in three phases: point matching, where correspondences are established between points on the 3-D surfaces of the models and the range image; hypothesis generation, where rough alignments are found between the image and the visible models; and pose refinement, where the accuracy of the initial alignments is improved. The overall efficiency and reliability of a recognition system is highly influenced by the effectiveness of the point matching phase. Local Shape Descriptors are used for establishing point correspondences by way of encapsulating local shape, such that similarity between two descriptors indicates geometric similarity between their respective neighbourhoods. We present a generalized platform for constructing local shape descriptors that subsumes a large class of existing methods and allows for tuning descriptors to the geometry of specific models and to sensor characteristics. Our descriptors, termed Variable-Dimensional Local Shape Descriptors, are constructed as multivariate observations of several local properties and are represented as histograms. The optimal set of properties, which maximizes the performance of a recognition system, depends on the geometry of the objects of interest and the noise characteristics of range image acquisition devices, and is selected by pre-processing the models and sample training images. Experimental analysis confirms the superiority of optimized descriptors over generic ones in recognition tasks in LIDAR and dense stereo range images.
Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2009-09-01 11:07:32.084