Dissertations / Theses on the topic 'Large baseline image registration'

Consult the top 15 dissertations / theses for your research on the topic 'Large baseline image registration.'

1

Elassam, Abdelkarim. "Learning-based vanishing point detection and its application to large-baseline image registration." Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0084.

Full text
Abstract:
This thesis examines the detection of vanishing points and the horizon line and their application to visual localization tasks in urban environments. Visual localization is a fundamental problem in computer vision that aims to determine the position and orientation of a camera in an environment based solely on visual information. In urban and manufactured environments, vanishing points are important visual landmarks that provide crucial information about the scene's structure, making their detection important for reconstruction and localization tasks. The thesis proposes new deep learning methods to overcome the limitations of existing approaches to vanishing point detection. The first key contribution introduces a novel approach for horizon line (HL) and vanishing point (VP) detection. Unlike most existing methods, this method directly infers both the HL and an unlimited number of horizontal VPs, even those extending beyond the image frame. The second key contribution of this thesis is a structure-enhanced VP detector. This method utilizes a multi-task learning framework to estimate multiple horizontal VPs from a single image. It goes beyond simple VP detection by generating masks that identify vertical planar structures corresponding to each VP, providing valuable scene layout information. Unlike existing methods, this approach leverages contextual information and scene structures for accurate estimation without relying on detected lines. Experimental results demonstrate that this method outperforms traditional line-based methods and modern deep learning-based methods. The thesis then explores the use of vanishing points for image matching and registration, particularly in cases where images are captured from vastly different viewpoints. Despite continuous progress in feature extractors and descriptors, these methods often fail in the presence of significant scale or viewpoint variations. The proposed methods address this challenge by incorporating vanishing points and scene structures.
One major challenge in using vanishing points for registration is establishing reliable correspondences, especially in wide-baseline scenarios. This work addresses this challenge by proposing a vanishing point detection method aided by the detection of masks of vertical scene structures corresponding to these vanishing points. To our knowledge, this is the first implementation of a method for vanishing point matching that exploits image content rather than just detected segments. This vanishing point correspondence facilitates the estimation of the camera's relative rotation, particularly in wide-baseline scenarios. Additionally, incorporating information from scene structures enables more reliable keypoint correspondence within these structures. Consequently, the method facilitates the estimation of relative translation, which is itself constrained by the rotation derived from the vanishing points. The quality of the rotation estimate can, however, sometimes be affected by the imprecision of the detected vanishing points. We therefore propose a vanishing point-guided image matching method that is much less sensitive to the accuracy of vanishing point detection.
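The rotation step described in the abstract can be illustrated with a small sketch. This is my own reconstruction under simplifying assumptions (known intrinsics matrix `K`, and matched VPs given with consistent direction signs), not the thesis's actual algorithm: matched vanishing points are back-projected to 3D directions and the relative rotation is recovered with an SVD-based orthogonal Procrustes fit.

```python
import numpy as np

def vp_to_direction(vp, K):
    """Back-project a vanishing point (pixel coordinates) to a unit 3D direction."""
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    return d / np.linalg.norm(d)

def rotation_from_vps(vps1, vps2, K):
    """Estimate the relative camera rotation from matched vanishing points.

    Each VP corresponds to a 3D line direction; the rotation aligning the two
    direction sets is the SVD/orthogonal-Procrustes solution. Real VP matches
    have a sign ambiguity (v and -v are the same point at infinity), which is
    ignored here for simplicity.
    """
    D1 = np.stack([vp_to_direction(v, K) for v in vps1])  # N x 3
    D2 = np.stack([vp_to_direction(v, K) for v in vps2])
    H = D1.T @ D2
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    # Rotation mapping camera-1 directions onto camera-2 directions.
    return Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
```

With two or more non-parallel matched VPs the rotation is fully determined, which is why VP matching helps precisely in the wide-baseline cases where keypoint matching breaks down.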
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Shahri, Mohammed. "Line Matching in a Wide-Baseline Stereoview." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1376951775.

Full text
3

Lakemond, Ruan. "Multiple camera management using wide baseline matching." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37668/1/Ruan_Lakemond_Thesis.pdf.

Full text
Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second area involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over the existing methods. It allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
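The Hessian-based shape idea can be sketched as a toy example (my own minimal illustration using plain central differences, not the thesis's implementation): the 2x2 matrix of second intensity derivatives is formed at a feature point, and its eigen-decomposition gives the orientation and anisotropy of a blob-like structure.

```python
import numpy as np

def hessian_shape(img, y, x):
    """Estimate local affine shape from the 2x2 Hessian of image intensity,
    a cheaper alternative to the second-moment matrix for blob-like features.

    img is a 2D float array and (y, x) an interior pixel; derivatives are
    plain central differences (a real detector would smooth at scale first).
    """
    Ixx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    Iyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    Ixy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    H = np.array([[Ixx, Ixy], [Ixy, Iyy]])
    # Eigenvalue ratio gives the anisotropy (ellipse axes) of the blob;
    # eigenvectors give its orientation.
    evals, evecs = np.linalg.eigh(H)
    return H, evals, evecs
```

For a bright anisotropic Gaussian blob both eigenvalues are negative, and the ratio of their magnitudes reflects the blob's elongation.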
4

Shao, Wei. "Identifying the shape collapse problem in large deformation image registration." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2276.

Full text
Abstract:
This thesis examines and identifies the problem of shape collapse in large deformation image registration. Shape collapse occurs in image registration when a region in the moving image is transformed into a set of near-zero volume in the target image space. Shape collapse may occur when the moving image has a structure that is either missing or does not sufficiently overlap the corresponding structure in the target image. Shape collapse is a problem in image registration because it may lead to the following consequences: (1) incorrect pointwise correspondence between different coordinate systems; (2) incorrect automatic image segmentation; (3) loss of functional signal. These three disadvantages of registration with shape collapse are illustrated in detail using several examples with both real and phantom data. The shape collapse problem is common in image registration algorithms with large degrees of freedom, such as many diffeomorphic image registration algorithms. This thesis proposes a shape collapse measurement algorithm to detect the regions of shape collapse after image registration in pairwise and group-wise registrations. We further compute the shape collapse for a whole population of pairwise transformations, such as occurs when registering many images to a common atlas coordinate system. Experiments are presented using the SyN diffeomorphic image registration algorithm and the diffeomorphic demons algorithm. We show that shape collapse exists in both of these large deformation registration methods. We demonstrate how changing the input parameters to the SyN registration algorithm can mitigate the shape collapse registration artifacts.
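The abstract does not spell out the measurement algorithm; a common proxy for "a region mapped to near-zero volume", shown here as a hypothetical sketch of my own, is to flag points where the Jacobian determinant of the transformation falls near zero.

```python
import numpy as np

def jacobian_determinant_2d(phi):
    """Pointwise Jacobian determinant of a 2D coordinate-map transform.

    phi has shape (H, W, 2): phi[..., 0] is the x-coordinate map and
    phi[..., 1] the y-coordinate map. Determinants near zero indicate
    regions being squeezed toward zero volume (candidate shape collapse).
    """
    dpx_dx = np.gradient(phi[..., 0], axis=1)
    dpx_dy = np.gradient(phi[..., 0], axis=0)
    dpy_dx = np.gradient(phi[..., 1], axis=1)
    dpy_dy = np.gradient(phi[..., 1], axis=0)
    return dpx_dx * dpy_dy - dpx_dy * dpy_dx

def collapse_mask(phi, tol=0.05):
    """Binary mask of candidate shape-collapse regions (|J| below tol)."""
    return np.abs(jacobian_determinant_2d(phi)) < tol
```

For the identity map the determinant is 1 everywhere; a map that squeezes one axis by a factor of 100 is flagged over its whole support.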
5

Eiben, B. "Integration of biomechanical models into image registration in the presence of large deformations." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1476650/.

Full text
Abstract:
Prone-to-supine breast image registration has potential application in the fields of surgical and radiotherapy planning, and image-guided interventions. However, breast image registration of three-dimensional images acquired in different patient positions is a challenging problem, due to the large deformations induced in the soft breast tissue by the change in gravity loading. Biomechanical modelling is a promising tool to predict gravity-induced deformations; however, such simulations alone are unlikely to produce good alignment, due to inter-patient variability and image-acquisition-related influences on the breast shape. This thesis presents a symmetric, biomechanical-simulation-based registration framework which aligns images in a central, stress-free configuration. Soft tissue is modelled as a neo-Hookean material and external forces are considered as the main source of deformation in the original images. The framework successively applies image-derived forces directly in the unloading simulation in place of a subsequent image registration step. This results in a biomechanically constrained deformation. Using a finite difference scheme enables simulations to be performed directly in the image space. Motion-constrained boundary conditions have been incorporated which can capture tangential motion of membranes and fasciae. The accuracy of the approach is assessed by measuring the target registration error (TRE) using nine prone MRI and supine CT image pairs, one prone-supine CT image pair, and four prone-supine MRI image pairs. The registration reduced the combined mean TRE for all clinical data sets from initially 69.7 mm to 5.6 mm. Prone-supine image pairs might not always be available in the clinical breast cancer workflow, especially prior to surgery. Hence, an alternative surface-driven registration methodology was also developed that incorporates biomechanical simulations, material parameter optimisation, and constrained surface matching. For three prone MR images and corresponding supine CT-derived surfaces, a final mean TRE of 10.0 mm was measured.
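The TRE figures quoted above are mean distances between corresponding anatomical landmarks after registration. A minimal sketch of the metric (the helper name and call signature are my own, not from the thesis):

```python
import numpy as np

def target_registration_error(landmarks_moving, landmarks_target, transform):
    """Mean target registration error (TRE).

    landmarks_moving / landmarks_target: (N, 3) arrays of corresponding
    landmark coordinates (in mm if the inputs are in mm); transform maps a
    moving-image point into the target image space.
    """
    mapped = np.array([transform(p) for p in landmarks_moving])
    return float(np.mean(np.linalg.norm(mapped - landmarks_target, axis=1)))
```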
6

Briand, Thibaud. "Image Formation from a Large Sequence of RAW Images : performance and accuracy." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1017/document.

Full text
Abstract:
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging problem requiring on-the-fly demosaicking, denoising and super-resolution. Existing algorithms produce high-quality images, but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially, so that the memory cost only depends on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common system of coordinates using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (that we introduce). We evaluate the performance and the accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution and the residual noise decreases as expected. We obtain results similar to those obtained by slower and memory-greedy methods. As generating synthetic data requires an interpolation method, we also study in detail the trigonometric polynomial and B-spline interpolation methods. From this study we derive new fine-tuned interpolation methods.
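The sequential accumulation idea (memory proportional to the output image only) can be sketched with a zeroth-order, Nadaraya-Watson style kernel regression accumulator. This is an illustrative toy, not the thesis's implementation; the class name and the Gaussian kernel choice are mine.

```python
import numpy as np

class SequentialKernelAccumulator:
    """Fuse irregularly sampled values onto a fixed output grid.

    Each incoming sample contributes a Gaussian-weighted vote to nearby
    pixels; only two output-sized buffers (weighted sum and weight total)
    are kept, so memory is independent of the number of input samples.
    """

    def __init__(self, height, width, sigma=0.7):
        self.num = np.zeros((height, width))  # sum of w * value
        self.den = np.zeros((height, width))  # sum of w
        self.sigma = sigma

    def add(self, x, y, value, radius=2):
        """Accumulate one sample at continuous position (x, y)."""
        x0, y0 = int(round(x)), int(round(y))
        for j in range(max(0, y0 - radius), min(self.num.shape[0], y0 + radius + 1)):
            for i in range(max(0, x0 - radius), min(self.num.shape[1], x0 + radius + 1)):
                w = np.exp(-((i - x) ** 2 + (j - y) ** 2) / (2 * self.sigma ** 2))
                self.num[j, i] += w * value
                self.den[j, i] += w

    def image(self):
        """Current kernel-regression estimate (0 where nothing accumulated)."""
        return self.num / np.maximum(self.den, 1e-12)
```

Because samples are folded in one at a time, an arbitrarily long burst of registered RAW frames can be fused without ever holding the sequence in memory.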
7

König, Lars. "Matrix-free approaches for deformable image registration with large-scale and real-time applications in medical imaging." Lübeck: Zentrale Hochschulbibliothek Lübeck, 2019. http://d-nb.info/1175137189/34.

Full text
8

Matos, Ana Carolina Fonseca. "Development of a large baseline stereo vision rig for pedestrian and other target detection on road." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17055.

Full text
Abstract:
Nowadays, autonomous vehicles are a growing trend, with the major players in the automotive sector, and beyond, focused on developing autonomous cars. The two main advantages of autonomous cars are greater convenience for the passengers and more safety for the passengers and the people around them, which is what this thesis focuses on. Sometimes, due to distraction or other reasons, the driver does not see an object on the road and crashes, or does not see a pedestrian in the crosswalk and the person is run over. This is one of the problems that an ADAS or an autonomous car tries to solve, and due to its relevance more and more research is being done in this area. One of the most widely applied systems for ADAS is digital cameras, which provide comprehensive information about the surrounding environment, in addition to LIDAR sensors and others. Following this trend, the use of stereo vision systems, i.e. systems with two cameras, is increasing, and in this context a question comes up: "what is the ideal distance between the cameras in a stereo system for object and pedestrian detection?". This thesis presents the full development of a stereo vision system: from the development of the software necessary for calculating the distance of objects and pedestrians using two cameras, to the design of a fixing system for the cameras that allows the quality of pedestrian detection to be studied for different baselines. In order to study the influence of the baseline and the focal distance of the lens, a pedestrian walking through previously marked positions, as well as a fixed object, were recorded in an exterior scenario. The results were analyzed by comparing the distance calculated automatically by the application with the real measured value. It was concluded that the distance of pedestrians and objects can be calculated with reasonable accuracy using the developed software and the camera fixing system. However, the best results were achieved for the 0.3 m baseline and the 8 mm lens.
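The baseline trade-off studied here follows from the rectified stereo depth relation Z = f * B / d: for a fixed disparity error, depth error shrinks as the baseline B grows, but matching between the two views becomes harder. A minimal sketch of the relation (a hypothetical helper of mine, not the thesis software):

```python
def stereo_depth(disparity_px, baseline_m, focal_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset between the two views;
    baseline_m: distance between the camera centres in metres;
    focal_px: focal length expressed in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800 px focal length, the 0.3 m baseline used in the experiments turns a 12 px disparity into a 20 m depth estimate.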
9

Chnafa, Christophe. "Using image-based large-eddy simulations to investigate the intracardiac flow and its turbulent nature." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20112/document.

Full text
Abstract:
The first objective of this thesis is to generate and analyse CFD-based databases for the intracardiac flow in realistic geometries. To this aim, an image-based CFD strategy is applied to both a pathological and a healthy human left heart. The second objective is to illustrate how the numerical database can be analysed in order to gain insight into the intracardiac flow, mainly focusing on its unsteady and turbulent features. A numerical framework allowing insight into the fluid dynamics inside patient-specific human hearts is first presented. The heart cavities and their wall dynamics are extracted from medical images, with the help of an image registration algorithm, in order to obtain a patient-specific moving numerical domain. Flow equations are written on a conformal moving computational domain, using an Arbitrary Lagrangian-Eulerian framework. Valves are modelled using immersed boundaries. The application of this framework to compute flow and turbulence statistics in both a realistic pathological and a realistic healthy human left heart is presented. The blood flow is characterized by its transitional nature, resulting in a complex cyclic flow. The flow dynamics are analysed in order to reveal the main fluid phenomena and to obtain insight into the physiological patterns commonly detected. It is demonstrated that the flow is neither laminar nor fully turbulent, thus justifying a posteriori the use of Large Eddy Simulation. The unsteady development of turbulence is analysed from the phase-averaged flow, flow statistics, the turbulent stresses, the turbulent kinetic energy, its production, and through spectral analysis. A Lagrangian analysis is also presented, using Lagrangian particles to gather statistical flow data. In addition to a number of classically reported features of the left heart flow, this work reveals how disturbed and transitional the flow is and describes the mechanisms of turbulence production.
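Phase averaging over cardiac cycles is the key statistical device above: turbulence statistics are built from departures of each cycle's velocity field from the phase-averaged field. A minimal sketch of the turbulent kinetic energy computation (my own illustration, assuming velocity snapshots sampled at one fixed cardiac phase):

```python
import numpy as np

def turbulent_kinetic_energy(velocity_cycles):
    """Phase-averaged turbulent kinetic energy per unit mass.

    velocity_cycles: array (n_cycles, n_points, 3) of velocity sampled at
    the same cardiac phase over many cycles. Fluctuations u' are departures
    from the phase average; k = 0.5 * <u_i' u_i'>, averaged over cycles.
    Returns an array of length n_points.
    """
    phase_mean = velocity_cycles.mean(axis=0, keepdims=True)
    fluct = velocity_cycles - phase_mean
    return 0.5 * (fluct ** 2).sum(axis=-1).mean(axis=0)
```

Because the flow is only cyclically (not statistically) stationary, averaging must be done phase-by-phase over many cycles, which is why the thesis stresses simulating a sufficient number of cardiac cycles.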
10

Lotz, Johannes. "Combined local and global image registration and its application to large-scale images in digital pathology." Advisors: Jan Modersitzki, Heinz Handels. Lübeck: Zentrale Hochschulbibliothek Lübeck, 2020. http://d-nb.info/1217024069/34.

Full text
11

Sakamoto, Ryo. "Detection of Time-Varying Structures by Large Deformation Diffeomorphic Metric Mapping to Aid Reading of High-Resolution CT Images of the Lung." Kyoto University, 2014. http://hdl.handle.net/2433/189353.

Full text
12

Sabino, Danilo Damasceno. "Development of a 3D multi-camera measurement system based on image stitching techniques applied for dynamic measurements of large structures." Ilha Solteira, 2018. http://hdl.handle.net/11449/157103.

Full text
Abstract:
Advisor: João Antonio Pereira
Abstract: The specific objective of this research is to extend the capabilities of three-dimensional (3D) point tracking (PT) to identify the dynamic characteristics of large and complex structures, such as utility-scale wind turbine blades. A multi-camera system (composed of multiple independently calibrated stereovision systems) is developed to obtain high spatial resolution of discrete points from displacement measurements over very large areas. A stitching technique is proposed and employed to perform the alignment of two point clouds, obtained with 3DPT measurement, of a structure under dynamic excitation. Point cloud registration is exploited as a technique for dynamic (displacement) measurement of large structures with high spatial resolution of the model. Three different registration algorithms are proposed to perform the junction of the point clouds of the stereo systems: Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Iterative Closest Point (ICP). Furthermore, operational modal analysis, in conjunction with the multi-camera measurement system and the registration techniques, is used to determine the feasibility of using optical measurements (e.g. 3DPT) to estimate the modal parameters of a utility-scale wind turbine blade, comparing the results with those of traditional techniques.
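Of the three registration options named (PCA, SVD, ICP), the SVD route is the most compact to illustrate. The sketch below is a generic Kabsch/Umeyama rigid fit under the assumption of known point correspondences; it is not the thesis's code.

```python
import numpy as np

def rigid_align_svd(P, Q):
    """Least-squares rigid transform (R, t) mapping point cloud P onto Q.

    P, Q: (N, 3) arrays of corresponding points (correspondences assumed
    known, as after matching the overlap of two stereo systems). Returns
    rotation R and translation t minimizing sum ||R p_i + t - q_i||^2.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In an ICP-style pipeline this closed-form step is simply iterated, re-estimating correspondences by nearest neighbour between iterations.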
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
13

MAGGIOLO, LUCA. "Deep Learning and Advanced Statistical Methods for Domain Adaptation and Classification of Remote Sensing Images." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070050.

Full text
Abstract:
In recent years, remote sensing has undergone a huge evolution. The constantly growing availability of remote sensing data has opened up new opportunities and laid the foundations for many new challenges. Continuous space missions and new satellite constellations allow increasingly frequent acquisitions, at ever higher spatial resolutions, and with almost total coverage of the globe. The availability of such a huge amount of data has highlighted the need for automatic techniques capable of processing it and exploiting all the available information. Meanwhile, the almost unlimited potential of machine learning has changed the world we live in. Artificial neural networks have broken through into everyday life, with applications that include computer vision, speech processing, and autonomous driving, and that also underpin commonly used tools such as online search engines. However, the vast majority of such models are supervised, and their applicability therefore relies on the availability of an enormous quantity of labeled data with which to train the models themselves. Unfortunately, this is not the case in remote sensing, where enormous amounts of data are matched by an almost total absence of ground truth. The purpose of this thesis is to find ways to exploit the most recent deep learning techniques, defining a common thread, often missing, between the two worlds of remote sensing and deep learning. In particular, this thesis proposes three novel contributions that address current issues in remote sensing. The first is related to multisensor image registration and combines generative adversarial networks with non-linear optimization of cross-correlation-like functionals to deal with the complexity of the setting. The proposed method proved able to outperform state-of-the-art approaches.
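The cross-correlation-like functionals mentioned in this first contribution can be illustrated with a zero-normalized cross-correlation (ZNCC) score. The toy sketch below recovers an integer translation by exhaustive search; it is purely illustrative (the thesis couples such functionals with GAN-based translation between sensors and continuous non-linear optimization, not a grid search), and the function names are our own.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized images;
    1.0 indicates a perfect match up to an affine intensity change."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def best_shift(fixed, moving, max_shift=5):
    """Exhaustive search for the integer translation maximizing ZNCC
    (a toy stand-in for the non-linear optimization in the thesis)."""
    best = (-2.0, (0, 0))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = zncc(fixed, shifted)
            if score > best[0]:
                best = (score, (dy, dx))
    return best

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
moved = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
score, (dy, dx) = best_shift(img, moved)
print((dy, dx))  # (-3, 2): rolling back by this offset undoes the shift
```

For multisensor pairs, the intensity statistics of the two images differ, which is precisely where the GAN component of the proposed method comes in: it maps one modality toward the other before the correlation functional is optimized.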
The second novel contribution faces one of the main issues in deep learning for remote sensing: the scarcity of ground truth data for semantic segmentation. The proposed solution combines convolutional neural networks and probabilistic graphical models, two very active areas in machine learning for remote sensing, and approximates a fully connected conditional random field. The proposed method is capable of closing part of the gap that separates a densely trained model from a weakly trained one. The third approach is aimed at the classification of high-resolution satellite images for climate change purposes. It consists of a specific formulation of an energy minimization that allows multisensor information to be fused, together with the application of a Markov random field in a fast and efficient way suitable for global-scale applications. The results obtained in this thesis show how deep learning methods based on artificial neural networks can be combined with statistical analysis to overcome their limitations, going beyond classic benchmark environments and addressing practical, real, large-scale application cases.
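The energy-minimization idea behind these last two contributions can be sketched with the simplest possible solver: iterated conditional modes (ICM) on a 4-connected Potts Markov random field, where a unary term encodes per-pixel class evidence and a pairwise term rewards label smoothness. This is a generic textbook sketch, not the thesis's formulation (which is fully connected in one case and multisensor in the other).

```python
import numpy as np

def icm_denoise(labels, unary, beta=1.0, iters=5):
    """Greedy ICM on a 4-connected Potts MRF: each pixel takes the label
    minimizing its unary cost plus beta times disagreeing neighbors."""
    H, W, L = unary.shape
    lab = labels.copy()
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        costs += beta * (np.arange(L) != lab[ny, nx])
                lab[y, x] = int(np.argmin(costs))
    return lab

# Noisy two-class map: the pairwise term smooths isolated errors away
rng = np.random.default_rng(2)
truth = np.zeros((16, 16), dtype=int)
truth[:, 8:] = 1
unary = np.stack([(truth == k) * -1.0 + rng.normal(0, 0.3, truth.shape)
                  for k in (0, 1)], axis=-1)
init = np.argmin(unary, axis=-1)          # per-pixel decision, no smoothing
out = icm_denoise(init, unary, beta=0.8)
print((out != truth).mean() <= (init != truth).mean())  # True
```

ICM is fast but only finds a local minimum; the fully connected CRF approximated in the thesis captures long-range pairwise interactions that a 4-connected grid cannot.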
APA, Harvard, Vancouver, ISO, and other styles
14

González, Obando Daniel Felipe. "From digital to computational pathology for biomarker discovery." Electronic Thesis or Diss., Université Paris Cité, 2019. http://www.theses.fr/2019UNIP5185.

Full text
Abstract:
Histopathology aims to analyze images of biological tissues to assess the pathological state of an organ and establish a diagnosis. The advent of high-resolution slide scanners has opened the way to new possibilities: acquiring very large images (whole slide imaging), multiplexing stainings, exhaustively extracting visual information, and producing multiple annotations at large scale. This thesis proposes a set of algorithmic methods aimed at facilitating and optimizing these different aspects. First, we propose a multi-scale registration method for multi-stained histological images, relying on the properties of B-splines to model a discrete image in a continuous way. We then propose new morphological analysis approaches on weakly simple polygons, generalized as straight-line graphs. They rely on the formalism of straight skeletons (an approximation of curved skeletons defined by straight segments), built with the help of motorcycle graphs. This structure makes it possible to perform mathematical morphology operations on polygons with reduced complexity. The precision of operations on noisy polygons is obtained by refining the construction of the straight skeletons through adaptive insertion of vertices. We also propose an algorithm for detecting the medial axis and show that the original shape can be reconstructed to an arbitrary approximation. Finally, we explore weighted straight skeletons, which allow directional morphological operations. These morphological analysis approaches offer consistent support for improving object segmentation using contextual information and for carrying out studies related to the spatial analysis of interactions between the different structures of interest within the tissue.
All the proposed algorithms are optimized for processing gigapixel images and guarantee the reproducibility of analyses, notably thanks to the creation of the Icytomine plugin, an interface between Icy and Cytomine.
Histopathology aims to analyze images of biological tissues to assess the pathological condition of an organ and to provide a diagnosis. The advent of high-resolution slide scanners has opened the door to new possibilities for acquiring very large images (whole slide imaging), multiplexing stainings, exhaustive extraction of visual information and large scale annotations. This thesis proposes a set of algorithmic methods aimed at facilitating and optimizing these different aspects. First, we propose a multi-scale registration method of multi-labeled histological images based on the properties of B-splines to model, in a continuous way, a discrete image. We then propose new approaches to perform morphological analysis on weakly simple polygons generalized by straight-line graphs. They are based on the formalism of straight skeletons (an approximation of curved skeletons defined by straight segments), built with the help of motorcycle graphs. This structure makes it possible to perform mathematical morphological operations on polygons. The precision of operations on noisy polygons is obtained by refining the construction of straight skeletons. We also propose an algorithm for computing the medial axis from straight skeletons, showing it is possible to approximate the original polygonal shape. Finally, we explore weighted straight skeletons that allow directional morphological operations. These morphological analysis approaches provide consistent support for improving the segmentation of objects through contextual information and performing studies related to the spatial analysis of interactions between different structures of interest within the tissue. All the proposed algorithms are optimized to handle gigapixel images while assuring analysis reproducibility, in particular thanks to the creation of the Icytomine plugin, an interface between Icy and Cytomine.
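The continuous B-spline image model underlying the registration step can be sketched with the cubic B-spline basis function, whose shifts sum to one everywhere (partition of unity). The code below is a generic illustration, not the thesis's implementation; for brevity it uses the smoothing variant of the expansion rather than a true interpolant, which would first prefilter the image into B-spline coefficients.

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline basis function, the building block of a
    continuous model of a discrete image."""
    t = np.abs(t)
    out = np.zeros_like(t, dtype=float)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2/3 - t[m1]**2 + t[m1]**3 / 2
    out[m2] = (2 - t[m2])**3 / 6
    return out

def continuous_image(img, ys, xs):
    """Evaluate a separable B-spline expansion of a 2-D image at
    real-valued coordinates, as needed when warping one section
    onto another during registration."""
    H, W = img.shape
    ky = cubic_bspline(ys[:, None] - np.arange(H)[None, :])  # (len(ys), H)
    kx = cubic_bspline(xs[:, None] - np.arange(W)[None, :])  # (len(xs), W)
    return ky @ img @ kx.T

# Because the shifted basis functions sum to one, constant images are
# reproduced exactly at any in-range real coordinate
img = np.full((8, 8), 3.0)
vals = continuous_image(img, np.array([2.3, 4.7]), np.array([3.1, 5.5]))
print(np.allclose(vals, 3.0))  # True
```

This continuity is what allows registration to be posed as a smooth optimization over sub-pixel displacements instead of a search over integer grid shifts.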
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Li-Chuan, and 王莉琄. "A Numerical Study On Large Deformation Diffeomorphic Metric Mapping With Application On Brain Image Registration." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/92472835935403425056.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Applied Mathematics
102 (ROC academic year, i.e. 2013)
Image registration in medical image analysis is to find a corresponding map via landmarks p and q, which are prescribed in the two given images, respectively. LDDMM is one of the most commonly used methods for non-rigid medical image registration. Computation of LDDMM is an optimization problem, and it is important to find a suitable initial path for the computation. The goal of this thesis is to find such a suitable initial path. In this thesis, the initial path for computing LDDMM is obtained from the thin-plate spline and the Möbius transformation, instead of the original initial path constructed by Marsland and Twining. We use the following steps to construct the initial path. First, we find p̂ by applying a Möbius transformation to p in order to perform the affine registration. Next, we use the thin-plate spline method to find a linear path from p̂ to q. Finally, a diffeomorphic map is constructed by LDDMM based on geodesic spline interpolation. The process of computing the initial path is also demonstrated. To examine the initial path, the deformation fields obtained by computing LDDMM with different initial paths are listed for comparison. At the end of the thesis, we apply LDDMM to brain image registration.
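The thin-plate spline step described above has a closed-form solution: given landmark pairs, one solves a small linear system for kernel weights plus an affine part, yielding a map that sends each source landmark exactly to its target. The sketch below is a generic 2-D TPS fit, illustrative only (the thesis uses it merely to seed LDDMM, and details such as the kernel scaling here are our own choices).

```python
import numpy as np

def tps_warp(p, q):
    """Fit a 2-D thin-plate spline sending landmarks p to q and return
    a function warping arbitrary points (a possible LDDMM initial)."""
    n = len(p)
    def U(r2):
        # TPS radial kernel r^2 log r^2 (0 at r = 0 by continuity)
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r2 > 0, r2 * np.log(r2), 0.0)
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), p])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = q
    W = np.linalg.solve(A, b)          # kernel weights + affine part
    def warp(x):
        r2 = ((x[:, None, :] - p[None, :, :]) ** 2).sum(-1)
        return U(r2) @ W[:n] + np.hstack([np.ones((len(x), 1)), x]) @ W[n:]
    return warp

# Landmarks are mapped exactly; other points are deformed smoothly
p = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
q = p + np.array([[0.1, 0.], [0., 0.1], [-0.1, 0.], [0., -0.1], [0.2, 0.2]])
print(np.allclose(tps_warp(p, q)(p), q))  # True
```

Unlike LDDMM, the TPS map is not guaranteed to be diffeomorphic for large deformations, which is why the thesis uses it only as a starting point for the subsequent geodesic optimization.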
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography