Theses on the topic "2D Images - 3D Models"


Below are the top 50 theses for research on the topic "2D Images - 3D Models".


1

Zhang, Yan. « Feature-based automatic registration of images with 2D and 3D models ». Thesis, University of Central Lancashire, 2006. http://clok.uclan.ac.uk/21603/.

Abstract:
Automatic image registration is the technique of aligning images expressed in different coordinate systems into a common coordinate system; it has found wide industrial application in control automation and quality inspection. Focusing on industrial applications where product models are available and transformations between models and images are global, this thesis presents research on two registration problems based on different features and different transformation models. The first image registration problem is a 2D/2D one with a 2D similarity transformation, based on geometric primitives selected from models and extracted from images. Feature-based methods using geometric primitives such as points, line segments and circles have been widely studied. This thesis proposes a number of novel registration methods based on elliptic features, which include a point matching algorithm based on a local search method for ellipse correspondence search and rough pose estimation, a numerical approach to refine the estimation result using the non-overlapping area ratio (NAR) of corresponding ellipses, and an elliptic arc matching algorithm based on the integral of squared distances (ISD) between points on corresponding arcs. The major advantage of the ISD is that its optimal solution can be obtained analytically, which makes it applicable to efficient elliptic arc correspondence search. The second image registration problem is a 3D/2D one with an orthographic projection transformation, based on silhouette features. A novel algorithm has been developed and presented in this thesis based on a 3D triangular-mesh model, which can be applied to approximate a de facto NURBS model, and images in which silhouette features can be extracted. The algorithm consists of a rough pose estimation process using shape comparison methods and a pose refinement process using a 3D/2D iterative closest point (ICP) method. Computer simulation results show that the algorithm performs 3D/2D registration effectively and efficiently.
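The 2D/2D problem above uses a 2D similarity transformation as its motion model. As a generic, hedged illustration (the standard closed-form Umeyama fit, not the thesis's ellipse-based matching), the following Python sketch estimates the scale, rotation and translation from point correspondences that are assumed to be already established; all names and the toy data are illustrative.

import numpy as np

def fit_similarity_2d(src, dst):
    # Least-squares 2D similarity transform (scale, rotation, translation)
    # mapping src onto dst; src, dst are (N, 2) arrays of already-matched points.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                            # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])    # guard against reflections
    R = U @ D @ Vt                                        # rotation matrix
    scale = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s                           # translation vector
    return scale, R, t

# Toy check: recover a known similarity transform from noisy matches.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(30, 2))
theta = np.deg2rad(15)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
obs = 1.3 * pts @ R_true.T + np.array([5.0, -2.0]) + rng.normal(0, 0.1, (30, 2))
print(fit_similarity_2d(pts, obs))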
2

Stebbing, Richard. « Model-based segmentation methods for analysis of 2D and 3D ultrasound images and sequences ». Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f0e855ca-5ed9-4e40-994c-9b470d5594bf.

Abstract:
This thesis describes extensions to 2D and 3D model-based segmentation algorithms for the analysis of ultrasound images and sequences. Starting from a common 2D+t "track-to-last" algorithm, it is shown that the typical method of searching for boundary candidates perpendicular to the model contour is unnecessary if, for each boundary candidate, its corresponding position on the model contour is optimised jointly with the model contour geometry. With this observation, two 2D+t segmentation algorithms, which accurately recover boundary displacements and are capable of segmenting arbitrarily long sequences, are formulated and validated. Generalising to 3D, subdivision surfaces are shown to be natural choices for continuous model surfaces, and the algorithms necessary for joint optimisation of the correspondences and model surface geometry are described. Three applications of 3D model-based segmentation for ultrasound image analysis are subsequently presented and assessed: skull segmentation for fetal brain image analysis; face segmentation for shape analysis, and single-frame left ventricle (LV) segmentation from echocardiography images for volume measurement. A framework to perform model-based segmentation of multiple 3D sequences - while jointly optimising an underlying linear basis shape model - is subsequently presented for the challenging application of right ventricle (RV) segmentation from 3D+t echocardiography sequences. Finally, an algorithm to automatically select boundary candidates independent of a model surface estimate is described and presented for the task of LV segmentation. Although motivated by challenges in ultrasound image analysis, the conceptual contributions of this thesis are general and applicable to model-based segmentation problems in many domains. Moreover, the components are modular, enabling straightforward construction of application-specific formulations for new clinical problems as they arise in the future.
3

López, Picazo Mirella. « 3D subject-specific shape and density modeling of the lumbar spine from 2D DXA images for osteoporosis assessment ». Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/666513.

Abstract:
Osteoporosis is the most common bone disease, with a significant morbidity and mortality caused by the increase of bone fragility and susceptibility to fracture. Dual Energy X-ray Absorptiometry (DXA) is the gold standard technique for osteoporosis and fracture risk evaluation at the spine. However, the standard analysis of DXA images only provides 2D measurements and does not differentiate between bone compartments; nor does it specifically assess bone density in the vertebral body, which is where most osteoporotic fractures occur. Quantitative Computed Tomography (QCT) is an alternative technique that overcomes the limitations of DXA-based diagnosis. However, due to its high cost and radiation dose, QCT is not used for osteoporosis management. In this thesis, a method providing a 3D subject-specific shape and density estimation of the lumbar spine from a single anteroposterior DXA image is proposed. The method is based on a 3D statistical shape and density model built from a training set of QCT scans. The 3D subject-specific shape and density estimation is obtained by registering and fitting the statistical model onto the DXA image. Cortical and trabecular bone compartments are segmented using a model-based algorithm. 3D measurements are performed at different vertebral regions and bone compartments. The accuracy of the proposed methods is evaluated by comparing DXA-derived to QCT-derived 3D measurements. Two case-control studies are also performed: a retrospective study evaluating the ability of DXA-derived 3D measurements at the lumbar spine to discriminate between subjects with osteoporosis-related vertebral fractures and controls; and a study evaluating the association between DXA-derived 3D measurements at the lumbar spine and osteoporosis-related hip fractures. In both studies, stronger associations are found between osteoporosis-related fractures and DXA-derived 3D measurements compared to standard 2D measurements. The technology developed within this thesis offers an insightful 3D analysis of the lumbar spine, which could potentially improve osteoporosis and fracture risk assessment in patients who had a standard DXA scan of the lumbar spine without any additional examination.
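The approach relies on a statistical shape and density model learned from a QCT training set. The sketch below illustrates only the generic shape part: a PCA model built from corresponded training shapes that can then instantiate new shapes from mode weights. The thesis's actual model also encodes density and is fitted to the DXA projection; the names and the number of modes here are assumptions for illustration.

import numpy as np

def build_shape_model(training_shapes, n_modes=10):
    # training_shapes: (M, 3N) array, one flattened, corresponded vertex vector
    # per training subject. Returns the mean shape, the first PCA modes and
    # the variance explained by each mode.
    mean_shape = training_shapes.mean(axis=0)
    X = training_shapes - mean_shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD of centred data
    modes = Vt[:n_modes].T                             # (3N, n_modes)
    variances = (s[:n_modes] ** 2) / (len(X) - 1)
    return mean_shape, modes, variances

def instantiate_shape(mean_shape, modes, variances, weights):
    # New shape = mean + weighted modes, with weights expressed in standard
    # deviations along each mode (the usual SSM parameterisation).
    return mean_shape + modes @ (np.asarray(weights) * np.sqrt(variances))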
4

Wasswa, William. « 3D approximation of scapula bone shape from 2D X-ray images using landmark-constrained statistical shape model fitting ». Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/23777.

Abstract:
Two-dimensional X-ray imaging is the dominant imaging modality in low-resource countries despite the existence of three-dimensional (3D) imaging modalities. This is because fewer hospitals in low-resource countries can afford the 3D imaging systems as their acquisition and operation costs are higher. However, 3D images are desirable in a range of clinical applications, for example surgical planning. The aim of this research was to develop a tool for 3D approximation of scapula bone from 2D X-ray images using landmark-constrained statistical shape model fitting. First, X-ray stereophotogrammetry was used to reconstruct the 3D coordinates of points located on 2D X-ray images of the scapula, acquired from two perspectives. A suitable calibration frame was used to map the image coordinates to their corresponding 3D real-world coordinates. The 3D point localization yielded average errors of (0.14, 0.07, 0.04) mm in the X, Y and Z coordinates respectively, and an absolute reconstruction error of 0.19 mm. The second phase assessed the reproducibility of the scapula landmarks reported by Ohl et al. (2010) and Borotikar et al. (2015). Only three (the inferior angle, acromion and the coracoid process) of the eight reproducible landmarks considered were selected as these were identifiable from the two different perspectives required for X-ray stereophotogrammetry in this project. For the last phase, an approximation of a scapula was produced with the aid of a statistical shape model (SSM) built from a training dataset of 84 CT scapulae. This involved constraining an SSM to the 3D reconstructed coordinates of the selected reproducible landmarks from 2D X-ray images. Comparison of the approximate model with a CT-derived ground truth 3D segmented volume resulted in surface-to-surface average distances of 4.28 mm and 3.20 mm, using three and sixteen landmarks respectively. Hence, increasing the number of landmarks produces a posterior model that makes better predictions of patient-specific reconstructions. An average Euclidean distance of 1.35 mm was obtained between the three selected landmarks on the approximation and the corresponding landmarks on the CT image. Conversely, a Euclidean distance of 5.99 mm was obtained between the three selected landmarks on the original SSM and corresponding landmarks on the CT image. The Euclidean distances confirm that a posterior model moves closer to the CT image, hence it reduces the search space for a more exact patient-specific 3D reconstruction by other fitting algorithms.
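The first phase reconstructs 3D landmark coordinates from two X-ray views. A minimal, generic sketch of that step is linear (DLT) triangulation of one landmark from two views, assuming the calibration frame has already provided a 3x4 projection matrix per view; this is not necessarily the exact formulation used in the dissertation.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one 3D landmark from two views.
    # P1, P2: (3, 4) projection matrices obtained from calibration;
    # x1, x2: (2,) image coordinates of the same landmark in each view.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # least-squares solution = last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]             # homogeneous -> Euclidean 3D coordinates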
5

Karlsson, Edlund Patrick. « Methods and models for 2D and 3D image analysis in microscopy, in particular for the study of muscle cells ». Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9201.

6

Hua, Xiaoben, and Yuxia Yang. « A Fusion Model For Enhancement of Range Images ». Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.

Abstract:
In this thesis, we present a new way to enhance depth-map images, which we call the fusion of depth images. The goal of the thesis is to enhance depth images through a fusion of different classification methods. To this end, we use three similar but distinct methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to compute the enhanced output. We then compare the enhanced result with the original depth images; the comparison indicates the effectiveness of our methodology.
7

Ben, Abdallah Hamdi. « Inspection d'assemblages aéronautiques par vision 2D/3D en exploitant la maquette numérique et la pose estimée en temps réel Three-dimensional point cloud analysis for automatic inspection of complex aeronautical mechanical assemblies Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images ». Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0001.

Abstract:
This thesis is part of research aimed at innovative digital tools in the service of what is commonly referred to as the Factory of the Future. Our work was conducted in the scope of the joint research laboratory "Inspection 4.0" founded by IMT Mines Albi/ICA and the company DIOTA, which specializes in the development of numerical tools for Industry 4.0. In this thesis, we were interested in the development of systems exploiting 2D images and/or 3D point clouds for the automatic inspection of complex aeronautical mechanical assemblies (typically an aircraft engine). The CAD (Computer Aided Design) model of the assembly is at our disposal and our task is to verify that the assembly has been correctly assembled, i.e. that all the elements constituting the assembly are present, in the right position and at the right place. The CAD model serves as a reference. We have developed two inspection scenarios that exploit the inspection systems designed and implemented by DIOTA: (1) a scenario based on a tablet equipped with a camera, carried by a human operator for real-time interactive control, and (2) a scenario based on a robot equipped with sensors (two cameras and a 3D scanner) for fully automatic control. In both scenarios, a so-called localisation camera provides in real time the pose between the CAD model and the sensors, which allows the 3D digital model to be linked directly with the 2D images or the 3D point clouds analysed. We first developed 2D inspection methods, based solely on the analysis of 2D images. Then, for certain types of inspection that could not be performed using 2D images only (typically those requiring the measurement of 3D distances), we developed 3D inspection methods based on the analysis of 3D point clouds. For the 3D inspection of electrical cables, we proposed an original method for segmenting a cable within a point cloud. We have also tackled the problem of automatic selection of the best viewpoint, which allows the inspection sensor to be placed in an optimal observation position. The developed methods have been validated on many industrial cases. Some of the inspection algorithms developed during this thesis have been integrated into the DIOTA Inspect© software and are used daily by DIOTA's customers to perform inspections on industrial sites.
8

Truong, Michael Vi Nguyen. « 2D-3D registration of cardiac images ». Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/2d3d-registration-of-cardiac-images(afef93e6-228c-4bc7-aab0-94f1e1ecf006).html.

Abstract:
This thesis describes two novel catheter-based 2D-3D cardiac image registration algorithms for overlaying preoperative 3D MR or CT data onto intraoperative fluoroscopy, and fusing electroanatomical data onto clinical images. The work is intended for use in cardiac catheterisation procedures. To fulfil this objective, the algorithms must be accurate, robust and minimally disruptive to the clinical workflow. The first algorithm relies on the catheterisation of vessels of the heart and registers by minimising a vessel-radius-weighted distance between the catheters and corresponding vessel centrelines. A novelty here is a global-fit search strategy that considers all vessel branches during registration, adding robustness and avoiding manual branch selection. Another contribution to knowledge is an analysis of catheter configurations for registration. Results show that accuracy is highly dependent on the catheter configuration, and that using a coronary vessel (CV) with the aorta (Ao) was most accurate, yielding mean 3D target registration errors (TRE) between 0.55 and 7.0 mm with phantom data. Using two large-diameter vessels was least accurate, with TRE between 10 and 43 mm, and should be avoided. When applied to clinical data, registrations with the CV/Ao configuration resulted in an estimated mean 2D-TRE of 5.9 mm on average. The second 2D-3D registration algorithm extends the novelty of exploring catheter configurations by registering using catheters looped inside chambers of the heart. In phantom experiments, two-view registration yielded an average accuracy of 4.0 mm 3D-TRE (7.8-mm capture range). Using a single view, average reprojection distance was 2.7 mm (6.0-mm capture range). Application of the algorithm to a clinical dataset resulted in an estimated average 2D-TRE of 10 mm. Single view registrations are ideal when biplane X-ray acquisition is undesirable and for correcting bulk patient motion. In current practice, registration is performed manually. The algorithms in this thesis can register with comparable accuracy to manual registration, but are automated and can therefore fit better with the clinical workflow.
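The first algorithm registers by minimising a vessel-radius-weighted distance between catheter points and vessel centrelines. The sketch below shows one plausible form of that cost (nearest centreline point per catheter point, weighted by inverse vessel radius); the actual weighting and pose parameterisation in the thesis may differ, and a pose search would minimise this cost over the transformation parameters, for example with scipy.optimize.minimize.

import numpy as np
from scipy.spatial import cKDTree

def weighted_registration_cost(catheter_pts, centreline_pts, centreline_radii):
    # Vessel-radius-weighted distance between reconstructed catheter points and
    # their nearest centreline points. The inverse-radius weighting (narrow
    # vessels constrain the pose more tightly) is an assumption for illustration.
    tree = cKDTree(centreline_pts)
    dists, idx = tree.query(catheter_pts)   # nearest centreline point per catheter point
    weights = 1.0 / centreline_radii[idx]
    return np.sum(weights * dists ** 2) / np.sum(weights)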
9

Jones, Jonathan-Lee. « 2D and 3D segmentation of medical images ». Thesis, Swansea University, 2015. https://cronfa.swan.ac.uk/Record/cronfa42504.

Abstract:
Cardiovascular disease is one of the leading causes of morbidity and mortality in the western world today. Many different imaging modalities are in place today to diagnose and investigate cardiovascular diseases. Each of these, however, has strengths and weaknesses. There are different forms of noise and artifacts in each image modality that combine to make the field of medical image analysis both important and challenging. The aim of this thesis is to develop a reliable method for segmentation of vessel structures in medical imaging, combining the expert knowledge of the user in such a way as to maintain efficiency whilst overcoming the inherent noise and artifacts present in the images. We present results from 2D segmentation techniques using different methodologies, before developing 3D techniques for segmenting vessel shape from a series of images. The main drive of the work involves the investigation of medical images obtained using catheter-based techniques, namely Intra Vascular Ultrasound (IVUS) and Optical Coherence Tomography (OCT). We present a robust segmentation paradigm, combining both edge and region information to segment the media-adventitia and lumenal borders in those modalities respectively, using a semi-interactive method with "soft" constraints that allows imprecise user input and provides a balance between the user's expert knowledge and efficiency. In the later part of the work, we develop automatic methods for segmenting the walls of lymph vessels. These methods are employed on sequential images in order to obtain data to reconstruct the vessel walls in the region of the lymph valves. We investigated methods to segment the vessel walls both individually and simultaneously, and compared the results both quantitatively and qualitatively in order to determine the most appropriate for the 3D reconstruction of the vessel wall. Lastly, we adapt the semi-interactive method used earlier on vessels to 3D to help segment the lymph valve: the interactive method provides guidance for segmenting the boundary of the lymph vessel, and a minimal surface segmentation methodology then segments the valve.
10

Liu, Jianxin. « A porosity-based model for coupled thermal-hydraulic-mechanical processes ». University of Western Australia. Centre for Petroleum, Fuels and Energy, 2010. http://theses.library.uwa.edu.au/adt-WU2010.0113.

Abstract:
[Truncated abstract] Rocks, as the host to natural chains of coupled thermal, hydraulic and mechanical processes, are heterogeneous at a variety of length scales, and in their mechanical properties, as well as in the hydraulic and thermal transport properties. Rock heterogeneity affects the ultimate hydro-carbon recovery or geothermal energy production. This heterogeneity has been considered one important and difficult problem that needs to be taken into account for its effect on the coupled processes. The aim of this thesis is to investigate the effect of rock heterogeneity on multi-physical processes. A fully coupled finite element model, hereinafter referred to as a porosity-based model (PBM), was developed to characterise the thermal-hydraulic-mechanical (THM) coupling processes. The development of the PBM consists of a two-staged workflow. First, based on poromechanics, porosity, one of the inherent rock properties, was derived as a variant function of the thermal, hydraulic and mechanical effects. Then, empirical relations or experimental results, correlating porosity with the mechanical, hydraulic and thermal properties, were incorporated as the coupling effects. In the PBM, the bulk volume of the model is assumed to be changeable. The rate of the volumetric strain was derived as the difference of two parts: the first part is the change in volume per unit of volume and per unit of time (this part was traditionally considered the rate of volumetric strain); and the second is the product of the first part and the volumetric strain. The second part makes the PBM a significant advancement of the models reported in the literature. ... impact of the rock heterogeneity on the hydro-mechanical responses because of the requirement of large memory and long central processing unit (CPU) time for the 3D applications. In the 2D PBM applications, as the thermal boundary condition is applied to the rock samples containing some fractures, the pore pressure is generated by the thermal gradient. Some pore pressure islands can be generated as the statistical model and the digital image model are applied to characterise the initial porosity distribution. However, by using the homogeneous model, this phenomenon cannot be produced. In the 3D PBM applications, the existing fractures become the preferential paths for the fluid flowing inside the numerical model. The numerical results show that the PBM is sufficiently reliable to account for the rock mineral distribution in the hydro-mechanical coupling processes. The applications of the statistical method and the digital image processing technique make it possible to visualise the rock heterogeneity effect on the pore pressure distribution and the heat dissipation inside the rock model. Monitoring the fluid flux demonstrates the impact of the rock heterogeneity on the fluid product, which is of concern in petroleum engineering. The overall fluid flux (OFF) is mostly overestimated when the rock and fluid properties are assumed to be homogeneous. The 3D PBM application is an example. As the rock is heterogeneous, the OFF obtained with the digital core is almost the same as that obtained with the homogeneous model (this is because some fractures running through the digital core become the preferential path for the fluid flow), and around 1.5 times that obtained with the statistical model.
11

Huang, Hui. « Efficient reconstruction of 2D images and 3D surfaces ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2821.

Abstract:
The goal of this thesis is to gain a deep understanding of inverse problems arising from 2D image and 3D surface reconstruction, and to design effective techniques for solving them. Both computational and theoretical issues are studied and efficient numerical algorithms are proposed. The first part of this thesis is concerned with the recovery of 2D images, e.g., de-noising and de-blurring. We first consider implicit methods that involve solving linear systems at each iteration. An adaptive Huber regularization functional is used to select the most reasonable model and a global convergence result for lagged diffusivity is proved. Two mechanisms---multilevel continuation and multigrid preconditioning---are proposed to improve efficiency for large-scale problems. Next, explicit methods involving the construction of an artificial time-dependent differential equation model followed by forward Euler discretization are analyzed. A rapid, adaptive scheme is then proposed, and additional hybrid algorithms are designed to improve the quality of such processes. We also devise methods for more challenging cases, such as recapturing texture from a noisy input and de-blurring an image in the presence of significant noise. It is well-known that extending image processing methods to 3D triangular surface meshes is far from trivial or automatic. In the second part of this thesis we discuss techniques for faithfully reconstructing such surface models with different features. Some models contain a lot of small yet visually meaningful details, and typically require very fine meshes to represent them well; others consist of large flat regions, long sharp edges (creases) and distinct corners, and the meshes required for their representation can often be much coarser. All of these models may be sampled very irregularly. For models of the first class, we methodically develop a fast multiscale anisotropic Laplacian (MSAL) smoothing algorithm. To reconstruct a piecewise smooth CAD-like model in the second class, we design an efficient hybrid algorithm based on specific vertex classification, which combines K-means clustering and geometric a priori information. Hence, we have a set of algorithms that efficiently handle smoothing and regularization of meshes large and small in a variety of situations.
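For the explicit methods mentioned above (an artificial time-dependent PDE discretised with forward Euler), a minimal sketch is the following edge-preserving diffusion with a Huber-type diffusivity; the step size, number of iterations and Huber threshold are illustrative, not the tuned values from the thesis.

import numpy as np

def explicit_diffusion_denoise(img, n_steps=50, dt=0.2, delta=0.05):
    # Forward-Euler time stepping of an edge-preserving diffusion equation.
    # The diffusivity is Huber-like: close to 1 in flat regions, reduced to
    # delta/|grad u| across strong gradients so edges are preserved.
    u = img.astype(float).copy()
    for _ in range(n_steps):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        g = 1.0 / np.maximum(1.0, np.hypot(ux, uy) / delta)   # Huber-type weight
        div = np.gradient(g * ux, axis=1) + np.gradient(g * uy, axis=0)
        u += dt * div                                          # explicit Euler update
    return u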
12

Henrichsen, Arne. « 3D reconstruction and camera calibration from 2D images ». Master's thesis, University of Cape Town, 2000. http://hdl.handle.net/11427/9725.

Abstract:
Includes bibliographical references.
A 3D reconstruction technique from stereo images is presented that needs minimal intervention from the user. The reconstruction problem consists of three steps, each of which is equivalent to the estimation of a specific geometry group. The first step is the estimation of the epipolar geometry that exists between the stereo image pair, a process involving feature matching in both images. The second step estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. Camera calibration forms part of the third step in obtaining the metric geometry, from which it is possible to obtain a 3D model of the scene. The advantage of this system is that the stereo images do not need to be calibrated in order to obtain a reconstruction. Results for both the camera calibration and reconstruction are presented to verify that it is possible to obtain a 3D model directly from features in the images.
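The first step, estimating the epipolar geometry from feature matches, is classically done with the normalised eight-point algorithm. A hedged sketch is given below, assuming at least eight point matches in pixel coordinates; it is a generic textbook formulation rather than the specific estimator used in this dissertation.

import numpy as np

def normalise(pts):
    # Translate/scale points so their centroid is at the origin and the mean
    # distance to it is sqrt(2) (Hartley normalisation).
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def fundamental_matrix(x1, x2):
    # Normalised eight-point estimate of F from N >= 8 matches x1 <-> x2
    # given in pixel coordinates (x2^T F x1 = 0 for each match).
    p1, T1 = normalise(x1)
    p2, T2 = normalise(x2)
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])  # rows: kron(x2_i, x1_i)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)                                  # enforce rank 2
    F = U @ np.diag([s[0], s[1], 0]) @ Vt
    return T2.T @ F @ T1                                         # undo the normalisation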
13

Bowden, Nathan Charles. « Camera based texture mapping : 3D applications for 2D images ». Texas A&M University, 2005. http://hdl.handle.net/1969.1/2407.

Abstract:
This artist's area of research is the appropriate use of matte paintings within the context of completely computer generated films. The emphasis of research is the adaptation of analog techniques and paradigms into a digital production workspace. The purpose of this artist's research is the development of an original method of parenting perspective projections to three-dimensional (3D) cameras, specifically tailored to result in 3D matte paintings. Research includes the demonstration of techniques combining two-dimensional (2D) paintings, 3D props and sets, as well as camera projections onto primitive geometry to achieve a convincing final composite.
14

Allalou, Amin. « Methods for 2D and 3D Quantitative Microscopy of Biological Samples ». Doctoral thesis, Uppsala universitet, Centrum för bildanalys, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-159196.

Abstract:
New microscopy techniques are continuously developed, resulting in more rapid acquisition of large amounts of data. Manual analysis of such data is extremely time-consuming and many features are difficult to quantify without the aid of a computer. But with automated image analysis, biologists can extract quantitative measurements and increase throughput significantly, which becomes particularly important in high-throughput screening (HTS). This thesis addresses automation of traditional analysis of cell data as well as automation of both image capture and analysis in zebrafish high-throughput screening. It is common in microscopy images to stain the nuclei in the cells, and to label the DNA and proteins in different ways. Padlock-probing and proximity ligation are highly specific detection methods that produce point-like signals within the cells. Accurate signal detection and segmentation is often a key step in analysis of these types of images. Cells in a sample will always show some degree of variation in DNA and protein expression, and to quantify these variations each cell has to be analyzed individually. This thesis presents development and evaluation of single cell analysis on a range of different types of image data. In addition, we present a novel method for signal detection in three dimensions. HTS systems often use a combination of microscopy and image analysis to analyze cell-based samples. However, many diseases and biological pathways can be better studied in whole animals, particularly those that involve organ systems and multi-cellular interactions. The zebrafish is a widely-used vertebrate model of human organ function and development. Our collaborators have developed a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae. This thesis presents improvements to the system, including accurate positioning of the fish which incorporates methods for detecting regions of interest, making the system fully automatic. Furthermore, the thesis describes a novel high-throughput tomography system for screening live zebrafish in both fluorescence and bright field microscopy. This 3D imaging approach combined with automatic quantification of morphological changes enables previously intractable high-throughput screening of vertebrate model organisms.
15

Allouch, Yair. « Multi scale geometric segmentation on 2D and 3D Digital Images / ». [Beer Sheva] : Ben Gurion University of the Negev, 2007. http://aranne5.lib.ad.bgu.ac.il/others/AlloucheYair.pdf.

16

Dowell, Rachel J. (Rachel Jean). « Registration of 2D ultrasound images in preparation for 3D reconstruction ». Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10181.

17

Cheng, Yuan 1971. « 3D reconstruction from 2D images and applications to cell cytoskeleton ». Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88870.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, February 2001.
Includes bibliographical references (leaves 121-129).
Approaches to achieve three dimensional (3D) reconstruction from 2D images can be grouped into two categories: computer-vision-based reconstruction and tomographic reconstruction. By exploring both the differences and connections between these two types of reconstruction, the thesis attempts to develop a new technique that can be applied to 3D reconstruction of biological structures. Specific attention is given to the reconstruction of the cell cytoskeleton from electron microscope images. The thesis is composed of two parts. The first part studies computer-vision-based reconstruction methods that extract 3D information from geometric relationships among images. First, a multiple-feature-based stereo reconstruction algorithm that recovers the 3D structure of an object from two images is presented. A volumetric reconstruction method is then developed by extending the algorithm to multiple images. The method integrates a sequence of 3D reconstructions from different stereo pairs. It achieves a globally optimized reconstruction by evaluating certainty values of each stereo reconstruction. This method is tuned and applied to 3D reconstruction of the cell cytoskeleton. Feasibility, reliability and flexibility of the method are explored. The second part of the thesis focuses on a special tomographic reconstruction, discrete tomography, where the object to be reconstructed is composed of a discrete set of materials each with uniform values. A Bayesian labeling process is proposed as a framework for discrete tomography. The process uses an expectation-maximization (EM) algorithm with which the reconstruction is obtained efficiently. Results demonstrate that the proposed algorithm achieves high reconstruction quality even with a small number of projections. An interesting relationship between discrete tomography and conventional tomography is also derived, showing that discrete tomography is a more generalized form of tomography and conventional tomography is only a special case of such generalization.
by Yuan Cheng.
Ph.D.
18

Mertzanidou, T. « Automatic correspondence between 2D and 3D images of the breast ». Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1362435/.

Abstract:
Radiologists often need to localise corresponding findings in different images of the breast, such as Magnetic Resonance Images and X-ray mammograms. However, this is a difficult task, as one is a volume and the other a projection image. In addition, the appearance of breast tissue structure can vary significantly between them. Some breast regions are often obscured in an X-ray, due to its projective nature and the superimposition of normal glandular tissue. Automatically determining correspondences between the two modalities could assist radiologists in the detection, diagnosis and surgical planning of breast cancer. This thesis addresses the problems associated with the automatic alignment of 3D and 2D breast images and presents a generic framework for registration that uses the structures within the breast for alignment, rather than surrogates based on the breast outline or nipple position. The proposed algorithm can adapt to incorporate different types of transformation models, in order to capture the breast deformation between modalities. The framework was validated on clinical MRI and X-ray mammography cases using both simple geometrical models, such as the affine, and also more complex ones that are based on biomechanical simulations. The results showed that the proposed framework with the affine transformation model can provide clinically useful accuracy (13.1mm when tested on 113 registration tasks). The biomechanical transformation models provided further improvement when applied on a smaller dataset. Our technique was also tested on determining corresponding findings in multiple X-ray images (i.e. temporal or CC to MLO) for a given subject using the 3D information provided by the MRI. Quantitative results showed that this approach outperforms 2D transformation models that are typically used for this task. The results indicate that this pipeline has the potential to provide a clinically useful tool for radiologists.
19

Phan, Tan Binh. « On the 3D hollow organ cartography using 2D endoscopic images ». Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0135.

Abstract:
Structure from motion (SfM) algorithms represent an efficient means to construct extended 3D surfaces using images of a scene acquired from different viewpoints. SfM methods simultaneously determine the camera motion and a 3D point cloud lying on the surfaces to be recovered. Classical SfM algorithms use feature point detection and matching methods to track homologous points across the image sequences, each point track corresponding to a 3D point to be reconstructed. The SfM algorithms exploit the correspondences between homologous points to recover the 3D scene structure and the successive camera poses in an arbitrary world coordinate system. There exist different state-of-the-art SfM algorithms which can efficiently reconstruct different types of scenes, under the condition that the images include enough textures or structures. However, most of the existing solutions are inappropriate, or at least not optimal, when the image sequences contain little or no texture. This thesis proposes two dense optical flow (DOF)-based SfM solutions to reconstruct complex scenes using images with few textures and acquired under changing illumination conditions. It is notably shown how accurate DOF fields can be optimally exploited thanks to an image selection strategy which both maximizes the number and size of homologous point sets and minimizes the errors in homologous point localization. The accuracy of the proposed 3D cartography methods is assessed on phantoms with known dimensions. The robustness and the interest of the proposed methods are demonstrated on various complex medical scenes using a constant algorithm parameter set. The proposed solutions reconstructed organs seen in different medical examinations (epithelial surface of the inner stomach wall, inner epithelial bladder surface, and the skin surface in dermatology) and various imaging modalities (white light for all examinations, green-blue light in gastroscopy and fluorescence in cystoscopy).
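A key ingredient above is the image selection strategy driven by dense optical flow. The sketch below illustrates the general idea with OpenCV's Farneback dense flow: a frame becomes a new keyframe once the median displacement since the last keyframe is large enough to give a useful baseline. The threshold and the flow parameters are illustrative assumptions, not the criteria actually used in the thesis.

import cv2
import numpy as np

def select_keyframes(frames, min_disp=3.0):
    # frames: list of BGR images. A frame becomes a keyframe once the median
    # dense-flow displacement since the last keyframe exceeds min_disp pixels.
    keyframes = [0]
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for i in range(1, len(frames)):
        gray = cv2.cvtColor(frames[i], cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(ref, gray, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        disp = np.median(np.hypot(flow[..., 0], flow[..., 1]))
        if disp >= min_disp:
            keyframes.append(i)
            ref = gray
    return keyframes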
20

Coxon, Thomas Liam. « 2D and 3D phenotyping murine models of Amelogenesis imperfecta ». Thesis, University of Liverpool, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539608.

21

Mastin, Dana Andrew. « Statistical methods for 2D-3D registration of optical and LIDAR images ». Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55123.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 121-123).
Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the LIDAR point cloud, which is a camera pose estimation problem. We propose a novel application of mutual information registration which exploits statistical dependencies in urban scenes, using variables such as LIDAR elevation, LIDAR probability of detection (pdet), and optical luminance. We employ the well known downhill simplex optimization to infer camera pose parameters. Utilization of OpenGL and graphics hardware in the optimization process yields registration times on the order of seconds. Using an initial registration comparable to GPS/INS accuracy, we demonstrate the utility of our algorithms with a collection of urban images. Our analysis begins with three basic methods for measuring mutual information. We demonstrate the utility of the mutual information measures with a series of probing experiments and registration tests. We improve the basic algorithms with a novel application of foliage detection, where the use of only non-foliage points improves registration reliability significantly. Finally, we show how the use of an existing registered optical image can be used in conjunction with foliage detection to achieve even more reliable registration.
by Dana Andrew Mastin.
S.M.
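The registration above maximises mutual information between projected LIDAR attributes and optical luminance. A minimal sketch of the similarity measure, computed from a joint histogram of two co-registered scalar images, is given below; a downhill-simplex (Nelder-Mead) search over camera pose parameters would then maximise this score, for example via scipy.optimize.minimize with method='Nelder-Mead'. The bin count and variable names are illustrative.

import numpy as np

def mutual_information(a, b, bins=64):
    # Mutual information between two co-registered scalar images (for example
    # projected LIDAR elevation or pdet versus optical luminance), estimated
    # from their joint histogram.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))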
22

Qiu, Xuchong. « 2D and 3D Geometric Attributes Estimation in Images via deep learning ». Thesis, Marne-la-vallée, ENPC, 2021. http://www.theses.fr/2021ENPC0005.

Abstract:
The visual perception of 2D and 3D geometric attributes (e.g. translation, rotation, spatial size, etc.) is important in robotic applications. It helps a robotic system build knowledge about its surrounding environment and can serve as the input for downstream tasks such as motion planning and physical interaction with objects. The main goal of this thesis is to automatically detect the positions and poses of objects of interest for robotic manipulation tasks. In particular, we are interested in the low-level task of estimating occlusion relationships to discriminate different objects, and in the high-level tasks of object visual tracking and object pose estimation. The first focus is tracking the object of interest with correct locations and sizes in a given video. We first study systematically the tracking framework based on discriminative correlation filters (DCF) and propose to leverage semantic information in two tracking stages: the visual feature encoding stage and the target localization stage. Our experiments demonstrate that the involvement of semantics improves the performance of both localization and size estimation in our DCF-based tracking framework. We also analyse failure cases. The second focus is using object shape information to improve the performance of object 6D pose estimation and to perform object pose refinement. We propose to estimate the 2D projections of object 3D surface points with deep models to recover object 6D poses. Our results show that the proposed method benefits from the large number of 3D-to-2D point correspondences and achieves better performance. As a second part, we study the constraints of existing object pose refinement methods and develop a pose refinement method for objects in the wild. Our experiments demonstrate that our models trained on either real data or generated synthetic data can refine pose estimates for objects in the wild, even though these objects are not seen during training. The third focus is studying geometric occlusion in single images to better discriminate objects in the scene. We first formalize the definition of geometric occlusion and propose a method to automatically generate high-quality occlusion annotations. Then we propose a new occlusion relationship formulation (i.e. abbnom) and the corresponding inference method. Experiments on occlusion reasoning benchmarks demonstrate the superiority of the proposed formulation and method. To recover accurate depth discontinuities, we also propose a depth map refinement method and a single-stage monocular depth estimation method. All the methods that we propose leverage the versatility and power of deep learning. This should facilitate their integration in the visual perception module of modern robotic systems. Besides the above methodological advances, we also made available software (for occlusion and pose estimation) and datasets (of high-quality occlusion information) as a contribution to the scientific community.
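The pose estimation part described above recovers a 6D pose from predicted 2D projections of known 3D surface points, which is a PnP problem. A hedged sketch using OpenCV's standard RANSAC PnP solver is shown below; the thesis's own solver and its handling of the dense correspondences may differ.

import cv2
import numpy as np

def pose_from_correspondences(model_pts, image_pts, K):
    # model_pts: (N, 3) object-frame surface points, image_pts: (N, 2) their
    # predicted 2D projections, K: (3, 3) camera intrinsics. RANSAC adds
    # robustness against badly predicted correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_pts.astype(np.float32), image_pts.astype(np.float32),
        K.astype(np.float32), None)        # None: no lens distortion assumed
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers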
23

Armande, Nasser. « Caractérisation de réseaux fins dans les images 2D et 3D. Applications : images satellites et médicales ». Paris 11, 1997. http://www.theses.fr/1997PA112094.

Abstract:
The work carried out in this thesis belongs to the segmentation stage of a computer vision system. Segmentation aims to represent the useful and relevant information in an image in a compact way. Various mathematical tools can be used to carry out this stage properly. Differential geometry, and in particular the differential properties of parametric surfaces, has been used by many researchers for segmentation and low-level image processing, and constitutes the basic mathematical tool of our research. It provides an efficient and formal way to solve many problems related to the characterization and extraction of different types of primitives and visual cues in grey-level images. The original work carried out in this thesis is a set of approaches based on the differential properties of parametric surfaces. These properties are used to characterize and extract thin structures, called thin networks, in the image. Our methodology consists in representing the image as a surface, called the image surface, in order to exploit a number of its differential properties. We show that, among these properties, the principal curvatures of the image surface and their associated directions make it possible to characterize and extract 2D thin networks. A study of the behaviour of these structures in scale space is carried out in order to relate the width of a thin network to the processing scale, which leads to the detection of 2D thin networks of different widths. We also show that a simple 3D extension of the approach guarantees the extraction of this type of structure (3D thin networks) in 3D images. The search for thin networks is carried out on satellite and medical images, for which these structures correspond respectively to road networks and blood vessels.
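The 2D detection described above relies on the principal curvatures of the image surface at a scale matched to the expected network width. A minimal sketch of the underlying computation, using Gaussian-derivative second derivatives and the eigenvalues of the 2x2 Hessian as a curvature measure, is given below; it is a generic ridge-strength filter, not the exact formulation of the thesis.

import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(image, sigma=2.0):
    # Second derivatives of the image at scale sigma (Gaussian derivatives).
    img = image.astype(float)
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]] at every pixel.
    tmp = np.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2)
    k1 = 0.5 * (Ixx + Iyy + tmp)
    k2 = 0.5 * (Ixx + Iyy - tmp)
    # Keep the eigenvalue of largest magnitude: strongly negative along bright
    # thin lines, strongly positive along dark ones; sigma relates to line width.
    return np.where(np.abs(k2) > np.abs(k1), k2, k1)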
Styles APA, Harvard, Vancouver, ISO, etc.
24

Sdiri, Bilel. « 2D/3D Endoscopic image enhancement and analysis for video guided surgery ». Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.

Texte intégral
Résumé :
Grâce à l'évolution des procédés de diagnostic médical et aux développements technologiques, la chirurgie mini-invasive a fait des progrès remarquables au cours des dernières décennies, surtout avec l'innovation de nouveaux outils médicaux tels que les systèmes chirurgicaux robotisés et les caméras endoscopiques sans fil. Cependant, ces techniques souffrent de quelques limitations liées essentiellement à l'environnement endoscopique, telles que la non-uniformité de l'éclairage, les réflexions spéculaires des tissus humides, le faible contraste/netteté et le flou dû aux mouvements du chirurgien et du patient (i.e. la respiration). La correction de ces dégradations repose sur des critères de qualité d'image subjective et objective dans le contexte médical. Il est primordial de développer des solutions d'amélioration de la qualité perceptuelle des images acquises par endoscopie 3D. Ces solutions peuvent servir plus particulièrement dans l'étape d'extraction de points d'intérêt pour la reconstruction 3D des organes, qui sert à la planification de certaines opérations chirurgicales. C'est dans cette optique que cette thèse aborde le problème de la qualité des images endoscopiques en proposant de nouvelles méthodes d'analyse et de rehaussement de contraste des images endoscopiques 2D et 3D. Pour la détection et la classification automatique des anomalies tissulaires pour le diagnostic des maladies du tractus gastro-intestinal, nous avons proposé une méthode de rehaussement de contraste local et global des images endoscopiques 2D classiques et pour l'endoscopie capsulaire sans fil. La méthode proposée améliore la visibilité des structures locales fines et des détails de tissus. Ce prétraitement a permis de faciliter le processus de détection des points caractéristiques et d'améliorer le taux de classification automatique des tissus néoplasiques et tumeurs bénignes. Les méthodes développées exploitent également la propriété d'attention visuelle et de perception de relief en stéréovision. Dans ce contexte, nous avons proposé une technique adaptative d'amélioration de la qualité des images stéréo endoscopiques combinant l'information de profondeur et les contours des tissus. Pour rendre la méthode plus efficace et adaptée aux images 3D, le rehaussement de contraste est ajusté en fonction des caractéristiques locales de l'image et du niveau de profondeur dans la scène, tout en contrôlant le traitement inter-vues par un modèle de perception binoculaire. Un test subjectif a été mené pour évaluer la performance de l'algorithme proposé en termes de qualité visuelle des images générées, par des observateurs experts et non experts dont les scores ont démontré l'efficacité de notre technique 3D d'amélioration du contraste. Dans cette même optique, nous avons développé une autre technique de rehaussement du contraste des images endoscopiques stéréo basée sur la décomposition en ondelettes, ce qui offre la possibilité d'effectuer un traitement multi-échelle et d'opérer un traitement sélectif. Le schéma proposé repose sur un traitement stéréo qui exploite à la fois l'information de profondeur et les redondances inter-vues, ainsi que certaines propriétés du système visuel humain, notamment la sensibilité au contraste et à la rivalité/combinaison binoculaire. La qualité visuelle des images traitées et les mesures de qualité objective démontrent l'efficacité de notre méthode, qui ajuste l'éclairage des images dans les régions sombres et saturées et accentue la visibilité des détails liés aux vaisseaux sanguins et aux textures de tissus.
Minimally invasive surgery has made remarkable progress in the last decades and has become a very popular diagnosis and treatment tool, especially with the rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Due to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and many artifacts such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have prompted the question of endoscopic image quality, which needs to be enhanced. The latter process aims either to provide the surgeons/doctors with better visual feedback or to improve the outcomes of subsequent tasks such as feature extraction for 3D organ reconstruction and registration. This thesis addresses the problem of endoscopic image quality enhancement by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e. 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastro-intestinal tract disease diagnosis, we proposed a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy, improving both local and global contrast. The proposed method exposes inner subtle structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we proposed in another work an adaptive enhancement technique for stereo endoscopic images combining depth and edginess information. The adaptability of the proposed method consists in adjusting the enhancement to both local image activity and depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality by both expert and non-expert observers, whose scores demonstrated the efficiency of our 3D contrast enhancement technique. In the same scope, in another recent stereo endoscopic image enhancement work we resort to the wavelet domain to target the enhancement towards specific image components, using the multiscale representation and the efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit the inter-view redundancies together with perceptual human visual system properties related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting the image illumination in both dark and saturated regions and emphasizing local image details such as fine veins and micro vessels, compared to other endoscopic enhancement techniques for 2D and 3D images.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Linden, d’Hooghvorst Rodríguez Jean Joseph van der. « Geomechanical study of the Tarfaya basin, West African coast, using 3D/2D static models and 2D evolutionary models ». Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/672449.

Texte intégral
Résumé :
This thesis uses different variants of geomechanical modelling approaches to investigate stress, strain and geometry distribution and evolution through time of the Tarfaya salt basin, located on the West African coast. This work has been conducted by geomechanically simulating a sector of the Tarfaya basin containing key features such as diapirs, faults and encasing sediments using 3D and 2D static models and 2D evolutionary models. The 3D and 2D static geomechanical models of the Tarfaya basin system made it possible to predict the stresses and strains at the present day. Both models are based on present-day basin geometries extracted from seismic data and use a poro-elastic description for the sediments based on calibrated log data and a visco-plastic description for the salt based on values from Avery Island. The models predict a significant horizontal stress reduction in the sediments located at the top of the principal salt structure, the Sandia diapir, consistent with wellbore data. However, the 2D static geomechanical model shows broader areas affected by the stress reduction compared to the 3D model and overestimates its magnitude by less than 1.5 MPa. These results highlight the possibility of using 2D static modelling as a valid approximation to the more complex and time-consuming 3D static models. A more in-depth study of the 2D static model using sensitivity analysis yielded a series of interesting observations: (1) the salt bodies and their geometry have the strongest impact on the final model results; (2) the elastic properties of the sediments do not impact the model results. In other words, the correct definition of the sediments with the highest material contrasts, such as salt, should be a priority when building static models. Such definition should be ranked ahead of the precise determination of the rheologic parameters for the sediments present in the basin. In this thesis, we also present the results of introducing an evolutionary geomechanical modelling approach to the Tarfaya basin. This study incorporates information on burial history, sea floor geometry and tectonic loads from a sequential kinematic restoration model to geologically constrain the 2D evolutionary geomechanical model. The sediments in the model follow a poro-elastoplastic description and the salt follows a visco-plastic description. The 2D evolutionary model predicts a similar Sandia diapir evolution when compared to the kinematic restoration. This proves that this approach can offer a significant advance in the study of the basin, by not only providing the stress and strain distribution and salt geometry at the present day, but also reproducing their evolution during the Tarfaya basin history. Sensitivity analysis on the evolutionary model indicates that temporal and spatial variation in sedimentation rate is a key control on the kinematic structural evolution of the salt system. The variation of sedimentation rates in the model controls whether the modelled salt body gets buried by Tertiary sediments (after a continuous growth during the Jurassic and Cretaceous periods) or is able to remain active until the present day. Also, the imposed shortening affects the final stress distribution of the sediments at the present day. To conclude, the results obtained during this study allowed us to understand the formation and evolution of the diapirs in the Tarfaya basin using carefully built geomechanical models.
The study demonstrates that carefully built 2D static models can provide information comparable to the 3D models, but without the time and computational power requirements of the 3D models. That makes the 2D approach very appropriate for the exploration stages of a particular prospect. If carefully built, such 2D models can approximate and yield useful information, even from complex 3D structures such as the Tarfaya basin salt structures. This thesis also concludes that incorporating kinematic restoration data into 2D evolutionary models provides insights into the key parameters controlling the evolution of the studied system. Furthermore, it enables more realistic geomechanical models, which, in turn, provide more insights into sediment stress and porosity.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Chang, Xianglong. « Semi-automatic fitting of deformable 3D models to 2D sketches ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/797.

Texte intégral
Résumé :
We present a novel method for building 3D models from a user sketch. Given a 2D sketch as input, the approach aligns and deforms a chosen 3D template model to match the sketch. This is guided by a set of user-specified correspondences and an algorithm that deforms the 3D model to match the sketched profile. Our primary contribution is related to fitting the 3D deformable geometry to the 2D user sketch. We demonstrate our technique on several examples.
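For illustration only, a first alignment step from user-specified correspondences could be a least-squares similarity fit in 2D (a generic Umeyama-style sketch under the assumption of paired points; it is not the deformation algorithm of the thesis, and the example points are invented):

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points; both are (N, 2) arrays of
    user-specified correspondences (e.g. projected template vs. sketch)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                       # keep a proper rotation
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Example: align 4 hypothetical template points to sketch points.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = 2.0 * src @ np.array([[0, -1], [1, 0]], float).T + np.array([3.0, 1.0])
print(fit_similarity_2d(src, dst))   # recovers scale 2, a 90-degree rotation, t = (3, 1)
```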
Styles APA, Harvard, Vancouver, ISO, etc.
27

Randell, Charles James. « 3D underwater monocular machine vision from 2D images in an attenuating medium ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ32764.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Law, Kwok-wai Albert, et 羅國偉. « 3D reconstruction of coronary artery and brain tumor from 2D medical images ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31245572.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
29

Zöllei, Lilla 1977. « 2D-3D rigid-body registration of X-ray fluoroscopy and CT images ». Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86790.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Reddy, Serendra. « Automatic 2D-to-3D conversion of single low depth-of-field images ». Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.

Texte intégral
Résumé :
This research presents a novel approach to the automatic rendering of 3D stereoscopic disparity image pairs from single 2D low depth-of-field (LDOF) images. Initially a depth map is produced through the assignment of depth to every delineated object and region in the image. Subsequently the left and right disparity images are produced through depth image-based rendering (DIBR). The objects and regions in the image are initially assigned to one of six proposed groups or labels. Labelling is performed in two stages. The first involves the delineation of the dominant object-of-interest (OOI). The second involves the global object and region grouping of the non-OOI regions. The matting of the OOI is also performed in two stages. Initially the in-focus foreground or region-of-interest (ROI) is separated from the out-of-focus background. This is achieved through the correlation of edge, gradient and higher-order statistics (HOS) saliencies. Refinement of the ROI is performed using k-means segmentation and CIEDE2000 colour-difference matching. Subsequently the OOI is extracted from within the ROI through analysis of the dominant gradients and edge saliencies together with k-means segmentation. Depth is assigned to each of the six labels by correlating Gestalt-based principles with vanishing point estimation, gradient plane approximation and depth from defocus (DfD). To minimise some of the dis-occlusions that are generated through the 3D warping sub-process within the DIBR process, the depth map is pre-smoothed using an asymmetric bilateral filter. Hole-filling of the remaining dis-occlusions is performed through nearest-neighbour horizontal interpolation, which incorporates depth as well as direction of warp. To minimise the effects of the lateral striations, specific directional Gaussian and circular averaging smoothing is applied independently to each view, with additional average filtering applied to the border transitions. Each stage of the proposed model is benchmarked against data from several significant publications. Novel contributions are made in the sub-speciality fields of ROI estimation, OOI matting, LDOF image classification, Gestalt-based region categorisation, vanishing point detection, relative depth assignment and hole-filling or inpainting. An important contribution is made towards the overall knowledge base of automatic 2D-to-3D conversion techniques, through the collation of existing information, expansion of existing methods and development of newer concepts.
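A bare-bones sketch of the DIBR idea (pixel shifting by a depth-derived disparity plus simple horizontal hole-filling) is given below. It is a simplified illustration under assumed parameters, not the thesis's pipeline: there is no asymmetric bilateral pre-smoothing, no directional post-filtering, and occlusion ordering is ignored for brevity.

```python
import numpy as np

def render_stereo_pair(image, depth, max_disparity=16):
    """Warp a single image into left/right views using a depth map in [0, 1]
    (1 = near). Dis-occlusions are filled by nearest-neighbour horizontal fill.
    Depth ordering (z-buffering) is deliberately omitted to keep the sketch short."""
    h, w = depth.shape
    disparity = (max_disparity * depth).astype(int)   # nearer pixels shift more
    views = []
    for sign in (+1, -1):                              # left (+), right (-)
        view = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                nx = x + sign * (disparity[y, x] // 2)
                if 0 <= nx < w:
                    view[y, nx] = image[y, x]
                    filled[y, nx] = True
            # Simple hole-filling: propagate the last filled pixel along the row.
            last = image[y, 0]
            for x in range(w):
                if filled[y, x]:
                    last = view[y, x]
                else:
                    view[y, x] = last
        views.append(view)
    return views[0], views[1]
```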
Styles APA, Harvard, Vancouver, ISO, etc.
31

Zhou, Dianle. « Using 3D morphable models for 3D photo-realistic personalized avatars and 2D face recognition ». Thesis, Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0017/document.

Texte intégral
Résumé :
[Not provided]
In the past decade, the 3D statistical face model (3D Morphable Model) has received much attention from both the commercial and public sectors. It can be used for face modeling for photo-realistic personalized 3D avatars and for 2D face recognition applications in biometrics. This thesis describes how to achieve an automatic 3D face reconstruction system that could be helpful for building photo-realistic personalized 3D avatars and for 2D face recognition under pose variability. The first system we propose is a Combined Active Shape Model for 2D frontal facial landmark location and its application to 2D frontal face recognition in degraded conditions. The second proposal is a 3D Active Shape Model (3D-ASM) algorithm, which is presented to automatically locate facial landmarks from different views. The third contribution is the use of biometric data (2D images and 3D scan ground truth) to quantitatively evaluate the 3D face reconstruction. Finally, we address the issue of automatic 2D face recognition across pose using the 3D Morphable Model.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Gomez, Abraham. « 2D concept to 3D game model : Production of 3D models for top down games ». Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64297.

Texte intégral
Résumé :
This work goes through the mindset and the ways of analysing the problem of turning 2D concept art into the 3D game model used in game, specifically targeted at top-down games and the effects the top-down view has on the 3D game model. Studies of principles and design concepts are used to create a 3D game character that works well for the top-down view. The method used was an experimental study, which resulted in a 3D model and an implementation of the principles and design concepts. Based on the results, it is concluded that this could be a useful tool. In the discussion a deeper analysis is conducted, and it is concluded that further research is necessary and that the purpose and questions were answered.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Chaudhary, Priyanka. « SPHEROID DETECTION IN 2D IMAGES USING CIRCULAR HOUGH TRANSFORM ». UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/9.

Texte intégral
Résumé :
The three-dimensional endothelial cell sprouting assay (3D-ECSA) exhibits differentiation of endothelial cells into sprouting structures inside a 3D matrix of collagen I. It is a screening tool to study endothelial cell behavior and to identify angiogenesis inhibitors. The shape and size of an EC spheroid (an aggregation of ~750 cells) are important with respect to its growth performance in the presence of angiogenic stimulators. Apparently, tubules formed on malformed spheroids lack homogeneity in terms of density and length. This requires segregation of well-formed spheroids from malformed ones to obtain better performance metrics. We aim to develop and validate an automated imaging software analysis tool, as part of a High-content High-throughput screening (HC-HTS) assay platform, to exploit 3D-ECSA as a differential HTS assay. We present a solution using the Circular Hough Transform to detect a nearly perfect spheroid, as per its circular shape, in a 2D image. This successfully enables us to differentiate and separate good spheroids from the malformed ones using an automated test bench.
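A minimal sketch of circular-shape detection with OpenCV's Hough transform is shown below; the parameter values (radii, thresholds, blur size) are illustrative assumptions and would need tuning to the actual spheroid images.

```python
import cv2
import numpy as np

def detect_spheroids(gray_image, min_radius=20, max_radius=200):
    """Detect near-circular objects in a 2D grayscale image with the
    Circular Hough Transform; returns an array of (x, y, radius)."""
    blurred = cv2.medianBlur(gray_image, 5)       # suppress noise before voting
    circles = cv2.HoughCircles(blurred,
                               cv2.HOUGH_GRADIENT,
                               dp=1.2,             # accumulator resolution
                               minDist=2 * min_radius,
                               param1=100,         # Canny high threshold
                               param2=30,          # accumulator vote threshold
                               minRadius=min_radius,
                               maxRadius=max_radius)
    if circles is None:
        return np.empty((0, 3), dtype=int)
    return np.round(circles[0]).astype(int)

# A well-formed spheroid would yield one strong circle whose radius matches the
# expected aggregate size; malformed spheroids typically do not.
```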
Styles APA, Harvard, Vancouver, ISO, etc.
34

Calafato, Giulia <1993>. « Cytotoxic effect of immunotoxins in 2D and 3D models of sarcoma ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10050/1/Tesi%20AMS%20Tesi%20di%20Dottorato%20Giulia%20Calafato.pdf.

Texte intégral
Résumé :
Sarcomas are one of the most common types of cancer in children and comprise more than 50 heterogeneous histotypes. The standard therapeutic regimen, which includes surgery, chemotherapy and radiotherapy, does not often prove to be decisive. To improve patient outcome, alternative therapeutic strategies are being evaluated. Among these, RIP-containing immunotoxins (ITs) represent an innovative approach because of their high tumor specificity and cytotoxicity. Recently, in order to evaluate the efficacy of new drugs and develop personalized therapeutic protocols, three-dimensional models (i.e. spheroids and organoids) are emerging as valid tools in cancer research. My PhD project aimed to evaluate the cytotoxic effect of specific ITs, Tf-SO6, αEGFR1-Ocy and αHer2-Ocy, directed against TfR1, EGFR1 and Her2, in 2D (adherent cells) and 3D models (spheroids and organoids) of sarcoma. The results obtained showed that TfR1, EGFR1 and Her2 are highly expressed in our sarcoma models, and could be a possible target for immunotherapy. All tested ITs showed high specific cytotoxicity in 2D and 3D models, with IC50 values in nM range. Caspase 3/7 are highly activated after IT treatments in all cell models, but with different timing.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Agerskov, Niels, et Gabriel Carrizo. « Application for Deriving 2D Images from 3D CT Image Data for Research Purposes ». Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190881.

Texte intégral
Résumé :
Karolinska University Hospital, Huddinge, Sweden, has long desired to plan hip prostheses with Computed Tomography (CT) scans instead of plain radiographs, to save time and reduce patient discomfort. This has not been possible previously, as their current software is limited to prosthesis planning on traditional 2D X-ray images. The purpose of this project was therefore to create an application (software) that allows medical professionals to derive a 2D image from CT images that can be used for prosthesis planning. In order to create the application, the NumPy and The Visualization Toolkit (VTK) Python code libraries were utilised and tied together with a graphical user interface library called PyQt4. The application includes a graphical interface and methods for optimizing the images for prosthesis planning. The application was finished and serves its purpose, but the quality of the images needs to be evaluated with a larger sample group.
På Karolinska universitetssjukhuset, Huddinge har man länge önskat möjligheten att utföra mallningar av höftproteser med hjälp av data från datortomografiundersökningar (DT). Detta har hittills inte varit möjligt eftersom programmet som används för mallning av höftproteser enbart accepterar traditionella slätröntgenbilder. Därför var syftet med detta projekt att skapa en mjukvaru-applikation som kan användas för att generera 2D-bilder för mallning av proteser från DT-data. För att skapa applikationen användes huvudsakligen Python-kodbiblioteken NumPy och The Visualization Toolkit (VTK) tillsammans med användargränssnittsbiblioteket PyQt4. I applikationen ingår ett grafiskt användargränssnitt och metoder för optimering av bilderna i mallningssammanhang. Applikationen fungerar men bildernas kvalitet måste utvärderas med en större urvalsgrupp.
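As a rough illustration of deriving a radiograph-like 2D image from CT data (a NumPy-only sketch of an average-intensity projection, not the VTK/PyQt4 application described above; the synthetic volume and windowing are assumptions):

```python
import numpy as np

def project_ct_volume(volume, axis=1):
    """Collapse a CT volume (z, y, x) into a 2D radiograph-like image by
    averaging attenuation along one axis, then rescale to 8-bit for display."""
    projection = volume.astype(np.float32).mean(axis=axis)
    lo, hi = projection.min(), projection.max()
    scaled = (projection - lo) / max(hi - lo, 1e-6)
    return (255 * scaled).astype(np.uint8)

# Example with a synthetic volume standing in for CT data (Hounsfield-like values).
volume = np.random.randint(-1000, 1500, size=(120, 256, 256), dtype=np.int16)
drr_like = project_ct_volume(volume, axis=1)   # coronal-style projection
print(drr_like.shape)  # (120, 256)
```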
Styles APA, Harvard, Vancouver, ISO, etc.
36

North, Peter R. J. « The reconstruction of visual appearance by combining stereo surfaces ». Thesis, University of Sussex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362837.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
37

Lumpkins, Sarah B. « Space radiation-induced bystander signaling in 2D and 3D skin tissue models ». Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70817.

Texte intégral
Résumé :
Thesis (Sc. D.)--Harvard-MIT Program in Health Sciences and Technology, 2012.
Includes bibliographical references (p. 145-156).
Space radiation poses a significant hazard to astronauts on long-duration missions, and the low fluences of charged particles characteristic of this field suggest that bystander effects, the phenomenon in which a greater number of cells exhibit damage than expected based on the number of cells traversed by radiation, could be significant contributors to overall cell damage. The purpose of this thesis was to investigate bystander effects due to signaling between different cell types cultured within 2D and 3D tissue architectures. 2D bystander signaling was investigated using a transwell insert system in which normal human fibroblasts (A) and keratinocytes (K) were irradiated with 1 GeV/n protons or iron ions at the NASA Space Radiation Laboratory using doses from either 2 Gy (protons) or 1 Gy (iron ions) down to space-relevant low fluences. Medium-mediated bystander responses were investigated using three cell signaling combinations. Bystander signaling was also investigated in a 3D model by developing tissue constructs consisting of fibroblasts embedded in a collagen matrix with a keratinocyte epidermal layer. Bystander experiments were conducted by splitting each construct in half and exposing half to radiation, then placing the other half in direct contact with the irradiated tissue on a transwell insert. Cell damage was evaluated primarily as formation of foci of the DNA repair-related protein 53BP1. In the 2D system, both protons and iron ions yielded a strong dose dependence for the induction of 53BP1 in irradiated cells, while the magnitudes and time courses of bystander responses were dependent on radiation quality. Furthermore, bystander effects were present in all three cell signaling combinations even at the low proton particle fluences used, suggesting the potential importance of including these effects in cancer risk models for low-dose space radiation exposures. Cells cultured in the 3D constructs exhibited a significant reduction in the percentages of both direct and bystander cells positive for 53BP1 foci, although the qualitative kinetics of DNA damage and repair were similar to those observed in 2D. These results provide evidence that the microenvironment significantly influences intercellular signaling and that cells may be more radioresistant in 3D compared to 2D systems.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Baudour, Alexis. « Détection de filaments dans des images 2D et 3D : modélisation, étude mathématique et algorithmes ». Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00507520.

Texte intégral
Résumé :
This thesis addresses the problem of modelling and detecting filaments in 3D images. We developed variational methods for four specific applications: road extraction, where we introduced the notion of total curvature to preserve regular networks while tolerating discontinuities in direction; the detection and completion of strongly noisy and partially occluded filaments, where we used magnetostatics and Ginzburg-Landau theory to represent filaments as the set of singularities of a vector field; the detection of filaments in biological images acquired by confocal microscopy, where the filaments are modelled by taking the specificities of this modality into account and are then obtained by a maximum a posteriori method; and the detection of targets in infrared image sequences, where we look for trajectories that optimise the average brightness difference between the trajectory and its neighbourhood while taking the sensors into account. We also proved theoretical results on total curvature and on the convergence of the Alouges method associated with Ginzburg-Landau systems. This work combines modelling, theoretical results and the search for efficient numerical algorithms capable of handling real applications.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Sintorn, Ida-Maria. « Segmentation methods and shape descriptions in digital images : applications in 2D and 3D microscopy / ». Uppsala : Centre for Image Analysis, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200520.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

BEIL, FRANK MICHAEL. « Approche structurelle de l'analyse de la texture dans les images cellulaires 2d et 3d ». Paris 7, 1999. http://www.theses.fr/1999PA077019.

Texte intégral
Résumé :
Computer analysis of microscopy images is a powerful method for studying cell structure. Defining a model suited to understanding the information contained in an image is a prerequisite for reliable automated image analysis. The notion of texture is used to describe non-figurative image phenomena. Structural texture models consider texture as sets of basic elements arranged in specific patterns. Starting from this approach, we developed a dual texture model for the analysis of cellular architectures. This model makes it possible to characterise figures corresponding to lines as well as to regions. We developed segmentation algorithms for regions and for line patterns. A database reflecting the duality of the model allows both types of texture to be analysed with the same algorithms. The methods developed in this work were applied to the analysis of cytoskeleton structure and to the analysis of nuclear chromatin texture in two-dimensional (2D) images. Our approach proved its effectiveness in characterising intermediate filament networks during foetal development of the rat liver. The appearance of chromatin is determined by the three-dimensional (3D) distribution of DNA. Analysing this process with 2D methods did not seem to us suitable for classifying cervical lesions in pathological diagnosis. We therefore extended our 2D approach to the analysis of 3D textures, acquired by confocal laser scanning microscopy. In a pilot study, the 3D texture parameters made it possible to diagnose prostate lesions very accurately.
Styles APA, Harvard, Vancouver, ISO, etc.
41

MEZERREG, MOHAMED. « Structures de donnees graphiques : contribution a la conception d'un s.g.b.d. images 2d et 3d ». Paris 7, 1990. http://www.theses.fr/1990PA077155.

Texte intégral
Résumé :
This thesis addresses the problem of structuring data related to 2D and 3D images, and of transforming them into a relational schema. After an overview of the data-structure problems raised by graphics systems, two image description models are presented: the syntactic model and the database model. To represent 2D images, the quadkey graphic data structure is proposed. Besides a very considerable saving in memory space and computation time, the quadkey has the major advantage of turning image data into relations suited to the relational database model. The quadkey is then extended to an octkey to represent 3D images. Using the octkey, two methods for reconstructing 3D objects from 2D projections are described. The first reconstructs the 3D object by intersecting the three viewing faces of the object: x-y, y-z and z-x. The second reconstructs it by merging its serial cross-sections. Finally, an image database management system (IDBMS) is presented. Based on the relational model, this IDBMS uses the quadkey and octkey graphic data structures. It is equipped with a non-procedural query language and a graphics library.
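For readers unfamiliar with the idea, a generic quadtree-key encoder is sketched below. It follows the common Bing-Maps-style digit convention, which is only an assumption about how such keys are built and is not necessarily the thesis's exact structure.

```python
def quadkey(x, y, level):
    """Encode integer cell coordinates (x, y) at a given quadtree level
    into a key string; each digit selects one of four quadrants."""
    digits = []
    for i in range(level, 0, -1):
        mask = 1 << (i - 1)
        digit = 0
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

# Keys of neighbouring cells share long prefixes, which is what makes them
# convenient to store and query as plain relational attributes.
print(quadkey(3, 5, 3))   # '213'
```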
Styles APA, Harvard, Vancouver, ISO, etc.
42

ARAÚJO, Caio Fernandes. « Segmentação de imagens 3D utilizando combinação de imagens 2D ». Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21040.

Texte intégral
Résumé :
CAPES
Segmentar imagens de maneira automática é um grande desafio. Apesar do ser humano conseguir fazer essa distinção, em muitos casos, para um computador essa divisão pode não ser tão trivial. Vários aspectos têm de ser levados em consideração, que podem incluir cor, posição, vizinhanças, textura, entre outros. Esse desafio aumenta quando se passa a utilizar imagens médicas, como as ressonâncias magnéticas, pois essas, além de possuírem diferentes formatos dos órgãos em diferentes pessoas, possuem áreas em que a variação da intensidade dos pixels se mostra bastante sutil entre os vizinhos, o que dificulta a segmentação automática. Além disso, a variação citada não permite que haja um formato pré-definido em vários casos, pois as diferenças internas nos corpos dos pacientes, especialmente os que possuem alguma patologia, podem ser grandes demais para que se haja uma generalização. Mas justamente por esse possuírem esses problemas, são os principais focos dos profissionais que analisam as imagens médicas. Este trabalho visa, portanto, contribuir para a melhoria da segmentação dessas imagens médicas. Para isso, utiliza a ideia do Bagging de gerar diferentes imagens 2D para segmentar a partir de uma única imagem 3D, e conceitos de combinação de classificadores para uni-las, para assim conseguir resultados estatisticamente melhores, se comparados aos métodos populares de segmentação. Para se verificar a eficácia do método proposto, a segmentação das imagens foi feita utilizando quatro técnicas de segmentação diferentes, e seus resultados combinados. As técnicas escolhidas foram: binarização pelo método de Otsu, o K-Means, rede neural SOM e o modelo estatístico GMM. As imagens utilizadas nos experimentos foram imagens reais, de ressonâncias magnéticas do cérebro, e o intuito do trabalho foi segmentar a matéria cinza do cérebro. As imagens foram todas em 3D, e as segmentações foram feitas em fatias 2D da imagem original, que antes passa por uma fase de pré-processamento, onde há a extração do cérebro do crânio. Os resultados obtidos mostram que o método proposto se mostrou bem sucedido, uma vez que, em todas as técnicas utilizadas, houve uma melhoria na taxa de acerto da segmentação, comprovada através do teste estatístico T-Teste. Assim, o trabalho mostra que utilizar os princípios de combinação de classificadores em segmentações de imagens médicas pode apresentar resultados melhores.
Automatic image segmentation is still a great challenge today. Although a human is able to make this distinction, in most cases easily and quickly, for a computer this task may not be so trivial. Several characteristics have to be taken into account by the computer, which may include color, position, neighborhoods and texture, among others. This challenge increases greatly when it comes to medical images, such as MRI, since these, besides depicting organs with different shapes in different people, have regions where the intensity variation between neighboring pixels is subtle, which complicates automatic segmentation even more. Furthermore, the above-mentioned variation does not allow a pre-defined shape in various cases, because the internal differences between patients' bodies, especially those with a pathology, may be too large to allow a generalization. But precisely because they present this kind of problem, these cases are the main targets of the professionals who analyze medical images. This work therefore tries to contribute to the segmentation of medical images. For this, it uses the idea of Bagging to generate different 2D images from a single 3D image, and the combination of classifiers to unite them, in order to achieve statistically better results compared to popular segmentation methods. To verify the effectiveness of the proposed method, the segmentation of the images is performed using four different segmentation techniques, and their results are combined. The chosen techniques are binarization by the Otsu method, K-Means, the SOM neural network and the statistical model GMM. The images used in the experiments were real brain MRI, and the dissertation's objective is to segment the gray matter (GM) of the brain. The images are all 3D, and the segmentations are made using 2D slices of the original image, which first passes through a preprocessing stage where the brain is extracted from the skull. The results show that the proposed method is successful, since, in all the applied techniques, there is an improvement in the accuracy rate, confirmed by a statistical t-test. Thus, the work shows that using the principles of combination of classifiers in medical image segmentation can yield better results.
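A stripped-down sketch of the combination idea is given below: per-slice segmentations fused by majority vote, with Otsu, k-means and a Gaussian mixture standing in for the four techniques. It is an illustrative assumption of the pipeline only; there is no skull stripping and no SOM.

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def binary_from_labels(values, labels):
    """Turn a 2-cluster labelling into a mask where 1 = brighter cluster."""
    means = [values[labels == k].mean() if np.any(labels == k) else -np.inf
             for k in (0, 1)]
    return (labels == int(np.argmax(means))).astype(np.uint8)

def segment_slice(slice_2d):
    flat = slice_2d.reshape(-1, 1).astype(float)
    otsu_mask = (slice_2d > threshold_otsu(slice_2d)).astype(np.uint8)
    km_mask = binary_from_labels(flat.ravel(),
                                 KMeans(n_clusters=2, n_init=10).fit_predict(flat))
    gmm_mask = binary_from_labels(flat.ravel(),
                                  GaussianMixture(n_components=2).fit_predict(flat))
    votes = (otsu_mask + km_mask.reshape(slice_2d.shape)
             + gmm_mask.reshape(slice_2d.shape))
    return (votes >= 2).astype(np.uint8)      # majority vote of the 3 methods

def segment_volume(volume_3d):
    """Apply the combined 2D segmentation slice by slice and restack to 3D."""
    return np.stack([segment_slice(s) for s in volume_3d], axis=0)
```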
Styles APA, Harvard, Vancouver, ISO, etc.
43

Lubniewski, Pawel. « Recalage 3D/2D d'images pour le traitement endovasculaire des dissections aortiques ». Thesis, Clermont-Ferrand 1, 2014. http://www.theses.fr/2014CLF1MM24/document.

Texte intégral
Résumé :
Nous présentons dans cette étude nos travaux concernant le recalage 3D/2D d'images de dissection aortique. Son but est de proposer une visualisation de données médicales qui pourra servir dans le contexte de l'assistance peropératoire durant les procédures endovasculaires. Pour effectuer cette tâche, nous avons proposé un modèle paramétrique de l'aorte, appelé enveloppe tubulaire. Il sert à exprimer la forme globale et les déformations de l'aorte, à l'aide d'un nombre minimal de paramètres. L'enveloppe tubulaire est utilisée par les algorithmes de recalage proposés dans cette étude. Notre méthode originale consiste à proposer un recalage par calcul direct de la transformation entre images 2D, i.e. sans processus d'optimisation ; elle est appelée recalage par ITD. Les descripteurs, que nous avons définis pour le cas des images d'aorte, permettent de trouver rapidement un alignement grossier des données. Nous proposons également l'extension de notre approche pour la mise en correspondance des images 3D et 2D. La chaîne complète du recalage 3D/2D, que nous présentons dans ce document, est composée de la technique ITD et de méthodes précises iconiques et hybrides. L'intégration de notre algorithme basé sur les descripteurs en tant qu'étape d'initialisation réduit le temps de calcul nécessaire et augmente l'efficacité du recalage, par rapport aux approches classiques. Nous avons testé nos méthodes avec des images médicales issues de patients traités par procédures endovasculaires. Les résultats ont été vérifiés par les spécialistes cliniques et ont été jugés satisfaisants ; notre chaîne de recalage pourrait ainsi être exploitée dans les salles d'intervention à l'avenir.
In this study, we present our work on 3D/2D image registration for aortic dissection. Its aim is to propose a visualization of medical data which can be used by physicians during endovascular procedures. For this purpose, we have proposed a parametric model of the aorta, called a Tubular Envelope. It is used to express the global shape and deformations of the aorta with a minimal number of parameters. The tubular envelope is used in our image registration algorithms. Registration by ITD (Image Transformation Descriptors) is our original method of image alignment: it computes the rigid 2D transformation between data sets directly, without any optimization process. We provide the definition of this method, as well as several descriptor formulae, for images of the aorta. The technique allows us to quickly find a coarse alignment between the data. We also propose an extension of the original approach for the registration of 3D and 2D images. The complete chain of 3D/2D image registration techniques proposed in this document consists of the ITD stage, followed by an intensity-based hybrid method. The use of our 3D/2D algorithm, based on the image transformation descriptors, as an initialization phase reduces the computing time and improves the efficiency of the presented approach. We have tested our registration methods on the medical images of several patients after endovascular treatment. The results have been approved by our clinical specialists, and our approach may appear in intervention rooms in the future.
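As a hedged illustration of the general idea of computing a coarse rigid 2D alignment directly from image statistics rather than by iterative optimisation (this uses plain image moments and is not the ITD descriptors defined in the thesis):

```python
import numpy as np

def pose_from_moments(mask):
    """Centroid and principal-axis orientation of a binary image."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    x, y = xs - cx, ys - cy
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return np.array([cx, cy]), theta

def direct_rigid_transform(mask_src, mask_dst):
    """Rigid 2D transform (rotation + translation) aligning src onto dst,
    computed in closed form from the two images' moments.
    (The principal axis is defined up to 180 degrees; a refinement step
    would be needed to resolve this ambiguity.)"""
    c_src, th_src = pose_from_moments(mask_src)
    c_dst, th_dst = pose_from_moments(mask_dst)
    dtheta = th_dst - th_src
    R = np.array([[np.cos(dtheta), -np.sin(dtheta)],
                  [np.sin(dtheta),  np.cos(dtheta)]])
    t = c_dst - R @ c_src
    return R, t
```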
Styles APA, Harvard, Vancouver, ISO, etc.
44

ZAPPINO, ENRICO. « Variable kinematic 1D, 2D and 3D Models for the Analysis of Aerospace Structures ». Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2573739.

Texte intégral
Résumé :
Aerospace structure design is one of the most challenging fields in mechanical engineering. The advanced structural configurations, introduced to satisfy the weight and strength requirements, require advanced analysis techniques able to predict complex physical phenomena. The Finite Element Method (FEM) is one of the most widely used approaches to perform analyses of complex structures. The use of the FEM allows the classical structural models to be used to investigate complex structures where a closed-form solution is not available. The FEM formulation can easily be implemented in automatic calculation routines, therefore this approach can take advantage of the improvements of computers. In the last fifty years many commercial codes based on the FEM have been developed and commercialized; as examples it is possible to refer to Nastran® by MSC or Abaqus® by Dassault Systèmes. All the commercial codes are based on classical structural models. The beam models are based on the Euler-Bernoulli or Timoshenko theories, while two-dimensional models deal with the Kirchhoff or Mindlin theories. The limitations introduced by the kinematic assumptions of such theories make the FEM elements based on these models ineffective in the analysis of advanced structures. The physical phenomena introduced by composite and smart materials, multi-field applications and unconventional load configurations cannot be investigated using the classical FEM models, where the only solution improvement can be reached by refining the mesh and increasing the number of degrees of freedom. This scenario makes the development of advanced structural models very attractive in structural engineering. With the development of new materials and structural solutions, a number of new structural models have been introduced in order to perform an accurate design of advanced structures. Classical structural models have been improved by introducing more refined kinematic formulations. One- and two-dimensional models are widely used in aerospace structure design; the limitations introduced by the classical models have been overcome by introducing refined kinematic formulations able to deal with the complexities of the problems. On the other hand, while in the classical models each point is characterized by 3 translations and 3 rotations, the use of advanced models with complex kinematics introduces a number of complications in the analysis of complex geometries; in fact it is much more difficult to combine models with different kinematics. The aim of this thesis is to develop new approaches that allow different kinematic models to be used in the same structural analysis. The advanced models used in the present thesis have been derived using the Carrera Unified Formulation (CUF). The CUF allows any structural model to be derived by means of a general formulation independent of the kinematics assumed by the theory. One-, two- and three-dimensional models are derived using the same approach. These models are then combined using different techniques in order to perform structural analysis of complex structures. The results show the capability of the present approach to deal with the analysis of typical complex aerospace structures. The performance of variable-kinematics models has been investigated and many assessments have been proposed. Thin-walled structures, reinforced structures, and composite and sandwich materials have been considered.
The advanced models introduced in this thesis have been used to perform static, dynamic and aeroelastic analyses in order to highlight the capabilities of the approach in different fields. The results show that the present models are able to provide accurate results with a strong reduction in the computational cost with respect to classical approaches.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Grandi, Jerônimo Gustavo. « Multidimensional similarity search for 2D-3D medical data correlation and fusion ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/104133.

Texte intégral
Résumé :
Imagens da anatomia interna são essenciais para as práticas médicas. Estabelecer correlação entre elas, é um importante procedimento para diagnóstico e tratamento. Nessa dissertação, é proposta uma abordagem para correlacionar dados multidimensionais de mesma modalidade de aquisição baseando-se somente nas informações de intensidade de pixels e voxels. O trabalho foi dividido em duas fases de implementação. Na primeira, foi explorado o problema de similaridade entre imagens médicas usando a perspectiva de análise de qualidade de imagem. Isso levou ao desenvolvimento de uma técnica de dois passos que estabelece um equilíbrio entre a velocidade de processamento e precisão de duas abordagens conhecidas. Avaliou-se a qualidade e aplicabilidade do algoritmo e, na segunda fase, o método foi estendido para analisar similaridade e encontrar a localização de uma imagem arbitrária (2D) em um volume (3D). A solução minimiza o número virtualmente infinito de possíveis orientações transversais e usa otimizações para reduzir a carga de trabalho e entregar resultados precisos. Uma visualização tridimensional volumétrica funde o volume (3D) com a imagem (2D) estabelecendo uma correspondência entre os dados. Uma análise experimental demonstrou que, apesar da complexidade computacional do algoritmo, o uso de amostragem, tanto na imagem quanto no volume, permite alcançar um bom equilíbrio entre desempenho e precisão, mesmo quando realizada com conjuntos de dados de baixa intensidade de gradiente.
Images of the inner anatomy are essential for clinical practice. To establish a correlation between them is an important procedure for diagnosis and treatment. In this thesis, we propose an approach to correlate within-modality 2D and 3D data from ordinary acquisition protocols based solely on the pixel/voxel information. The work was divided into two development phases. First, we explored the similarity problem between medical images using the perspective of image quality assessment. It led to the development of a 2-step technique that settles the compromise between processing speed and precision of two known approaches. We evaluated the quality and applicability of the 2-step and, in the second phase, we extended the method to use similarity analysis to, given an arbitrary slice image (2D), find the location of this slice within the volume data (3D). The solution minimizes the virtually infinite number of possible cross section orientations and uses optimizations to reduce the computational workload and output accurate results. The matching is displayed in a volumetric three-dimensional visualization fusing the 3D with the 2D. An experimental analysis demonstrated that despite the computational complexity of the algorithm, the use of severe data sampling allows achieving a great compromise between performance and accuracy even when performed with low gradient intensity datasets.
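A simplified sketch of the slice-in-volume search is given below. It is restricted to axis-aligned slices and normalised cross-correlation, unlike the arbitrary cross-section orientations handled in the thesis, and the subsampling step is only an assumed stand-in for the data-sampling trade-off discussed above.

```python
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def find_best_axial_slice(volume, query_slice, step=2):
    """Return the index of the axial slice most similar to the query image
    (query_slice must have the same in-plane shape as the volume slices)."""
    best_idx, best_score = -1, -np.inf
    for z in range(0, volume.shape[0], step):
        score = normalized_correlation(volume[z].astype(float),
                                       query_slice.astype(float))
        if score > best_score:
            best_idx, best_score = z, score
    return best_idx, best_score
```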
Styles APA, Harvard, Vancouver, ISO, etc.
46

Kang, Xin, et 康欣. « Feature-based 2D-3D registration and 3D reconstruction from a limited number of images via statistical inference for image-guidedinterventions ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48079625.

Texte intégral
Résumé :
Traditional open interventions have been progressively replaced with minimally invasive techniques. Most notably, direct visual feedback is transitioned into indirect, image-based feedback, leading to the wide use of image-guided interventions (IGIs). One essential process of all IGIs is to align some 3D data with 2D images of the patient through a procedure called 3D-2D registration during interventions, to provide better guidance and richer information. When the 3D data is unavailable, a realistic 3D patient-specific model needs to be constructed from a few 2D images. The dominating methods that use only image intensity have a narrow convergence range and are not robust to foreign objects presented in 2D images but not existing in the 3D data. Feature-based methods partly address these problems, but most of them rely heavily on a set of "best" paired correspondences and require clean image features. Moreover, the optimization procedures used in both kinds of methods are not efficient. In this dissertation, two topics have been studied and novel algorithms proposed, namely, contour extraction from X-ray images and feature-based rigid/deformable 3D-2D registration. Inspired by biological and neuropsychological characteristics of the primary visual cortex (V1), a contour detector is proposed for simultaneously extracting edges and lines in images. The synergy of V1 neurons is mimicked using phase congruency and tensor voting. Evaluations and comparisons showed that the proposed method outperformed several commonly used methods and that the results are consistent with human perception. Moreover, the cumbersome "fine-tuning" of parameter values is not always necessary in the proposed method. An extensible feature-based 3D-2D registration framework is proposed by rigorously formulating the registration as a probability density estimation problem and solving it via a generalized expectation maximization algorithm. It optimizes the transformation directly and treats correspondences as nuisance parameters. This is significantly different from almost all feature-based methods in the literature, which first single out a set of "best" correspondences and then estimate a transformation associated with it. This property makes the proposed algorithm not rely on paired correspondences and thus inherently robust to outliers. The framework can be adapted as a point-based method with the major advantages of 1) independence from paired correspondences, 2) accurate registration using a single image, and 3) robustness to the initialization and a large amount of outliers. Extended to a contour-based method, it differs from other contour-based methods mainly in that 1) it does not rely on correspondences and 2) it incorporates gradient information via a statistical model instead of a weighting function. Turning to model-based deformable registration and surface reconstruction, our method solves the problem using maximum penalized likelihood estimation. Unlike almost all other methods, which handle the registration and deformation separately and optimize them sequentially, our method optimizes them simultaneously. The framework was evaluated in two example clinical applications and a simulation study for point-based, contour-based and surface reconstruction, respectively. Experiments showed its sub-degree and sub-millimeter registration accuracy and superiority to the state-of-the-art methods. It is expected that our algorithms, when thoroughly validated, can be used as valuable tools for image-guided interventions.
Orthopaedics and Traumatology
Doctoral
Doctor of Philosophy
Styles APA, Harvard, Vancouver, ISO, etc.
47

Boui, Marouane. « Détection et suivi de personnes par vision omnidirectionnelle : approche 2D et 3D ». Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE009/document.

Texte intégral
Résumé :
Dans cette thèse, nous traiterons du problème de la détection et du suivi 3D de personnes dans des séquences d'images omnidirectionnelles, dans le but de réaliser des applications permettant l'estimation de pose 3D. Ceci nécessite, la mise en place d'un suivi stable et précis de la personne dans un environnement réel. Dans le cadre de cette étude, on utilisera une caméra catadioptrique composée d'un miroir sphérique et d'une caméra perspective. Ce type de capteur est couramment utilisé dans la vision par ordinateur et la robotique. Son principal avantage est son large champ de vision qui lui permet d'acquérir une vue à 360 degrés de la scène avec un seul capteur et en une seule image. Cependant, ce capteur va engendrer des distorsions importantes dans les images, ne permettant pas une application directe des méthodes classiquement utilisées en vision perspective. Cette thèse traite de deux approches de suivi développées durant cette thèse, qui permettent de tenir compte de ces distorsions. Elles illustrent le cheminement suivi par nos travaux, nous permettant de passer de la détection de personne à l'estimation 3D de sa pose. La première étape de nos travaux a consisté à mettre en place un algorithme de détection de personnes dans les images omnidirectionnelles. Nous avons proposé d'étendre l'approche conventionnelle pour la détection humaine en image perspective, basée sur l'Histogramme Orientés du Gradient (HOG), pour l'adapter à des images sphériques. Notre approche utilise les variétés riemanniennes afin d'adapter le calcul du gradient dans le cas des images omnidirectionnelles. Elle utilise aussi le gradient sphérique pour le cas les images sphériques afin de générer notre descripteur d'image omnidirectionnelle. Par la suite, nous nous sommes concentrés sur la mise en place d'un système de suivi 3D de personnes avec des caméras omnidirectionnelles. Nous avons fait le choix de faire du suivi 3D basé sur un modèle de la personne avec 30 degrés de liberté car nous nous sommes imposés comme contrainte l'utilisation d'une seule caméra catadioptrique
In this thesis we handle the problem of 3D people detection and tracking in omnidirectional image sequences, with the aim of enabling applications such as 3D pose estimation. This requires stable and accurate tracking of the person in a real environment. In order to achieve this, we use a catadioptric camera composed of a spherical mirror and a perspective camera. This type of sensor is commonly used in computer vision and robotics. Its main advantage is its wide field of vision, which allows it to acquire a 360-degree view of the scene with a single sensor and in a single image. However, this kind of sensor generally introduces significant distortions in the images, which prevents a direct application of the methods conventionally used in perspective vision. Our thesis describes two tracking approaches that take these distortions into account. These methods show the progress of our work during these three years, allowing us to move from person detection to the 3D estimation of the person's pose. The first step of this work consisted in setting up a person detection algorithm for omnidirectional images. We proposed to extend the conventional approach for human detection in perspective images, based on the Histogram of Oriented Gradients (HOG), in order to adjust it to spherical images. Our approach uses Riemannian manifolds to adapt the gradient calculation for omnidirectional images, as well as the spherical gradient for spherical images, to generate our omnidirectional image descriptor. We then focused on setting up a 3D people-tracking system with omnidirectional cameras, choosing to perform 3D tracking based on a person model with 30 degrees of freedom, since we imposed the constraint of using a single catadioptric camera.
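For reference, the conventional perspective-image HOG pedestrian detector that such spherical extensions start from can be run with OpenCV as sketched below (a baseline sketch only, with assumed window parameters; it does not handle the distortions of catadioptric images):

```python
import cv2

# Standard HOG + linear SVM people detector for perspective images.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(bgr_image):
    """Return bounding boxes (x, y, w, h) of detected people."""
    rects, weights = hog.detectMultiScale(bgr_image,
                                          winStride=(8, 8),
                                          padding=(8, 8),
                                          scale=1.05)
    return [tuple(r) for r in rects]

# On omnidirectional images this sliding-window scheme breaks down, which is
# why the gradients are recomputed on the sphere (Riemannian formulation).
```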
Styles APA, Harvard, Vancouver, ISO, etc.
48

Kim, Jae-Seung. « Objective image quality assessment for positron emission tomography : planar (2D) and volumetric (3D) human and model observer studies / ». Thesis, Connect to this title online ; UW restricted, 2003. http://hdl.handle.net/1773/5836.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Chen, Chun-Wei, et 陳俊維. « 3D Active Appearance Models for Aligning Faces in 2D Images ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/60979024048802680992.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 96 (2007-2008)
Perceiving human faces is one of the most important functions for human-robot interaction. The active appearance model (AAM) is a statistical approach that models the shape and texture of a target object. According to a number of existing works, AAM has had great success in modeling human faces. Unfortunately, the traditional AAM framework can fail when the face pose changes, as only 2D information is used to model a 3D object. To overcome this limitation, we propose a 3D AAM framework in which a 3D shape model and an appearance model are used to model human faces. Instead of choosing a proper weighting constant to balance the contributions from appearance similarity and the constraint that the 2D shape be consistent with the 3D shape, as in existing work, our approach directly matches 2D visual faces with the 3D shape model. No balancing weight between the 2D shape and the 3D shape is needed. In addition, only frontal faces are needed for training, and non-frontal faces can be aligned successfully. The experimental results with 20 subjects demonstrate the effectiveness of the proposed approach.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Cheng, Yong-Qing. « Acquisition of 3D models from a set of 2D images ». 1997. https://scholarworks.umass.edu/dissertations/AAI9809318.

Texte intégral
Résumé :
The acquisition of accurate 3D models from a set of images is an important and difficult problem in computer vision. The general problems considered in this thesis are how to compute the camera parameters and build 3D models given a set of 2D images. The first set of algorithms presented in this thesis deal with the problem of camera calibration in which some or all of the camera parameters must be determined. A new analytical technique is derived to find relative camera poses for three images, given only calibrated 2D image line correspondences across three images. Then, a general non-linear algorithm is developed to estimate relative camera poses over a set of images. Finally, the presented algorithms are extended to simultaneously compute the intrinsic camera parameters and relative camera poses from 2D image line correspondences over multiple uncalibrated images. To reconstruct and refine 3D lines of the models, a multi-image and multi-line triangulation method using known correspondences is presented. A novel non-iterative line reconstruction algorithm is proposed. Then, a robust algorithm is presented to simultaneously estimate a model consisting of a set of 3D lines while satisfying object-level constraints such as angular, coplanar, and other geometric 3D constraints. Finally, to make the proposed approach widely applicable, an integrated approach to matching and triangulation from noisy 2D image points across two images is first presented by introducing an affinity measure between image point features, based on their distance from a hypothetical projected 3D pseudo-intersection point. A similar approach to matching and triangulation from noisy 2D image line segments across three images is proposed by introducing an affinity measure among 2D image line segments via a 3D pseudo-intersection line.
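As an aside, the point-triangulation building block referred to above can be illustrated with a two-view linear triangulation (a generic OpenCV sketch with made-up projection matrices and a made-up 3D point, not the multi-image line-reconstruction method of the dissertation):

```python
import numpy as np
import cv2

# Hypothetical camera projection matrices (K [R | t]) for two views.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # baseline along x

def triangulate(pts1, pts2):
    """Triangulate matched 2D points (2 x N arrays) from two views into 3D."""
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4 x N
    return (X_h[:3] / X_h[3]).T                       # N x 3 Euclidean points

# Example: project a known 3D point into both views, then recover it.
X = np.array([[0.2], [0.1], [2.0], [1.0]])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(x1, x2))   # approximately [[0.2, 0.1, 2.0]]
```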
Styles APA, Harvard, Vancouver, ISO, etc.