Dissertations / Theses on the topic 'Images 2D'

Consult the top 50 dissertations / theses for your research on the topic 'Images 2D.'

1

Truong, Michael Vi Nguyen. "2D-3D registration of cardiac images." Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/2d3d-registration-of-cardiac-images(afef93e6-228c-4bc7-aab0-94f1e1ecf006).html.

Full text
Abstract:
This thesis describes two novel catheter-based 2D-3D cardiac image registration algorithms for overlaying preoperative 3D MR or CT data onto intraoperative fluoroscopy, and fusing electroanatomical data onto clinical images. The work is intended for use in cardiac catheterisation procedures. To fulfil this objective, the algorithms must be accurate, robust and minimally disruptive to the clinical workflow. The first algorithm relies on the catheterisation of vessels of the heart and registers by minimising a vessel-radius-weighted distance between the catheters and corresponding vessel centrelines. A novelty here is a global-fit search strategy that considers all vessel branches during registration, adding robustness and avoiding manual branch selection. Another contribution to knowledge is an analysis of catheter configurations for registration. Results show that accuracy is highly dependent on the catheter configuration, and that using a coronary vessel (CV) with the aorta (Ao) was most accurate, yielding mean 3D target registration errors (TRE) between 0.55 and 7.0 mm with phantom data. Using two large-diameter vessels was least accurate, with TRE between 10 and 43 mm, and should be avoided. When applied to clinical data, registrations with the CV/Ao configuration resulted in an estimated mean 2D-TRE of 5.9 mm. The second 2D-3D registration algorithm extends the novelty of exploring catheter configurations by registering using catheters looped inside chambers of the heart. In phantom experiments, two-view registration yielded an average accuracy of 4.0 mm 3D-TRE (7.8-mm capture range). Using a single view, the average reprojection distance was 2.7 mm (6.0-mm capture range). Application of the algorithm to a clinical dataset resulted in an estimated average 2D-TRE of 10 mm. Single-view registrations are ideal when biplane X-ray acquisition is undesirable and for correcting bulk patient motion. In current practice, registration is performed manually. The algorithms in this thesis can register with comparable accuracy to manual registration, but are automated and can therefore fit better with the clinical workflow.
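As an illustration of the kind of cost such a catheter-based registration can minimise, the sketch below rigidly transforms a 3D vessel centreline, projects it into the image plane and penalises the radius-weighted distance to 2D catheter points. The pinhole model, the 1/radius weighting and the toy data are assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

def project(points_3d, focal=1000.0, centre=(256.0, 256.0)):
    """Pinhole projection of Nx3 points (z > 0) onto a 2D image plane."""
    return np.stack([focal * points_3d[:, 0] / points_3d[:, 2] + centre[0],
                     focal * points_3d[:, 1] / points_3d[:, 2] + centre[1]], axis=1)

def weighted_cost(params, centreline_3d, radii, catheter_2d):
    """Radius-weighted distance between the projected centreline and 2D catheter points."""
    rot, t = R.from_rotvec(params[:3]), params[3:]
    projected = project(rot.apply(centreline_3d) + t)
    d = np.linalg.norm(catheter_2d[:, None, :] - projected[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                      # closest centreline point per catheter point
    weights = 1.0 / radii[nearest]                  # assumed weighting: narrow vessels constrain more
    return float(np.sum(weights * d[np.arange(len(catheter_2d)), nearest] ** 2))

# Toy data: a random 3D "centreline" with radii, and catheter points from a shifted copy.
rng = np.random.default_rng(0)
centreline = np.column_stack([rng.uniform(-20, 20, 200),
                              rng.uniform(-20, 20, 200),
                              rng.uniform(80, 120, 200)])
radii = rng.uniform(1.0, 5.0, 200)
catheter = project(centreline[::10] + np.array([2.0, -3.0, 5.0]))

result = minimize(weighted_cost, x0=np.zeros(6), args=(centreline, radii, catheter),
                  method="Nelder-Mead")
print(result.x)   # recovered rotation vector and translation
```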
2

Jones, Jonathan-Lee. "2D and 3D segmentation of medical images." Thesis, Swansea University, 2015. https://cronfa.swan.ac.uk/Record/cronfa42504.

Full text
Abstract:
Cardiovascular disease is one of the leading causes of morbidity and mortality in the western world today. Many different imaging modalities are in use today to diagnose and investigate cardiovascular diseases. Each of these, however, has strengths and weaknesses. There are different forms of noise and artifacts in each imaging modality, which combine to make the field of medical image analysis both important and challenging. The aim of this thesis is to develop a reliable method for the segmentation of vessel structures in medical imaging, incorporating the expert knowledge of the user in such a way as to maintain efficiency whilst overcoming the inherent noise and artifacts present in the images. We present results from 2D segmentation techniques using different methodologies, before developing 3D techniques for segmenting vessel shape from a series of images. The main drive of the work involves the investigation of medical images obtained using catheter-based techniques, namely Intra-Vascular Ultrasound (IVUS) and Optical Coherence Tomography (OCT). We present a robust segmentation paradigm, combining both edge and region information to segment the media-adventitia and luminal borders in those modalities respectively, using a semi-interactive method that utilizes "soft" constraints, allowing imprecise user input and providing a balance between exploiting the user's expert knowledge and efficiency. In the later part of the work, we develop automatic methods for segmenting the walls of lymph vessels. These methods are employed on sequential images in order to obtain data to reconstruct the vessel walls in the region of the lymph valves. We investigated methods to segment the vessel walls both individually and simultaneously, and compared the results both quantitatively and qualitatively in order to obtain the most appropriate method for the 3D reconstruction of the vessel wall. Lastly, we adapt the semi-interactive method used on vessels earlier to 3D to help segment the lymph valve. This involves using the interactive method to provide guidance for segmenting the boundary of the lymph vessel, and then applying a minimal surface segmentation methodology to segment the valve.
3

Guarnera, Giuseppe Claudio. "Shape Modeling and Description from 2D Images." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1365.

Full text
Abstract:
The ability of humans and animals to see is the result of a complex interaction of light with the eyes and the brain. We are not aware of how extremely complex the analysis of object shapes performed by our brain is, because it is carried out mostly at a subconscious level, without requiring the intervention of higher cognitive levels. Therefore, although "seeing and understanding" seems simple and natural, building a versatile and robust Computer Vision system is a difficult task. In the computer era, the attempt to imitate the human ability to understand shapes gave rise to the fields of Computer Vision and Pattern Recognition, motivated by important applications in several domains. Consistently with this great variety of applications, there is a wide spectrum of possible "eyes" that allow a computer to "see", very different from the human eye (tomography equipment, ultrasonic sensors, etc.). The predominant role of object shape, with respect to other visual features, is emphasised throughout this dissertation, showing how this feature can be used to solve a wide range of open problems in Computer Vision, Pattern Recognition and Computer Graphics. In almost all the cases analysed, the input data consist of two-dimensional images, demonstrating that such images contain a sufficient amount of information about the shape of the depicted objects. The devices with which the images were acquired range from digital still cameras to Magnetic Resonance scanners, with considerable differences both in the technologies and in the quality of the produced images. Starting from such input data, this dissertation shows how to accurately model the 3D surface of an object from an analysis of the polarisation of reflected light, or how to parameterise shapes using state-of-the-art shape descriptors based on statistical properties of object classes, or simply on individual 3D surfaces or object contours.
4

Sdiri, Bilel. "2D/3D Endoscopic image enhancement and analysis for video guided surgery." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.

Full text
Abstract:
Minimally invasive surgery has made remarkable progress in the last decades and has become a very popular diagnosis and treatment tool, especially with the rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Due to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and many artifacts such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have raised the question of endoscopic image quality, which needs to be enhanced. The enhancement aims either to provide surgeons with better visual feedback or to improve the outcome of subsequent tasks such as feature extraction for 3D organ reconstruction and registration, which serves the planning of certain surgical operations. This thesis addresses the problem of endoscopic image quality enhancement by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e. 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastro-intestinal tract disease diagnosis, we propose a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy that improves both local and global contrast. The proposed method exposes subtle inner structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we also propose an adaptive enhancement technique for stereo endoscopic images combining depth and edge information. The adaptability of the proposed method consists in adjusting the enhancement to both the local image activity and the depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality, with both expert and non-expert observers, whose scores demonstrated the efficiency of our 3D contrast enhancement technique. In the same scope, another stereo endoscopic image enhancement method works in the wavelet domain, which allows multiscale and selective processing thanks to its efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit inter-view redundancies together with perceptual properties of the human visual system related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting the image illumination in both dark and saturated regions and in emphasizing local image details such as fine veins and micro vessels, compared with other endoscopic enhancement techniques for 2D and 3D images.
5

Meng, Ting, and Yating Yu. "Deconvolution algorithms of 2D Transmission Electron Microscopy images." Thesis, KTH, Optimeringslära och systemteori, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-110096.

Full text
Abstract:
The purpose of this thesis is to develop a mathematical approach and associated software implementation for the deconvolution of two-dimensional Transmission Electron Microscope (TEM) images. The focus is on TEM images of weakly scattering amorphous biological specimens that mainly produce phase contrast. The deconvolution is to remove the distortions introduced by the TEM detector, which are modeled by the Modulation Transfer Function (MTF). The report tests deconvolution of the TEM detector MTF by Wiener filtering and Tikhonov regularization on a range of simulated TEM images with varying degrees of noise. The performance of the two deconvolution methods is quantified by means of figures of merit (FOMs), and the comparison between methods is based on statistical analysis of the FOMs.
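A minimal sketch of Wiener-style deconvolution of a detector MTF in the Fourier domain is given below. The Gaussian MTF, the damping constant and the synthetic image are stand-ins rather than the thesis's simulated TEM data; a Tikhonov variant would replace the constant damping term with a penalty on a chosen regularisation operator.

```python
import numpy as np

def wiener_deconvolve(image, mtf, k=1e-3):
    """Divide out the detector MTF in the Fourier domain with Wiener-style damping.

    image : detected 2D image
    mtf   : detector MTF sampled on the same frequency grid (DC at index [0, 0])
    k     : damping constant (noise-to-signal power ratio)
    """
    G = np.fft.fft2(image)
    restored = np.conj(mtf) * G / (np.abs(mtf) ** 2 + k)
    return np.real(np.fft.ifft2(restored))

# Stand-in Gaussian MTF; a real detector MTF would be measured or modelled separately.
ny, nx = 256, 256
fy, fx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(0)
truth = rng.random((ny, nx))
detected = np.real(np.fft.ifft2(np.fft.fft2(truth) * mtf)) + 0.01 * rng.standard_normal((ny, nx))
restored = wiener_deconvolve(detected, mtf, k=1e-3)
```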
6

Huang, Hui. "Efficient reconstruction of 2D images and 3D surfaces." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2821.

Full text
Abstract:
The goal of this thesis is to gain a deep understanding of inverse problems arising from 2D image and 3D surface reconstruction, and to design effective techniques for solving them. Both computational and theoretical issues are studied and efficient numerical algorithms are proposed. The first part of this thesis is concerned with the recovery of 2D images, e.g., de-noising and de-blurring. We first consider implicit methods that involve solving linear systems at each iteration. An adaptive Huber regularization functional is used to select the most reasonable model and a global convergence result for lagged diffusivity is proved. Two mechanisms---multilevel continuation and multigrid preconditioning---are proposed to improve efficiency for large-scale problems. Next, explicit methods involving the construction of an artificial time-dependent differential equation model followed by forward Euler discretization are analyzed. A rapid, adaptive scheme is then proposed, and additional hybrid algorithms are designed to improve the quality of such processes. We also devise methods for more challenging cases, such as recapturing texture from a noisy input and de-blurring an image in the presence of significant noise. It is well-known that extending image processing methods to 3D triangular surface meshes is far from trivial or automatic. In the second part of this thesis we discuss techniques for faithfully reconstructing such surface models with different features. Some models contain a lot of small yet visually meaningful details, and typically require very fine meshes to represent them well; others consist of large flat regions, long sharp edges (creases) and distinct corners, and the meshes required for their representation can often be much coarser. All of these models may be sampled very irregularly. For models of the first class, we methodically develop a fast multiscale anisotropic Laplacian (MSAL) smoothing algorithm. To reconstruct a piecewise smooth CAD-like model in the second class, we design an efficient hybrid algorithm based on specific vertex classification, which combines K-means clustering and geometric a priori information. Hence, we have a set of algorithms that efficiently handle smoothing and regularization of meshes large and small in a variety of situations.
7

Henrichsen, Arne. "3D reconstruction and camera calibration from 2D images." Master's thesis, University of Cape Town, 2000. http://hdl.handle.net/11427/9725.

Full text
Abstract:
Includes bibliographical references.
A 3D reconstruction technique from stereo images is presented that needs minimal intervention from the user. The reconstruction problem consists of three steps, each of which is equivalent to the estimation of a specific geometry group. The first step is the estimation of the epipolar geometry that exists between the stereo image pair, a process involving feature matching in both images. The second step estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. Camera calibration forms part of the third step in obtaining the metric geometry, from which it is possible to obtain a 3D model of the scene. The advantage of this system is that the stereo images do not need to be calibrated in order to obtain a reconstruction. Results for both the camera calibration and reconstruction are presented to verify that it is possible to obtain a 3D model directly from features in the images.
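The first step described above, estimating the epipolar geometry from feature matches, can be sketched with modern OpenCV as follows. The SIFT/RANSAC pipeline and the file names are assumptions for illustration, not the specific tools used in this 2000 thesis.

```python
import cv2
import numpy as np

# Hypothetical stereo pair; any two overlapping views will do.
img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Step 1: detect features in both images and match them.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Step 2: robustly estimate the fundamental matrix encoding the epipolar geometry.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("Fundamental matrix:\n", F)
```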
8

Agerskov, Niels, and Gabriel Carrizo. "Application for Deriving 2D Images from 3D CT Image Data for Research Purposes." Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190881.

Full text
Abstract:
Karolinska University Hospital, Huddinge, Sweden, has long wanted to plan hip prostheses using Computed Tomography (CT) scans instead of plain radiographs, to save time and spare patients discomfort. This has not been possible previously, as their current software is limited to prosthesis planning on traditional 2D X-ray images. The purpose of this project was therefore to create an application (software) that allows medical professionals to derive a 2D image from CT images that can be used for prosthesis planning. To create the application, the NumPy and The Visualization Toolkit (VTK) Python code libraries were used and tied together with the graphical user interface library PyQt4. The application includes a graphical interface and methods for optimizing the images for prosthesis planning. The application was finished and serves its purpose, but the quality of the images needs to be evaluated with a larger sample group.
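A minimal sketch of the underlying idea, deriving a radiograph-like 2D image from CT data by ray summation along one axis, is shown below with NumPy only. The HU-to-attenuation conversion, the synthetic volume and the omitted voxel spacing are simplifying assumptions; the actual application used VTK and a PyQt4 interface.

```python
import numpy as np

def radiograph_from_ct(volume_hu, axis=1, mu_water=0.02):
    """Ray-sum a CT volume (Hounsfield units) along one axis into a radiograph-like image.

    HU values are mapped to linear attenuation coefficients, integrated along the
    ray direction and converted to transmitted intensity (Beer-Lambert). The voxel
    spacing factor is omitted for brevity.
    """
    mu = np.clip(mu_water * (1.0 + volume_hu / 1000.0), 0.0, None)
    return np.exp(-mu.sum(axis=axis))

# Synthetic stand-in volume (z, y, x): air, a soft-tissue block and a "bone" insert.
vol = np.full((120, 256, 256), -1000.0)
vol[:, 80:180, 80:180] = 40.0
vol[:, 120:140, 120:140] = 700.0
projection = radiograph_from_ct(vol, axis=1)
print(projection.shape)   # (120, 256): one 2D image viewed along the chosen axis
```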
9

Srinivasan, Nirmala. "Cross-Correlation Of Biomedical Images Using Two Dimensional Discrete Hermite Functions." University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1341866987.

Full text
10

Bowden, Nathan Charles. "Camera based texture mapping: 3D applications for 2D images." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2407.

Full text
Abstract:
This artist's area of research is the appropriate use of matte paintings within the context of completely computer generated films. The emphasis of research is the adaptation of analog techniques and paradigms into a digital production workspace. The purpose of this artist's research is the development of an original method of parenting perspective projections to three-dimensional (3D) cameras, specifically tailored to result in 3D matte paintings. Research includes the demonstration of techniques combining two-dimensional (2D) paintings, 3D props and sets, as well as camera projections onto primitive geometry to achieve a convincing final composite.
11

Chaudhary, Priyanka. "SPHEROID DETECTION IN 2D IMAGES USING CIRCULAR HOUGH TRANSFORM." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/9.

Full text
Abstract:
The three-dimensional endothelial cell sprouting assay (3D-ECSA) exhibits differentiation of endothelial cells into sprouting structures inside a 3D matrix of collagen I. It is a screening tool to study endothelial cell behavior and to identify angiogenesis inhibitors. The shape and size of an EC spheroid (an aggregation of ~750 cells) are important with respect to its growth performance in the presence of angiogenic stimulators. Apparently, tubules formed on malformed spheroids lack homogeneity in terms of density and length. This requires segregation of well-formed spheroids from malformed ones to obtain better performance metrics. We aim to develop and validate an automated image analysis software tool, as part of a high-content high-throughput screening (HC-HTS) assay platform, to exploit 3D-ECSA as a differential HTS assay. We present a solution using the Circular Hough Transform to detect a nearly perfect spheroid by its circular shape in a 2D image. This successfully enables us to differentiate and separate good spheroids from malformed ones using an automated test bench.
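A minimal sketch of circular-spheroid detection with OpenCV's Circular Hough Transform is given below; the file name and all parameter values (radius range, accumulator thresholds) are placeholder assumptions to be tuned for real 3D-ECSA images.

```python
import cv2
import numpy as np

image = cv2.imread("spheroid_well.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
blurred = cv2.medianBlur(image, 5)              # suppress noise before the transform

circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,
    dp=1,            # accumulator resolution (same as the image)
    minDist=100,     # minimum distance between detected centres
    param1=100,      # upper Canny threshold used internally
    param2=40,       # accumulator threshold: lower values yield more (possibly false) circles
    minRadius=30,    # expected spheroid radius range, in pixels
    maxRadius=120,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(image, (x, y), r, 255, 2)    # draw each detected spheroid outline
    cv2.imwrite("spheroid_detected.png", image)
```

A spheroid whose outline is explained poorly by its best-fitting circle could then be flagged as malformed, which is one simple way to separate the two classes.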
12

Dandu, Sai Venkata Satya Siva Kumar, and Sujit Kadimisetti. "2D SPECTRAL SUBTRACTION FOR NOISE SUPPRESSION IN FINGERPRINT IMAGES." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13848.

Full text
Abstract:
Human fingerprints are rich in details called minutiae, which can be used as identification marks for fingerprint verification. To capture these details, fingerprint capturing techniques need to be improved, since when a fingerprint is captured, noise from the environment is added to it. The goal of this thesis is to remove the noise present in the fingerprint image. To achieve a good-quality fingerprint image, this noise has to be removed or suppressed, and here this is done using an algorithm called 'spectral subtraction', which is based on subtracting an estimated noise spectrum from the noisy signal spectrum. The performance of the algorithm is assessed by comparing the original fingerprint image and the image obtained after spectral subtraction using several parameters such as PSNR and SSIM, for different fingerprints in the database. Finally, performance matching was done using NIST matching software, and the results were presented in the form of Receiver Operating Characteristic (ROC) graphs generated using MATLAB.
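A minimal sketch of 2D magnitude spectral subtraction is shown below. Estimating the noise spectrum from a noise-only patch, the over-subtraction factor and the spectral floor are assumptions for illustration, not necessarily the thesis's exact estimator.

```python
import numpy as np

def spectral_subtraction_2d(noisy, noise_estimate, alpha=1.0, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from a noisy image spectrum.

    noisy          : 2D noisy fingerprint image
    noise_estimate : 2D patch (or image) assumed to contain noise only
    alpha          : over-subtraction factor
    floor          : spectral floor to avoid negative magnitudes
    """
    N = np.fft.fft2(noisy)
    noise_mag = np.abs(np.fft.fft2(noise_estimate, s=noisy.shape))
    mag = np.abs(N) - alpha * noise_mag
    mag = np.maximum(mag, floor * np.abs(N))        # half-wave rectification with a floor
    cleaned = mag * np.exp(1j * np.angle(N))        # keep the original phase
    return np.real(np.fft.ifft2(cleaned))

rng = np.random.default_rng(1)
clean = rng.random((256, 256))                      # stand-in for a fingerprint image
noise = 0.2 * rng.standard_normal((256, 256))
denoised = spectral_subtraction_2d(clean + noise, noise)
```

PSNR or SSIM between the clean and restored images can then quantify the benefit, as done in the thesis.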
13

Le, Van Linh. "Automatic landmarking for 2D biological images : image processing with and without deep learning methods." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0238.

Full text
Abstract:
Landmarks are used in applications of different domains such as biomedicine or biology. They are also one of the data types used in different analyses; for example, they are used not only to measure the form of an object but also to determine the similarity between two objects. In biology, landmarks are used to analyse inter-organism variations; however, providing landmarks is very laborious and most often they are supplied manually. In recent years, several methods have been proposed to automatically predict landmarks, but difficulties remain because these methods focus on specific data. This thesis focuses on the automatic determination of landmarks on biological images, more specifically on two-dimensional images of beetles. In our research, we collaborated with biologists to build a dataset of images of 293 beetles. For each beetle in this dataset, 5 images corresponding to 5 parts were taken into account: head, elytra, pronotum, and left and right mandibles. Along with each image, a set of landmarks was manually provided by the biologists. In a first step, we applied a method originally developed for fly wings to our dataset, with the aim of testing the suitability of image processing techniques for our problem. Secondly, we developed a method consisting of several stages to automatically provide the landmarks on the images. These first two steps were carried out on the mandible images, which are considered well suited to image processing methods. Thirdly, we considered the other, more complex parts of the beetles. Accordingly, we turned to deep learning: we designed a new Convolutional Neural Network model, named EB-Net, to predict the landmarks on the remaining images. In addition, we proposed a new procedure to augment the number of images in our dataset, whose small size was the main limitation to applying deep learning. Finally, to improve the quality of the predicted coordinates, we employed transfer learning, another deep learning technique: EB-Net was first trained on a public facial keypoint dataset and then fine-tuned on the beetle images. The obtained results were discussed with the biologists, who confirmed that the quality of the predicted landmarks is statistically good enough to replace manual landmarks for most of the different morphometric analyses.
14

Grandi, Jerônimo Gustavo. "Multidimensional similarity search for 2D-3D medical data correlation and fusion." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/104133.

Full text
Abstract:
Images of the inner anatomy are essential in clinical practice, and establishing a correlation between them is an important procedure for diagnosis and treatment. In this thesis, we propose an approach to correlate within-modality 2D and 3D data from ordinary acquisition protocols based solely on the pixel/voxel information. The work was divided into two development phases. First, we explored the similarity problem between medical images from the perspective of image quality assessment. This led to the development of a two-step technique that settles the compromise between the processing speed and precision of two known approaches. We evaluated the quality and applicability of the two-step technique and, in the second phase, extended the method to use similarity analysis to find, given an arbitrary slice image (2D), the location of this slice within the volume data (3D). The solution minimizes the virtually infinite number of possible cross-section orientations and uses optimizations to reduce the computational workload and output accurate results. The match is displayed in a volumetric three-dimensional visualization fusing the 3D volume with the 2D image. An experimental analysis demonstrated that, despite the computational complexity of the algorithm, the use of aggressive data sampling, in both the image and the volume, achieves a good compromise between performance and accuracy even when performed with low-gradient-intensity datasets.
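The core similarity search can be sketched as a brute-force scan of axial slices scored against the query image. Normalized cross-correlation is used here as a stand-in measure, and the oblique-orientation search and sampling optimizations of the thesis are omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def find_best_axial_slice(volume, query, step=1):
    """Index and score of the axial slice most similar to the query image."""
    scores = [(z, ncc(volume[z], query)) for z in range(0, volume.shape[0], step)]
    return max(scores, key=lambda s: s[1])

rng = np.random.default_rng(2)
vol = rng.random((80, 128, 128))
query = vol[37] + 0.05 * rng.standard_normal((128, 128))   # noisy copy of slice 37
print(find_best_axial_slice(vol, query))                   # expected: (37, score close to 1)
```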
15

Armande, Nasser. "Caracterisation de reseaux fins dans les images 2d et 3d applications : images satellites et medicales." Paris 11, 1997. http://www.theses.fr/1997PA112094.

Full text
Abstract:
The work carried out in this thesis belongs to the segmentation phase of a computer vision system. Segmentation aims to represent the useful and relevant information in the image in a compact way. To carry out this phase properly, different mathematical tools can be used. Differential geometry, and in particular the differential properties of parameterised surfaces, has been used by many researchers for segmentation and low-level image processing, and constitutes the basic mathematical tool of our research. It provides an efficient and formal way of solving many problems related to the characterisation and extraction of different types of primitives and visual cues in grey-level images. The original work carried out in this thesis is a set of approaches based on the differential properties of parameterised surfaces. These properties are used to characterise and extract thin structures, called thin networks, in the image. Our methodology consists in using the representation of the image as a surface, called the image surface, in order to highlight a number of its differential properties. We show that, among these properties, the principal curvatures and the associated directions of the image surface make it possible to characterise and extract 2D thin networks. A study of the behaviour of these structures in scale space is carried out in order to relate the width of a thin network to the processing scale, which leads to the detection of 2D thin networks of different widths. We also show that a simple extension of the approach to 3D guarantees the extraction of this type of structure (3D thin networks) in 3D images. The search for thin networks is performed on satellite and medical images, in which these structures correspond respectively to road networks and blood vessels.
16

Allouch, Yair. "Multi scale geometric segmentation on 2D and 3D Digital Images /." [Beer Sheva] : Ben Gurion University of the Negev, 2007. http://aranne5.lib.ad.bgu.ac.il/others/AlloucheYair.pdf.

Full text
17

Gadsby, David. "Object recognition for threat detection from 2D X-ray images." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493851.

Full text
Abstract:
This thesis examines methods to identify threat objects inside passengers' hand baggage at airports. The work presents techniques for the enhancement and classification of objects from 2-dimensional X-ray images. It has been conducted with the collaboration of Manchester Aviation Services and uses test images from real X-ray baggage machines. The research attempts to overcome the key problem of object occlusion, which impedes the performance of X-ray baggage operators in identifying threat objects such as guns and knives in X-ray images. Object occlusions can hide key information on the appearance of an object and potentially lead to a threat item entering an aircraft.
18

Dowell, Rachel J. (Rachel Jean). "Registration of 2D ultrasound images in preparation for 3D reconstruction." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10181.

Full text
19

Cheng, Yuan 1971. "3D reconstruction from 2D images and applications to cell cytoskeleton." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88870.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, February 2001.
Includes bibliographical references (leaves 121-129).
Approaches to achieve three dimensional (3D) reconstruction from 2D images can be grouped into two categories: computer-vision-based reconstruction and tomographic reconstruction. By exploring both the differences and connections between these two types of reconstruction, the thesis attempts to develop a new technique that can be applied to 3D reconstruction of biological structures. Specific attention is given to the reconstruction of the cell cytoskeleton from electron microscope images. The thesis is composed of two parts. The first part studies computer-vision-based reconstruction methods that extract 3D information from geometric relationship among images. First, a multiple-feature-based stereo reconstruction algorithm that recovers the 3D structure of an object from two images is presented. A volumetric reconstruction method is then developed by extending the algorithm to multiple images. The method integrates a sequence of 3D reconstruction from different stereo pairs. It achieves a globally optimized reconstruction by evaluating certainty values of each stereo reconstruction. This method is tuned and applied to 3D reconstruction of the cell cytoskeleton. Feasibility, reliability and flexibility of the method are explored.
The second part of the thesis focuses on a special tomographic reconstruction, discrete tomography, where the object to be reconstructed is composed of a discrete set of materials each with uniform values. A Bayesian labeling process is proposed as a framework for discrete tomography. The process uses an expectation-maximization (EM) algorithm with which the reconstruction is obtained efficiently. Results demonstrate that the proposed algorithm achieves high reconstruction quality even with a small number of projections. An interesting relationship between discrete tomography and conventional tomography is also derived, showing that discrete tomography is a more generalized form of tomography and conventional tomography is only a special case of such generalization.
by Yuan Cheng.
Ph.D.
20

Mertzanidou, T. "Automatic correspondence between 2D and 3D images of the breast." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1362435/.

Full text
Abstract:
Radiologists often need to localise corresponding findings in different images of the breast, such as Magnetic Resonance Images and X-ray mammograms. However, this is a difficult task, as one is a volume and the other a projection image. In addition, the appearance of breast tissue structure can vary significantly between them. Some breast regions are often obscured in an X-ray, due to its projective nature and the superimposition of normal glandular tissue. Automatically determining correspondences between the two modalities could assist radiologists in the detection, diagnosis and surgical planning of breast cancer. This thesis addresses the problems associated with the automatic alignment of 3D and 2D breast images and presents a generic framework for registration that uses the structures within the breast for alignment, rather than surrogates based on the breast outline or nipple position. The proposed algorithm can adapt to incorporate different types of transformation models, in order to capture the breast deformation between modalities. The framework was validated on clinical MRI and X-ray mammography cases using both simple geometrical models, such as the affine, and also more complex ones that are based on biomechanical simulations. The results showed that the proposed framework with the affine transformation model can provide clinically useful accuracy (13.1mm when tested on 113 registration tasks). The biomechanical transformation models provided further improvement when applied on a smaller dataset. Our technique was also tested on determining corresponding findings in multiple X-ray images (i.e. temporal or CC to MLO) for a given subject using the 3D information provided by the MRI. Quantitative results showed that this approach outperforms 2D transformation models that are typically used for this task. The results indicate that this pipeline has the potential to provide a clinically useful tool for radiologists.
21

Ngo, Hoai Diem Phuc. "Rigid transformations on 2D digital images : combinatorial and topological analysis." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1091/document.

Full text
Abstract:
In this thesis, we study rigid transformations in the context of computer imagery. In particular, we develop a fully discrete framework for handling such transformations. Rigid transformations, initially defined in the continuous domain, are involved in a wide range of digital image processing applications. In this context, the induced digital rigid transformations present different geometrical and topological properties with respect to their continuous analogues. In order to overcome the issues raised by these differences, we propose to formulate rigid transformations on digital images in a fully discrete framework. In this framework, Euclidean rigid transformations producing the same digital rigid transformation are put in the same equivalence class. Moreover, the relationships between these classes can be modeled as a graph structure. We prove that this graph has a polynomial space complexity with respect to the size of the considered image and presents useful structural properties. In particular, it allows us to incrementally generate all digital rigid transformations without numerical approximation. This structure constitutes a theoretical tool to investigate the relationships between geometry and topology in the context of digital images. It is also interesting from a methodological point of view, as we illustrate by its use for assessing the topological behavior of images under rigid transformations.
22

Phan, Tan Binh. "On the 3D hollow organ cartography using 2D endoscopic images." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0135.

Full text
Abstract:
Structure-from-motion (SfM) algorithms are an efficient means of constructing extended 3D surfaces from images of a scene acquired from different viewpoints. SfM methods simultaneously determine the camera motion and a 3D point cloud lying on the surfaces to be recovered. Classical SfM algorithms use feature point detection and matching methods to track homologous points across the image sequence, each point track corresponding to a 3D point to be reconstructed. SfM algorithms exploit the correspondences between homologous points to recover the 3D scene structure and the successive camera poses in an arbitrary world coordinate system. Different state-of-the-art SfM algorithms can efficiently reconstruct various types of scenes, provided the images include enough texture or structure. However, most existing solutions are inappropriate, or at least not optimal, when the image sequences contain little or no texture. This thesis proposes two dense optical flow (DOF)-based SfM solutions to reconstruct complex scenes from images with few textures acquired under changing illumination conditions. It is notably shown how accurate DOF fields can be used optimally thanks to an image selection strategy that both maximizes the number and size of homologous point sets and minimizes the errors in homologous point localization. The accuracy of the proposed 3D cartography methods is assessed on phantoms with known dimensions. The robustness and interest of the proposed methods are demonstrated on various complex medical scenes using a constant set of algorithm parameters. The proposed solutions reconstructed organs seen in different medical examinations (the epithelial surface of the inner stomach wall, the inner epithelial bladder surface, and the skin surface in dermatology) and various imaging modalities (white light for all examinations, green-blue light in gastroscopy and fluorescence in cystoscopy).
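A sketch of the dense-optical-flow building block is given below: a Farnebäck flow field between two consecutive frames propagates a grid of homologous points. The frame file names and flow parameters are assumptions, and the image selection strategy and SfM solver of the thesis are not shown.

```python
import cv2
import numpy as np

def propagate_points(prev_gray, next_gray, points):
    """Move tracked points from one frame to the next using a dense optical flow field."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=4, winsize=21, iterations=3,
        poly_n=7, poly_sigma=1.5, flags=0)
    xi = np.clip(points[:, 0].astype(int), 0, flow.shape[1] - 1)
    yi = np.clip(points[:, 1].astype(int), 0, flow.shape[0] - 1)
    return points + flow[yi, xi]                    # add (dx, dy) sampled at each point

# Hypothetical pair of consecutive frames from an endoscopic sequence.
f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
h, w = f0.shape
grid = np.stack(np.meshgrid(np.arange(0, w, 16), np.arange(0, h, 16)), -1)
grid = grid.reshape(-1, 2).astype(np.float32)       # regular grid of points to track
tracked = propagate_points(f0, f1, grid)
```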
23

Zhang, Yan. "Feature-based automatic registration of images with 2D and 3D models." Thesis, University of Central Lancashire, 2006. http://clok.uclan.ac.uk/21603/.

Full text
Abstract:
Automatic image registration is the technique of aligning images in different coordinate systems to the same coordinate system, which has found wide industrial application in control automation and quality inspection. Focusing on industrial applications where product models are available and the transformations between models and images are global, this thesis presents research on two registration problems based on different features and different transformation models. The first image registration problem is a 2D/2D one with a 2D similarity transformation, based on geometric primitives selected from models and extracted from images. Feature-based methods using geometric primitives like points, line segments and circles have been widely studied. This thesis proposes a number of novel registration methods based on elliptic features, which include a point matching algorithm based on a local search method for ellipse correspondence search and rough pose estimation, a numerical approach to refine the estimation result by using the non-overlapping area ratio (NAR) of corresponding ellipses, and an elliptic arc matching algorithm based on the integral of squared distances (ISD) between points on corresponding arcs. The major advantage of the ISD is that its optimal solution can be obtained analytically, which makes it applicable to efficient elliptic arc correspondence search. The second image registration problem is a 3D/2D one with an orthographic projection transformation, based on silhouette features. A novel algorithm has been developed and presented in this thesis based on a 3D triangular-mesh model, which can be used to approximate a de facto NURBS model, and on images in which silhouette features can be extracted. The algorithm consists of a rough pose estimation process with shape comparison methods and a pose refinement process with a 3D/2D iterative closest point (ICP) method. The computer simulation results show that the algorithm can perform very effective and efficient 3D/2D registration.
24

Lu, Ping. "Rotation Invariant Registration of 2D Aerial Images Using Local Phase Correlation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-199588.

Full text
Abstract:
Aerial image registration requires a high degree of precision. In order to improve the accuracy of feature-based registration, this project proposes a novel Log-Polar Transform (LPT) based image registration. Instead of using the whole image as in the conventional method, feature points are used in this project, which reduces the computational time. For rotation invariance, it is not important how the image patch is rotated; the key is focusing on the feature points. So circular image patches are used in this project, instead of the square image patches used in previous methods. Existing techniques for registration with the Fast Fourier Transform (FFT) always apply the FFT first and then the Log-Polar Transform (LPT), but this is not suitable for this project; here the LPT is applied first and then the FFT. The proposed process contains four steps. First, feature points are selected in both the reference image and the sensed image with a corner detector (Harris or SIFT). Secondly, image patches are created using the feature point positions as centers. Each point is a center point of the LPT, so circular image patches are cropped choosing a feature point as center; the radius of the circle can be changed. Then the circular patches are transformed to Log-Polar coordinates. Next, the LPT images are compared using phase correlation. Experimental results demonstrate the reliability and rotation invariance of the proposed method.
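A minimal sketch of the patch-based idea, resampling a circular patch around a feature point into log-polar coordinates and recovering the rotation by phase correlation, is given below using OpenCV. The synthetic patch, patch size and radius are illustrative assumptions rather than the project's exact settings.

```python
import cv2
import numpy as np

def log_polar_patch(image, center, radius, size=128):
    """Resample a circular patch around a feature point into log-polar coordinates."""
    return cv2.warpPolar(image.astype(np.float32), (size, size), center, radius,
                         cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)

def rotation_between(patch_ref, patch_sensed, size=128):
    """Estimate the rotation angle between two log-polar patches by phase correlation."""
    (dx, dy), _ = cv2.phaseCorrelate(patch_ref, patch_sensed)
    return dy * 360.0 / size                     # vertical shift corresponds to rotation

# Synthetic demo: a patch and the same patch rotated by 30 degrees about the feature point.
rng = np.random.default_rng(3)
img = cv2.GaussianBlur(rng.random((256, 256)).astype(np.float32), (0, 0), 3)
center = (128.0, 128.0)
rot = cv2.warpAffine(img, cv2.getRotationMatrix2D(center, 30, 1.0), (256, 256))

p_ref = log_polar_patch(img, center, 100)
p_rot = log_polar_patch(rot, center, 100)
print(rotation_between(p_ref, p_rot))            # roughly +/-30, sign depending on convention
```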
25

Mastin, Dana Andrew. "Statistical methods for 2D-3D registration of optical and LIDAR images." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55123.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 121-123).
Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the LIDAR point cloud, which is a camera pose estimation problem. We propose a novel application of mutual information registration which exploits statistical dependencies in urban scenes, using variables such as LIDAR elevation, LIDAR probability of detection (pdet), and optical luminance. We employ the well known downhill simplex optimization to infer camera pose parameters. Utilization of OpenGL and graphics hardware in the optimization process yields registration times on the order of seconds. Using an initial registration comparable to GPS/INS accuracy, we demonstrate the utility of our algorithms with a collection of urban images. Our analysis begins with three basic methods for measuring mutual information. We demonstrate the utility of the mutual information measures with a series of probing experiments and registration tests. We improve the basic algorithms with a novel application of foliage detection, where the use of only non-foliage points improves registration reliability significantly. Finally, we show how the use of an existing registered optical image can be used in conjunction with foliage detection to achieve even more reliable registration.
APA, Harvard, Vancouver, ISO, and other styles
26

Qiu, Xuchong. "2D and 3D Geometric Attributes Estimation in Images via deep learning." Thesis, Marne-la-vallée, ENPC, 2021. http://www.theses.fr/2021ENPC0005.

Full text
Abstract:
The visual perception of 2D and 3D geometric attributes (e.g. translation, rotation, spatial size) is important in robotic applications. It helps a robotic system build knowledge about its surrounding environment and can serve as input for downstream tasks such as motion planning and physical interaction with objects. The main goal of this thesis is to automatically detect the positions and poses of objects of interest for robotic manipulation tasks. In particular, we are interested in the low-level task of estimating occlusion relationships, to better discriminate different objects, and in the higher-level tasks of visual object tracking and object pose estimation. The first focus is tracking the object of interest with correct locations and sizes in a given video. We first study systematically the tracking framework based on discriminative correlation filters (DCF) and propose to leverage semantic information in two tracking stages: the visual feature encoding stage and the target localization stage. Our experiments demonstrate that the use of semantics improves both localization and size estimation in our DCF-based tracking framework. We also analyse failure cases. The second focus is using object shape information to improve object 6D pose estimation and to perform pose refinement. We propose to estimate the 2D projections of object 3D surface points with deep models in order to recover object 6D poses. Our results show that the proposed method benefits from the large number of 3D-to-2D point correspondences and achieves better accuracy. We then study the constraints of existing object pose refinement methods and develop a refinement method for objects in the wild. Our experiments demonstrate that our models, trained on either real data or generated synthetic data, can refine pose estimates for objects in the wild, even though these objects are not seen during training. The third focus is studying geometric occlusion in single images to better discriminate objects in the scene. We first formalize the definition of geometric occlusion and propose a method to automatically generate high-quality occlusion annotations. We then propose a new occlusion relationship formulation (abbnom) and the corresponding inference method. Experiments on occlusion reasoning benchmarks demonstrate the superiority of the proposed formulation and method. To recover accurate depth discontinuities, we also propose a depth map refinement method and a single-stage monocular depth estimation method; guided by the estimated occlusion relationships, both reach state-of-the-art performance. All the methods that we propose leverage the versatility and power of deep learning, which should facilitate their integration into the visual perception module of modern robotic systems. Besides these methodological advances, we also made software (for occlusion and pose estimation) and datasets (high-quality occlusion annotations) publicly available as a contribution to the scientific community.
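The pose-recovery step based on many 3D-to-2D correspondences can be illustrated with a generic RANSAC PnP solver; the camera intrinsics and the synthetic correspondences below are placeholders, and the deep network that would predict the 2D projections is omitted:

```python
# Minimal sketch: recover a 6D pose from dense 3D-to-2D point correspondences.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
obj_pts = np.random.uniform(-0.05, 0.05, (200, 3))            # 3D surface points (m)
rvec_gt = np.array([0.1, -0.2, 0.3])                          # synthetic ground truth
tvec_gt = np.array([0.0, 0.0, 0.6])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    obj_pts.astype(np.float32), img_pts.astype(np.float32), K, None)
print(ok, rvec.ravel(), tvec.ravel())      # should approach the synthetic pose
```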
APA, Harvard, Vancouver, ISO, and other styles
27

Sintorn, Ida-Maria. "Segmentation methods and shape descriptions in digital images : applications in 2D and 3D microscopy /." Uppsala : Centre for Image Analysis, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200520.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Karathanou, Argyro. "Image processing for on-line analysis of electron microscope images : automatic Recognition of Reconstituted Membranes." Phd thesis, Université de Haute Alsace - Mulhouse, 2009. http://tel.archives-ouvertes.fr/tel-00559800.

Full text
Abstract:
The image analysis techniques presented in this thesis were developed as part of a European project dedicated to an automatic membrane-protein crystallization pipeline. A large number of samples is produced and assessed simultaneously by transmission electron microscope (TEM) screening. Automating this fast step requires on-line analysis of the acquired images in order to control the microscope, by selecting the regions to be observed at high magnification and identifying the components needed for specimen characterization. Observation of the sample at medium magnification provides the information essential to characterize the success of 2D crystallization. The resulting objects, and especially the artificial membranes, are identifiable at this scale. The latter present only a few characteristic signatures, appear in an extremely noisy context with gray-level fluctuations, and are practically transparent to electrons, yielding low contrast. This thesis presents an ensemble of image processing techniques for analysing medium-magnification images (5-15 nm/pixel). The original contributions of this work are: i) a statistical evaluation of contours, measuring the correlation between the gray levels of pixels neighbouring the contour and a gradient signal, in order to reduce over-segmentation; ii) the recognition of foreground entities in the image; and iii) an initial study of their classification. This chain has already been tested on-line on a prototype and is currently being evaluated.
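One way to read contribution (i) is sketched below: for a candidate contour between two regions, the gray-level step measured across it is correlated with the gradient signal along it, and weakly supported contours would be merged away. This is an interpretation under assumed inputs, not the thesis implementation:

```python
# Hedged sketch of contour support for over-segmentation reduction.
import numpy as np

def contour_support(image, inside_rc, outside_rc, contour_rc, grad_mag):
    """Pearson correlation between cross-contour gray-level steps and gradient.

    inside_rc / outside_rc: (N, 2) pixel coordinates just inside / outside the
    contour, paired along it; contour_rc: (N, 2) coordinates on the contour;
    grad_mag: gradient-magnitude image of the same shape as `image`.
    """
    step = (image[inside_rc[:, 0], inside_rc[:, 1]].astype(float)
            - image[outside_rc[:, 0], outside_rc[:, 1]].astype(float))
    grad = grad_mag[contour_rc[:, 0], contour_rc[:, 1]].astype(float)
    return float(np.corrcoef(np.abs(step), grad)[0, 1])

# Contours with low support would be discarded and their two regions merged.
```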
APA, Harvard, Vancouver, ISO, and other styles
29

North, Peter R. J. "The reconstruction of visual appearance by combining stereo surfaces." Thesis, University of Sussex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chiu, Bernard. "A new segmentation algorithm for prostate boundary detection in 2D ultrasound images." Thesis, Waterloo, Ont. : University of Waterloo, [Dept. of Electrical and Computer Engineering], 2003. http://etd.uwaterloo.ca/etd/bcychiu2003.pdf.

Full text
Abstract:
Thesis (M.Sc.)--University of Waterloo, 2003.
"A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Master of Applied Science in Electrical and Computer Engineering". Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Randell, Charles James. "3D underwater monocular machine vision from 2D images in an attenuating medium." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ32764.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Law, Kwok-wai Albert, and 羅國偉. "3D reconstruction of coronary artery and brain tumor from 2D medical images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31245572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chau, T. K. W. "An investigation into interpretation of 2D images using a knowledge based controller." Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zöllei, Lilla 1977. "2D-3D rigid-body registration of X-ray fluoroscopy and CT images." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Härd, Victoria. "Automatic Alignment of 2D Cine Morphological Images Using 4D Flow MRI Data." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131470.

Full text
Abstract:
Cardiovascular diseases are among the most common causes of death worldwide. One recently developed flow analysis technique, 4D flow magnetic resonance imaging (MRI), allows early detection of such diseases. Due to the limited resolution and limited contrast between blood pool and myocardium in 4D flow images, cine MR images are often used for cardiac segmentation; the delineated structures are then transferred to the 4D flow images for cardiovascular flow analysis. Cine MR images are, however, acquired over multiple breath-holds, which can be challenging for some patients, especially when a cardiovascular disease is present. Consequently, unexpected breathing motion may lead to misalignments between the acquired cine MR images. The goal of the thesis is to test the feasibility of an automatic image registration method to correct the misalignment caused by respiratory motion in morphological 2D cine MR images, using the 4D flow MR data as the reference image. Since a registration method relies on a set of optimal parameters to provide the desired results, a comprehensive investigation was performed to find such parameters. Different combinations of registration parameter settings were applied to 20 datasets from both healthy volunteers and patients. The best combinations, selected on the basis of normalized cross-correlation, were evaluated against the clinical gold standard using widely used geometric measures of spatial correspondence. The accuracy of the best parameters from the geometric evaluation was finally validated using simulated misalignments. Using a registration consisting of translation only improved the results both for the volunteer and patient datasets and for the simulated misalignment data. For the volunteer and patient datasets, registration improved the Dice index from 0.7074 ± 0.1644 to 0.7551 ± 0.0737 and the point-to-curve error from 1.8818 ± 0.9269 to 1.5953 ± 0.5192; these values are means over all 20 datasets. The geometric evaluation on data from both healthy volunteers and patients shows that the developed correction method is able to improve the alignment of the cine MR images, allowing a reliable segmentation of 4D flow MR images for cardiac flow assessment.
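The evaluation measures quoted above (Dice index and a point-to-curve error) can be written compactly; the boundary extraction and pixel spacing below are simplifying assumptions:

```python
# Minimal sketch of the two geometric measures, assuming binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_point_to_curve(points_rc, mask, spacing=1.0):
    """Mean distance from contour points to the boundary of `mask`
    (in mm if `spacing` is the pixel size in mm)."""
    m = mask.astype(bool)
    boundary = m ^ binary_erosion(m)                    # 1-pixel boundary ring
    dist = distance_transform_edt(~boundary) * spacing  # distance to boundary
    pts = np.round(points_rc).astype(int)
    return float(dist[pts[:, 0], pts[:, 1]].mean())
```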
APA, Harvard, Vancouver, ISO, and other styles
36

Jiang, Qitong. "Euler Characteristic Transform of Shapes in 2D Digital Images as Cubical Sets." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586387046539831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Reddy, Serendra. "Automatic 2D-to-3D conversion of single low depth-of-field images." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.

Full text
Abstract:
This research presents a novel approach to the automatic rendering of 3D stereoscopic disparity image pairs from single 2D low depth-of-field (LDOF) images. Initially a depth map is produced through the assignment of depth to every delineated object and region in the image. Subsequently the left and right disparity images are produced through depth image-based rendering (DIBR). The objects and regions in the image are initially assigned to one of six proposed groups or labels. Labelling is performed in two stages. The first involves the delineation of the dominant object-of-interest (OOI). The second involves the global object and region grouping of the non-OOI regions. The matting of the OOI is also performed in two stages. Initially the in-focus foreground or region-of-interest (ROI) is separated from the out-of-focus background. This is achieved through the correlation of edge, gradient and higher-order statistics (HOS) saliencies. Refinement of the ROI is performed using k-means segmentation and CIEDE2000 colour-difference matching. Subsequently the OOI is extracted from within the ROI through analysis of the dominant gradients and edge saliencies together with k-means segmentation. Depth is assigned to each of the six labels by correlating Gestalt-based principles with vanishing point estimation, gradient plane approximation and depth from defocus (DfD). To minimise some of the dis-occlusions generated by the 3D warping sub-process within DIBR, the depth map is pre-smoothed using an asymmetric bilateral filter. Hole-filling of the remaining dis-occlusions is performed through nearest-neighbour horizontal interpolation, which incorporates depth as well as direction of warp. To minimise the effects of the lateral striations, specific directional Gaussian and circular averaging smoothing is applied independently to each view, with additional average filtering applied to the border transitions. Each stage of the proposed model is benchmarked against data from several significant publications. Novel contributions are made in the sub-speciality fields of ROI estimation, OOI matting, LDOF image classification, Gestalt-based region categorisation, vanishing point detection, relative depth assignment and hole-filling or inpainting. An important contribution is made towards the overall knowledge base of automatic 2D-to-3D conversion techniques, through the collation of existing information, expansion of existing methods and development of newer concepts.
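The DIBR step can be caricatured as a horizontal forward warp driven by the depth map, with the unfilled pixels left as dis-occlusions for the later hole-filling stage; the disparity range, depth convention and toy inputs are arbitrary choices, not values from the thesis:

```python
# Minimal DIBR-style sketch: horizontal pixel shift from a normalised depth map.
import numpy as np

def render_view(image, depth, max_disp=16, sign=+1):
    """Forward-warp `image` horizontally; `depth` in [0, 1], 1 = nearest."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    disp = np.rint(sign * max_disp * depth).astype(int)
    for y in range(h):
        for x in range(w):            # simple forward warp; writes are not depth-ordered
            xs = x + disp[y, x]
            if 0 <= xs < w:
                out[y, xs] = image[y, x]
    return out                        # zero-valued pixels are dis-occlusions to inpaint

# Toy inputs standing in for an LDOF image and its estimated depth map.
img = np.tile(np.linspace(0, 255, 128), (96, 1))
depth = np.tile(np.linspace(1.0, 0.0, 96)[:, None], (1, 128))
left, right = render_view(img, depth, sign=+1), render_view(img, depth, sign=-1)
```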
APA, Harvard, Vancouver, ISO, and other styles
38

ARAÚJO, Caio Fernandes. "Segmentação de imagens 3D utilizando combinação de imagens 2D." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21040.

Full text
Abstract:
CAPES
Automatic image segmentation is still a great challenge. Although a human can usually make this distinction easily and quickly, for a computer the task may not be trivial: several characteristics have to be taken into account, including colour, position, neighbourhood and texture, among others. The challenge increases greatly with medical images such as MRI, since organs have different shapes in different people and there are regions where the intensity variation between neighbouring pixels is quite subtle, which further complicates automatic segmentation. Moreover, this variation prevents the use of a pre-defined shape in many cases, because the anatomical differences between patients, especially those with a pathology, may be too large for a generalisation. Yet precisely because they present this kind of problem, such images are the main focus of the professionals who analyse medical images. This work therefore aims to contribute to improving the segmentation of medical images. It uses the idea of Bagging to generate different 2D images from a single 3D image, and concepts from combination of classifiers to unite them, in order to achieve statistically better results than popular segmentation methods. To verify the effectiveness of the proposed method, segmentation was performed with four different techniques and their results were combined. The chosen techniques are binarisation by the Otsu method, K-means, the SOM neural network and the statistical GMM model. The images used in the experiments were real MRI scans of the brain, and the objective was to segment the grey matter. The images are all 3D, and the segmentations are made on 2D slices of the original image, which first passes through a pre-processing stage where the brain is extracted from the skull. The results show that the proposed method is successful: for all the techniques applied there was an improvement in the segmentation accuracy rate, confirmed by a paired t-test. The work thus shows that applying classifier-combination principles to medical image segmentation can yield better results.
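The combination idea can be sketched for a single 2D slice as below, fusing three unsupervised segmentations by majority vote (the SOM used in the dissertation is omitted because scikit-learn provides no SOM; all parameter choices are illustrative):

```python
# Minimal sketch: segment one slice with Otsu, k-means and a GMM, then vote.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def segment_and_combine(slice_2d):
    x = slice_2d.reshape(-1, 1).astype(float)

    otsu = slice_2d > threshold_otsu(slice_2d)

    km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
    km = (km_labels == km_labels[x.argmax()]).reshape(slice_2d.shape)   # bright cluster

    gm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(x)
    gm = (gm_labels == gm_labels[x.argmax()]).reshape(slice_2d.shape)

    votes = otsu.astype(int) + km.astype(int) + gm.astype(int)
    return votes >= 2                          # majority vote of the three masks
```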
APA, Harvard, Vancouver, ISO, and other styles
39

Boui, Marouane. "Détection et suivi de personnes par vision omnidirectionnelle : approche 2D et 3D." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE009/document.

Full text
Abstract:
In this thesis, we address the problem of 3D people detection and tracking in omnidirectional image sequences, with the goal of building applications for 3D pose estimation. This requires stable and accurate tracking of the person in a real environment. For this study we use a catadioptric camera composed of a spherical mirror and a perspective camera. This type of sensor is commonly used in computer vision and robotics; its main advantage is its wide field of view, which allows it to acquire a 360-degree view of the scene with a single sensor and in a single image. However, this kind of sensor introduces significant distortions in the images, so the methods conventionally used in perspective vision cannot be applied directly. This thesis describes two tracking approaches developed to take these distortions into account; they illustrate the progression of our work from person detection to 3D pose estimation. The first step was to design a person detection algorithm for omnidirectional images. We proposed to extend the conventional approach for human detection in perspective images, based on the Histogram of Oriented Gradients (HOG), to spherical images. Our approach uses Riemannian manifolds to adapt the gradient computation to omnidirectional images, and the spherical gradient for spherical images, in order to build our omnidirectional image descriptor. We then focused on building a 3D person-tracking system with omnidirectional cameras, choosing model-based 3D tracking with a 30-degree-of-freedom body model under the constraint of using a single catadioptric camera.
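A minimal sketch of a spherical gradient of the kind needed for HOG-style descriptors on an equirectangular rendering of the sphere is given below; it only rescales the longitude derivative by 1/sin(theta) and is not the Riemannian formulation of the thesis:

```python
# Hedged sketch: gradient magnitude and orientation on an equirectangular image.
import numpy as np

def spherical_gradient(img_eq):
    """img_eq: equirectangular image, rows spanning colatitude theta in (0, pi)."""
    h, w = img_eq.shape
    theta = np.linspace(0, np.pi, h, endpoint=False) + np.pi / (2 * h)  # avoid poles
    d_theta = np.pi / h
    d_phi = 2 * np.pi / w
    g_theta = np.gradient(img_eq.astype(float), d_theta, axis=0)
    g_phi = np.gradient(img_eq.astype(float), d_phi, axis=1) / np.sin(theta)[:, None]
    magnitude = np.hypot(g_theta, g_phi)
    orientation = np.arctan2(g_phi, g_theta)
    return magnitude, orientation      # inputs to HOG-style orientation binning
```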
APA, Harvard, Vancouver, ISO, and other styles
40

Chu, Jiaqi. "Orbital angular momentum encoding/decoding of 2D images for scalable multiview colour displays." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274903.

Full text
Abstract:
Three-dimensional (3D) displays project 3D images that give 3D perception and mimic real-world objects. Among the rich variety of 3D displays, multiview displays take advantage of light's various degrees of freedom and provide some of the 3D perception by projecting 2D subsamplings of a 3D object. More 2D subsampling is required to project images with smoother parallax and a more realistic sensation. As an additional degree of freedom with a theoretically unlimited state space, orbital angular momentum (OAM) modes may be an alternative to conventional multiview approaches and could potentially project more images. This research involves exploring the possibility of encoding/decoding off-axis points in 2D images with OAM modes, development of the optical system, and the design and development of a multiview colour display architecture. The first part of the research explores encoding/decoding off-axis points with OAM modes. Conventionally, OAM modes are used to encode/decode on-axis information only. Analysis of on-axis OAM beams referenced to off-axis points suggests representing off-axis displacements as a set of expanded OAM components. At the current stage, off-axis points within an effective coding area can be encoded/decoded with chosen OAM modes for multiplexing. Experimentally, a 2D image is encoded/decoded with an OAM mode: when the encoding and decoding OAM modes match, the image is reconstructed; otherwise, a dark region with zero intensity is shown. The dark region indicates the effective coding area for multiplexing. The final part of the research develops a multiview colour display. Based on the understanding of the off-axis representation as a set of different OAM components and experimental tests of the optical system, three 1 mm monochromatic images are encoded, multiplexed and projected. Having studied wavelength effects on OAM coding, the initial architecture is updated to a scalable colour display using four wavelengths.
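The matched/mismatched decoding behaviour described above can be mimicked numerically with a simple spiral-phase model; grid size, charge and the stand-in image are arbitrary choices, and free-space propagation is reduced to a single Fourier transform:

```python
# Toy OAM encode/decode sketch: a matched charge refocuses on axis, a
# mismatched charge leaves a residual vortex (dark on-axis region).
import numpy as np

N, l = 256, 3                                     # grid size, OAM topological charge
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
theta = np.arctan2(y, x)

image = np.exp(-(x**2 + y**2) / 0.1)              # stand-in 2D image (Gaussian spot)
encoded = image * np.exp(1j * l * theta)          # OAM-encoded field

decoded_ok  = encoded * np.exp(-1j * l * theta)        # matching charge
decoded_bad = encoded * np.exp(-1j * (l + 1) * theta)  # mismatched charge

c = N // 2
far_ok  = np.fft.fftshift(np.fft.fft2(decoded_ok))     # crude far-field model
far_bad = np.fft.fftshift(np.fft.fft2(decoded_bad))
print(abs(far_ok[c, c]) > 100 * abs(far_bad[c, c]))    # matched charge focuses on axis
```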
APA, Harvard, Vancouver, ISO, and other styles
41

Baudour, Alexis. "Détection de filaments dans des images 2D et 3D : modélisation, étude mathématique et algorithmes." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00507520.

Full text
Abstract:
This thesis addresses the modelling and detection of filaments in 2D and 3D images. We developed variational methods for four specific applications: road extraction, where we introduce the notion of total curvature in order to preserve regular networks while tolerating discontinuities of direction; the detection and completion of heavily noisy filaments with occlusions, where we use magnetostatics and Ginzburg-Landau theory to represent filaments as the set of singularities of a vector field; the detection of filaments in biological images acquired by confocal microscopy, where filaments are modelled taking the specificities of this modality into account and are obtained by a maximum a posteriori method; and the detection of targets in infrared image sequences, where we search for trajectories that optimise the difference in mean luminance between the trajectory and its neighbourhood, taking the sensors into account. We also prove theoretical results concerning total curvature and the convergence of the Alouges method associated with Ginzburg-Landau systems. This work combines modelling, theoretical results and the design of efficient numerical algorithms capable of handling real applications.
APA, Harvard, Vancouver, ISO, and other styles
42

BEIL, FRANK MICHAEL. "Approche structurelle de l'analyse de la texture dans les images cellulaires 2d et 3d." Paris 7, 1999. http://www.theses.fr/1999PA077019.

Full text
Abstract:
Computer analysis of microscopy images is a powerful method for studying cell structure. Defining a model suited to understanding the information contained in an image is a prerequisite for reliable automated image analysis. The notion of texture is used to describe non-figurative image phenomena. Structural texture models regard texture as sets of basic elements arranged in specific patterns. Starting from this approach, we developed a dual texture model for the analysis of cellular architectures. This model makes it possible to characterise figures corresponding to lines as well as to regions. We developed segmentation algorithms for regions and for line patterns, and a database reflecting the duality of the model allows both types of texture to be analysed with the same algorithms. The methods developed in this work were applied to the analysis of cytoskeleton structure and of nuclear chromatin texture in two-dimensional (2D) images. Our approach proved effective in characterising intermediate filament networks during fetal rat liver development. The appearance of chromatin is determined by the three-dimensional (3D) distribution of DNA, and analysing this process with 2D methods did not seem adequate for classifying cervical lesions in diagnostic pathology. We therefore extended our 2D approach to the analysis of 3D textures acquired by confocal laser scanning microscopy. In a pilot study, 3D texture parameters allowed a very accurate diagnosis of prostate lesions.
APA, Harvard, Vancouver, ISO, and other styles
43

MEZERREG, MOHAMED. "Structures de donnees graphiques : contribution a la conception d'un s.g.b.d. images 2d et 3d." Paris 7, 1990. http://www.theses.fr/1990PA077155.

Full text
Abstract:
This thesis addresses the structuring of data describing 2D and 3D images and their transformation into a relational schema. After an overview of the data-structure problems raised by graphics systems, two image description models are presented: the syntactic model and the database model. To represent 2D images, the quadkey graphic data structure is proposed. Besides considerable savings in memory and computation time, the quadkey has the major advantage of turning image data into relations suited to the relational database model. The quadkey is then extended to the octkey to represent 3D images. Using the octkey, two methods for reconstructing 3D objects from 2D projections are described: the first reconstructs the 3D object by intersecting the three viewing faces x-y, y-z and z-x; the second reconstructs it by fusing serial cross-sections. Finally, an image database management system (DBMS) is presented. Based on the relational model, this image DBMS uses the quadkey and octkey graphic data structures and provides a non-procedural query language and a graphics library.
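The quadkey idea can be sketched as bit interleaving of the cell coordinates into a base-4 key that is easy to store as a relational attribute; the digit convention below is an assumption, and the octkey follows by interleaving three coordinates into base-8 digits:

```python
# Minimal quadkey sketch: encode/decode a quadtree cell address.
def quadkey(x, y, levels):
    """Return the base-4 key string of cell (x, y) after `levels` subdivisions."""
    key = []
    for i in range(levels - 1, -1, -1):
        digit = ((y >> i) & 1) << 1 | ((x >> i) & 1)   # quadrant chosen at this level
        key.append(str(digit))
    return "".join(key)

def quadkey_to_xy(key):
    x = y = 0
    for d in map(int, key):
        x = (x << 1) | (d & 1)
        y = (y << 1) | (d >> 1)
    return x, y

assert quadkey_to_xy(quadkey(5, 9, 4)) == (5, 9)
# An octkey for 3D voxels interleaves the x, y, z bits into one base-8 digit
# per subdivision level, following the same pattern.
```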
APA, Harvard, Vancouver, ISO, and other styles
44

Stebbing, Richard. "Model-based segmentation methods for analysis of 2D and 3D ultrasound images and sequences." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f0e855ca-5ed9-4e40-994c-9b470d5594bf.

Full text
Abstract:
This thesis describes extensions to 2D and 3D model-based segmentation algorithms for the analysis of ultrasound images and sequences. Starting from a common 2D+t "track-to-last" algorithm, it is shown that the typical method of searching for boundary candidates perpendicular to the model contour is unnecessary if, for each boundary candidate, its corresponding position on the model contour is optimised jointly with the model contour geometry. With this observation, two 2D+t segmentation algorithms, which accurately recover boundary displacements and are capable of segmenting arbitrarily long sequences, are formulated and validated. Generalising to 3D, subdivision surfaces are shown to be natural choices for continuous model surfaces, and the algorithms necessary for joint optimisation of the correspondences and model surface geometry are described. Three applications of 3D model-based segmentation for ultrasound image analysis are subsequently presented and assessed: skull segmentation for fetal brain image analysis; face segmentation for shape analysis, and single-frame left ventricle (LV) segmentation from echocardiography images for volume measurement. A framework to perform model-based segmentation of multiple 3D sequences - while jointly optimising an underlying linear basis shape model - is subsequently presented for the challenging application of right ventricle (RV) segmentation from 3D+t echocardiography sequences. Finally, an algorithm to automatically select boundary candidates independent of a model surface estimate is described and presented for the task of LV segmentation. Although motivated by challenges in ultrasound image analysis, the conceptual contributions of this thesis are general and applicable to model-based segmentation problems in many domains. Moreover, the components are modular, enabling straightforward construction of application-specific formulations for new clinical problems as they arise in the future.
APA, Harvard, Vancouver, ISO, and other styles
45

Kang, Xin, and 康欣. "Feature-based 2D-3D registration and 3D reconstruction from a limited number of images via statistical inference for image-guided interventions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48079625.

Full text
Abstract:
Traditional open interventions have been progressively replaced with minimally invasive techniques. Most notably, direct visual feedback is transitioned into indirect, image-based feedback, leading to the wide use of image-guided interventions (IGIs). One essential process of all IGIs is to align some 3D data with 2D images of the patient through a procedure called 3D-2D registration during interventions, to provide better guidance and richer information. When the 3D data is unavailable, a realistic 3D patient-specific model needs to be constructed from a few 2D images. The dominating methods that use only image intensity have a narrow convergence range and are not robust to foreign objects present in the 2D images but absent from the 3D data. Feature-based methods partly address these problems, but most of them rely heavily on a set of "best" paired correspondences and require clean image features. Moreover, the optimization procedures used in both kinds of methods are not efficient. In this dissertation, two topics have been studied and novel algorithms proposed, namely, contour extraction from X-ray images and feature-based rigid/deformable 3D-2D registration. Inspired by biological and neuropsychological characteristics of the primary visual cortex (V1), a contour detector is proposed for simultaneously extracting edges and lines in images. The synergy of V1 neurons is mimicked using phase congruency and tensor voting. Evaluations and comparisons showed that the proposed method outperformed several commonly used methods and that the results are consistent with human perception. Moreover, the cumbersome "fine-tuning" of parameter values is not always necessary in the proposed method. An extensible feature-based 3D-2D registration framework is proposed by rigorously formulating the registration as a probability density estimation problem and solving it via a generalized expectation maximization algorithm. It optimizes the transformation directly and treats correspondences as nuisance parameters. This is significantly different from almost all feature-based methods in the literature, which first single out a set of "best" correspondences and then estimate a transformation associated with it. This property makes the proposed algorithm independent of paired correspondences and thus inherently robust to outliers. The framework can be adapted as a point-based method with the major advantages of 1) independence from paired correspondences, 2) accurate registration using a single image, and 3) robustness to the initialization and a large amount of outliers. Extended to a contour-based method, it differs from other contour-based methods mainly in that 1) it does not rely on correspondences and 2) it incorporates gradient information via a statistical model instead of a weighting function. Turning to model-based deformable registration and surface reconstruction, our method solves the problem using maximum penalized likelihood estimation. Unlike almost all other methods, which handle the registration and deformation separately and optimize them sequentially, our method optimizes them simultaneously. The framework was evaluated in two example clinical applications and a simulation study for point-based registration, contour-based registration and surface reconstruction, respectively. Experiments showed sub-degree and sub-millimeter registration accuracy and superiority to state-of-the-art methods. It is expected that our algorithms, when thoroughly validated, can be used as valuable tools for image-guided interventions.
Doctor of Philosophy (Orthopaedics and Traumatology)
APA, Harvard, Vancouver, ISO, and other styles
46

Lohou, Christophe. "Contribution à l'analyse topologique des images : étude d'algorithmes de squelettisation pour images 2D et 3D selon une approche topologie digitale ou topologie discrète." Marne-la-Vallée, 2001. http://www.theses.fr/2001MARN0120.

Full text
Abstract:
This thesis proposes new thinning (skeletonization) algorithms for 2D and 3D images according to two approaches, based either on digital topology or on discrete topology. In the first part, we recall fundamental notions of digital topology and several well-known thinning algorithms based on the deletion of simple points. We then propose a methodology for producing thinning algorithms based on the parallel deletion of P-simple points. Such algorithms are designed so that they delete at least the points removed by a given existing algorithm. By applying this methodology, we produce two new algorithms. Although the results seem satisfactory, the design and implementation of such algorithms remain difficult. In the second part, we use the mathematical framework of partially ordered sets (posets). More directly than before, we propose a thinning algorithm consisting of the repeated parallel deletion of αn-simple points followed by the parallel deletion of βn-simple points. We also propose original definitions of end points, which allow us to obtain either curve skeletons or surface skeletons. The general thinning scheme is applied to 2D and 3D binary images and to 2D grayscale images. Finally, a study of parallel filtering of skeletons is developed.
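For intuition, a textbook 2D simple-point test (foreground 8-connectivity, background 4-connectivity) is sketched below; it is not the P-simple or αn/βn-simple operators of the thesis, only the classical criterion such thinning schemes build upon: a pixel is simple when both topological numbers equal one.

```python
# Hedged sketch of the classical 2D simple-point characterization.
import numpy as np

# offsets of the 8 neighbours of the centre pixel of a 3x3 patch
OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def _adjacent(i, j, conn):
    (r1, c1), (r2, c2) = OFFS[i], OFFS[j]
    dr, dc = abs(r1 - r2), abs(c1 - c2)
    return max(dr, dc) == 1 if conn == 8 else dr + dc == 1

def _label(flags, conn):
    """Label connected components among the flagged neighbour positions."""
    labels, current = {}, 0
    for seed, f in enumerate(flags):
        if f and seed not in labels:
            current += 1
            stack = [seed]
            while stack:
                i = stack.pop()
                if i in labels or not flags[i]:
                    continue
                labels[i] = current
                stack.extend(j for j in range(8) if j != i and _adjacent(i, j, conn))
    return labels, current

def is_simple(patch):
    """patch: 3x3 binary array with the candidate foreground pixel at the centre."""
    fg = [bool(patch[1 + r, 1 + c]) for r, c in OFFS]
    bg = [not v for v in fg]
    _, t8 = _label(fg, 8)                      # foreground components (8-connectivity)
    bg_lab, _ = _label(bg, 4)                  # background components (4-connectivity)
    t4 = len({bg_lab[i] for i in (1, 3, 5, 7) if i in bg_lab})  # 4-adjacent to centre
    return t8 == 1 and t4 == 1

# An end point of a one-pixel-wide line is simple; an interior line pixel is not.
assert is_simple(np.array([[0, 0, 0], [0, 1, 1], [0, 0, 0]]))
assert not is_simple(np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]))
```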
APA, Harvard, Vancouver, ISO, and other styles
47

Pitocchi, Jonathan. "Quantitative assessment of bone quality after total hip replacement through medical images: 2D and 3D approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13023/.

Full text
Abstract:
Assessing bone quality is an important step in clinical practice, in both pathological and non-pathological conditions. In particular, in patients undergoing total hip replacement, the presence of the prosthesis alters the physiological stress conditions of the femur, triggering a bone adaptation process that can affect implant stability and bone condition. It is therefore important to monitor the quality of the bone surrounding the stem, both in the short and in the long term. Starting from densitometric measurements performed on CT images, this thesis addresses two main topics: 1) improvement of the current protocol for evaluating the variation of bone mineral density (three-dimensionally around the prosthesis) one year after surgery, and its application to a larger dataset (11 patients) to examine inter-subject differences; 2) a feasibility study, on 8 patients, of new 2D approaches based on standard femur sections, with the aim of obtaining a bone quality assessment tool that is reliable for clinicians but also minimally invasive for patients, reducing the radiation dose. The results of the 3D approach suggest that it can be used as a tool for post-operative patient monitoring. The thesis also shows the feasibility of the new 2D approaches, although some limitations must be overcome before they can be used clinically.
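The densitometric step implied above can be illustrated as a linear conversion from CT Hounsfield units to bone mineral density inside a periprosthetic region of interest; the calibration coefficients and the region definition are placeholders, not values from the thesis:

```python
# Hedged sketch: mean BMD in a region of interest from a calibrated CT volume.
import numpy as np

def bmd_in_roi(ct_hu, roi_mask, slope=0.8, intercept=-5.0):
    """Mean BMD (mg/cm^3) in the ROI, assuming BMD = slope * HU + intercept
    from a phantom calibration."""
    hu = ct_hu[roi_mask]
    return float(np.mean(slope * hu + intercept))

# Follow-up change in bone quality could then be expressed as the percentage
# BMD difference between the post-operative and one-year scans, computed per
# periprosthetic region around the stem.
```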
APA, Harvard, Vancouver, ISO, and other styles
48

Cheng, Jie-Zhi, and 鄭介誌. "Cell-Based Image Segmentation for 2D and 2D Series Ultrasound Images." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/74833813995255310003.

Full text
Abstract:
Master's thesis, National Taiwan University, Institute of Biomedical Engineering, 2007.
Boundary information about the object of interest in sonography is the fundamental basis for many clinical studies. It helps to reveal anatomical abnormality by characterizing morphological features and plays an essential role in numerous quantitative ultrasound image analyses. For instance, the evaluation of the functional properties of the heart demands quantification of the deformation of the epi- and endocardial surfaces. To draw convincing conclusions from such quantitative analyses, the boundary information should be reliable and efficiently generated, which means robust image segmentation techniques are necessary. This study addresses the challenging segmentation problem of ultrasound images in two parts, 2D and 2D series, which are handled by the two proposed algorithms, the ACCOMP and C2RC-MAP algorithms, respectively. The unique feature of the proposed algorithms is the cell-based concept. A cell is a catchment basin tessellated by a two-pass watershed transformation and serves as the basic operational unit in both algorithms. Taking the cell tessellation as the basis is beneficial in three main respects. First, compared to searching for solutions directly on pixels, searching on cells is more efficient, because the search space spanned by cells is dramatically smaller than the space of pixels, so redundant computation is saved. Second, concrete region and edge information can be obtained from the cell tessellation, and this information provides valuable clues to assist the segmentation task. Third, since a cell is a group of pixels with homogeneous intensity, it may be statistically more robust to noise, which could improve image processing in ultrasound images. With these three advantages, cell-based image segmentation approaches may be more efficacious and efficient than pixel-based approaches. The ACCOMP algorithm is a two-phase data-driven approach consisting of a partition phase and an edge-grouping phase. The partition phase tessellates the image or ROI into prominent components and is carried out by a cell competition algorithm. The second phase is realized by a cell-based graph-traversing algorithm. By focusing on edge information, the complicated echogenicity problem can be bypassed. The ACCOMP algorithm was validated on 300 breast sonograms, including 165 carcinomas and 135 benign cysts. The results show that more than 70% of the derived boundaries fall within the span of the manual outlines under a 95% confidence interval. The robustness of reproducibility is confirmed by the Friedman test, with a p-value of 0.54. The lesion sizes derived by the ACCOMP algorithm are also highly correlated with those defined by the average manually delineated boundaries. To ensure that the delineated boundaries of a series of 2D images closely follow the visually perceivable edges with high boundary coherence between consecutive slices, the C2RC-MAP algorithm is proposed. It deforms the region boundary in a cell-by-cell fashion through a cell-based two-region competition process. The cell-based deformation is guided by a cell-based MAP framework with a posterior function characterizing the distribution of the cell means in each region, the salience and shape complexity of the region boundary, and the boundary coherence of consecutive slices.
The C2RC-MAP algorithm was validated using 10 series of breast sonograms, including 7 compression series and 3 freehand series. The compression series contain 2 carcinoma and 5 fibroadenoma cases, and the freehand series 2 carcinoma and 1 fibroadenoma cases. The results show that more than 70% of the derived boundaries fall within the span of the manually delineated boundaries. The robustness of the proposed algorithm to the variation of the ROI is confirmed by Friedman tests, with p-values of 0.517 and 0.352 for the compression and freehand series groups, respectively. The Pearson correlations between the lesion sizes derived by the proposed algorithm and those defined by the average manually delineated boundaries are all higher than 0.990. The overlapping and difference ratios between the derived boundaries and the average manually delineated boundaries are mostly higher than 0.90 and lower than 0.13, respectively. For both series groups, all assessments conclude that the boundaries derived by the proposed algorithm are comparable to those delineated manually. Moreover, the proposed algorithm is shown to be superior to the Chan and Vese level-set method based on paired-sample t-tests on the performance indices at the 5% significance level.
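The cell notion can be approximated with an off-the-shelf watershed; the sketch below uses skimage's standard watershed on a gradient image as a stand-in for the two-pass transformation of the thesis, with an arbitrary seed spacing:

```python
# Hedged sketch: catchment-basin "cells" from a gradient-based watershed.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def cell_tessellation(image, seed_spacing=7):
    gradient = sobel(image.astype(float))
    # one marker per local minimum of the gradient -> one future cell each
    minima = peak_local_max(-gradient, min_distance=seed_spacing,
                            exclude_border=False)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    return watershed(gradient, markers)     # label image: one id per cell

# Each labelled region ("cell") would then be the unit on which region
# statistics, cell competition and graph traversal operate.
```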
APA, Harvard, Vancouver, ISO, and other styles
49

Cheng, Jie-Zhi. "Cell-Based Image Segmentation for 2D and 2D Series Ultrasound Images." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1107200714111100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yung-Chih, Hsu, and 徐永智. "Reconstruct 2D Magnetic Resonance Images to 3D Images." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/00306249532420443901.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electrical Engineering, 2001.
Due to recent advances in magnetic resonance (MR) imaging, fetal images can be acquired using fast MR imaging sequences, such as T2-weighted fast spin echo (SE). Preliminary results on direct three-dimensional fetal MR imaging, which facilitates thin-slice acquisition, have been demonstrated with scan times on the order of 29 s. However, the current scan time can be detrimental to image quality in terms of both maternal respiratory motion and fetal motion. Here, we propose to develop a mathematically based method, pseudo-3D imaging, that could potentially be applied to fetal MR imaging. The main idea of pseudo-3D imaging is to use three sets of orthogonal 2D thick-slice images and, by proper mathematical combination, reconstruct a 3D volume with improved resolution. A post-processing step that eliminates the block effect is used to improve the quality of the reconstructed images and make them more readable. The results from both mathematical phantom and experimental studies show that the proposed algorithm is theoretically feasible in the absence of image misregistration. Therefore, in situations where true 3D acquisition is hampered by factors such as scan time, so that multi-slice 2D acquisition is the only possible approach, pseudo-3D reconstruction using three orthogonal 2D slice sets appears to be a good alternative for achieving MR imaging with a pseudo-isotropic spatial resolution.
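A very reduced sketch of the pseudo-3D idea follows: three stacks, each thick-sliced along a different axis but assumed to share the same (z, y, x) orientation and field of view, are interpolated onto a common isotropic grid and fused, here by plain averaging rather than the mathematically derived combination and block-effect correction of the thesis:

```python
# Hedged sketch: fuse three orthogonal thick-slice stacks on an isotropic grid.
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(stack, thick_axis, factor):
    """Linearly interpolate a thick-slice stack to isotropic voxels."""
    factors = [1.0, 1.0, 1.0]
    factors[thick_axis] = factor
    return zoom(stack, factors, order=1)

def pseudo_3d(axial, coronal, sagittal, factor=4):
    # assumes all three stacks are already resampled into the same (z, y, x)
    # orientation, so the interpolated volumes have identical shapes
    va = to_isotropic(axial,    0, factor)   # thick along z
    vc = to_isotropic(coronal,  1, factor)   # thick along y
    vs = to_isotropic(sagittal, 2, factor)   # thick along x
    return (va + vc + vs) / 3.0              # naive fusion on the common grid
```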
APA, Harvard, Vancouver, ISO, and other styles