To view the other types of publications on this topic, follow the link: Medical Images Processing.

Dissertations on the topic "Medical Images Processing"


Consult the top 50 dissertations for your research on the topic "Medical Images Processing".

Next to each work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are present in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Tummala, Sai Virali, and Veerendra Marni. „Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images“. Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

Full text of the source
2

Matalas, Ioannis. „Segmentation techniques suitable for medical images“. Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339149.

Full text of the source
3

Ford, Ralph M. (Ralph Michael) 1965. „Computer-aided analysis of medical infrared images“. Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276986.

Full text of the source
Abstract:
Thermography is a useful tool for analyzing spinal nerve root irritation, but interpretation of digital infrared images is often qualitative and subjective. A new quantitative, computer-aided method for analyzing thermograms, utilizing the human dermatome map, is presented. Image processing and pattern recognition principles needed to accomplish this goal are discussed. Algorithms for segmentation, boundary detection and interpretation of thermograms are presented. An interactive, user-friendly program to perform this analysis has been developed. Due to the relatively large number of images in an exam, speed and simplicity were emphasized in algorithm development. The results obtained correlate well with clinical data and show promise for aiding the diagnosis of spinal nerve root irritation.
4

Young, N. G. „The digital processing of astronomical and medical coded aperture images“. Thesis, University of Southampton, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.482729.

Full text of the source
5

Chabane, Yahia. „Semantic and flexible query processing of medical images using ontologies“. Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22784/document.

Full text of the source
Abstract:
Querying images efficiently using an image retrieval system is a long-standing and challenging research problem. In the medical domain, images are increasingly produced in large quantities due to their growing importance for many medical practices such as diagnosis, report writing and teaching. This thesis proposes a semantic annotation and retrieval system for gastroenterological images, based on a new polyp ontology, that can be used to support physicians in deciding how to deal with a polyp. The proposed solution uses a polyp ontology and rests on an adaptation of standard description logic reasoning to enable semi-automatic query construction and image annotation. A second contribution of this work lies in the proposition of a new approach for computing relaxed answers to ontological queries, based on a notion of edit distance between a given individual and a given query. Such a distance is computed by counting the number of elementary operations that need to be applied to an ABox in order to make a given individual a correct answer to the query. The elementary operations considered are the addition to, or removal from, an ABox of assertions on atomic concepts, negations of atomic concepts, or atomic roles. The thesis proposes several formal semantics for such query relaxation and investigates the underlying decision and optimisation problems.
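To make the edit-distance notion concrete, here is a toy sketch in Python (not the thesis' formal semantics; the ABox, the query atoms and the polyp vocabulary are invented for illustration):

```python
def edit_distance(abox, required, forbidden):
    """Count elementary ABox edits (additions/removals of atomic concept
    or role assertions) needed to make the individual an answer."""
    additions = required - abox    # assertions the query needs but the ABox lacks
    removals = forbidden & abox    # assertions that contradict the query
    return len(additions) + len(removals)

abox = {("Polyp", "p1"), ("hasSize", "p1", "small")}
required = {("Polyp", "p1"), ("Pedunculated", "p1")}   # query atoms about p1
forbidden = {("hasSize", "p1", "small")}               # negated query atoms
print(edit_distance(abox, required, forbidden))        # -> 2
```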
6

Agrafiotis, Dimitris. „Three dimensional coding and visualisation of volumetric medical images“. Thesis, University of Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271864.

Full text of the source
7

Zhao, Guang, and 趙光. „Automatic boundary extraction in medical images based on constrained edge merging“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223904.

Full text of the source
8

Morton, A. S. „A knowledge-based approach to the interpretation of medical ultrasound images“. Thesis, University of Brighton, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.254407.

Full text of the source
9

Cabrera, Gil Blanca. „Deep Learning Based Deformable Image Registration of Pelvic Images“. Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279155.

Full text of the source
Abstract:
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or using optimization-based algorithms, which are not always optimal for registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. First, a neural network is optimized to register CT pelvic image pairs. Later, the model is trained on MR-CT image pairs to register CT images to match their MR counterparts. To address the unavailability of ground truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point of our model, and random deformations were generated to create a ground truth deformation field. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, obtaining a field with deformations of different scales. Several models were trained on images of different resolutions. Their performance was benchmarked against an analytic algorithm used in an actual registration workflow. The CT-CT models were tested using image pairs created by applying synthetic deformation fields. The MR-CT models were tested using two types of test images. The first contained synthetic CT images and MR images deformed by synthetically generated deformation fields. The second test set contained real MR-CT image pairs. Test performance was measured using the Dice coefficient. The CT-CT models obtained Dice scores higher than 0.82 even for the models trained on lower-resolution images. Although all MR-CT models experienced a drop in performance, the biggest decrease came from the analytic method used as a reference, both for synthetic and real test data. This means that the deep learning models outperformed the state-of-the-art analytic benchmark method. Even though the obtained Dice scores would need further improvement to be used in a clinical setting, the results show great potential for using deep learning-based methods for multi- and mono-modal deformable image registration.
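Since the Dice coefficient is the headline metric here, a minimal sketch of how it is computed on binary masks (NumPy; the masks and the 2-pixel shift are made up for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

fixed = np.zeros((64, 64), dtype=bool)
fixed[20:40, 20:40] = True            # ground-truth structure
warped = np.zeros((64, 64), dtype=bool)
warped[22:42, 20:40] = True           # registered structure, shifted by 2 px
print(round(dice(fixed, warped), 3))  # -> 0.9
```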
10

Madaris, Aaron T. „Characterization of Peripheral Lung Lesions by Statistical Image Processing of Endobronchial Ultrasound Images“. Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1485517151147533.

Full text of the source
11

Williams, Glenda Patricia. „Development and clinical application of techniques for the image processing and registration of serially acquired medical images“. Thesis, University of South Wales, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326718.

Full text of the source
12

Zhao, Guang. „Automatic boundary extraction in medical images based on constrained edge merging“. Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22030207.

Full text of the source
13

Björn, Martin. „Laterality Classification of X-Ray Images : Using Deep Learning“. Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178409.

Full text of the source
Abstract:
When radiologists examine X-rays, it is crucial that they are aware of the laterality of the examined body part. The laterality refers to which side of the body that is considered, e.g. Left and Right. The consequences of a mistake based on information regarding the incorrect laterality could be disastrous. This thesis aims to address this problem by providing a deep neural network model that classifies X-rays based on their laterality. X-ray images contain markers that are used to indicate the laterality of the image. In this thesis, both a classification model and a detection model have been trained to detect these markers and to identify the laterality. The models have been trained and evaluated on four body parts: knees, feet, hands and shoulders. The images can be divided into three laterality classes: Bilateral, Left and Right. The model proposed in this thesis is a combination of two classification models: one for distinguishing between Bilateral and Unilateral images, and one for classifying Unilateral images as Left or Right. The latter utilizes the confidence of the predictions to categorize some of them as less accurate (Uncertain), which includes images where the marker is not visible or very hard to identify. The model was able to correctly distinguish Bilateral from Unilateral with an accuracy of 100.0 %. For the Unilateral images, 5.00 % were categorized as Uncertain and for the remaining images, 99.99 % of those were classified correctly as Left or Right.
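The two-stage routing with an Uncertain category described above can be sketched in a few lines; a hedged illustration (the probabilities would come from the two trained CNNs, which are stubbed out here, and the threshold value is invented):

```python
def classify_laterality(p_bilateral, p_left, tau=0.9):
    """Stage 1: Bilateral vs Unilateral. Stage 2: Left vs Right, with
    low-confidence predictions routed to Uncertain."""
    if p_bilateral >= 0.5:
        return "Bilateral"
    confidence = max(p_left, 1.0 - p_left)
    if confidence < tau:
        return "Uncertain"      # e.g. marker not visible or hard to identify
    return "Left" if p_left >= 0.5 else "Right"

print(classify_laterality(0.02, 0.97))  # -> Left
print(classify_laterality(0.01, 0.55))  # -> Uncertain
```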
14

Quartararo, John David. „Semi-Automated Segmentation of 3D Medical Ultrasound Images“. Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/155.

Full text of the source
Abstract:
A level set-based segmentation procedure has been implemented to identify target object boundaries from 3D medical ultrasound images. Several test images (simulated, scanned phantoms, clinical) were subjected to various preprocessing methods and segmented. Two metrics of segmentation accuracy were used to compare the segmentation results to ground truth models and determine which preprocessing methods resulted in the best segmentations. It was found that combining an anisotropic diffusion filter, to reduce speckle-type noise, with a 3D active contour segmentation routine based on the level set method yielded semi-automated segmentations on par with medical doctors' hand-outlines of the same images.
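As a rough illustration of the anisotropic diffusion step mentioned above, a minimal Perona-Malik sketch in NumPy (parameter values are arbitrary; borders wrap via np.roll, which is acceptable for a demonstration but not for production use):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Minimal 2D Perona-Malik diffusion: smooth flat regions while the
    edge-stopping conductance suppresses diffusion across strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # finite differences, 4 directions
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

noisy = np.random.rand(64, 64)   # stand-in for a speckled ultrasound slice
smoothed = perona_malik(noisy)
```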
15

Martínez, Escobar Marisol. „An interactive color pre-processing method to improve tumor segmentation in digital medical images“. [Ames, Iowa : Iowa State University], 2008.

Find the full text of the source
16

Koller, Daniela. „Processing of Optical Coherence Tomography Images : Filtering and Segmentation of Pathological Thyroid Tissue“. Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161988.

Full text of the source
Abstract:
In the human body, the main function of the healthy thyroid gland is the regulation of metabolism and hormone production. The thyroid contains organized, uniformly shaped follicles ranging from 50-500 μm in diameter. Pathologies lead to morphological changes of these follicles, affecting density and size, but can also lead to their absence. In this study optical coherence tomography (OCT) was used to examine pathological thyroid tissue by extracting structural information about the follicles from image segmentation. However, OCT images usually include a high amount of speckle noise, which affects the segmentation outcome. The OCT images therefore need to be improved. The aim of this thesis was to investigate appropriate filtering methods to enhance the images and thus improve the segmentation outcome. Images of pathological thyroid tissues with a size of 0.5-1 cm were scanned by a spectral domain OCT system (Telesto II, Thorlabs GmbH, Germany) using a center wavelength of 1300 nm. The obtained 2D and 3D images were saved as .oct files as well as implemented and visualized in a MATLAB graphical user interface (GUI) for further processing. For image improvement, four filtering enhancement methods were applied to the 2D images: enhanced resolution imaging (ERI), the adaptive Wiener filter, the discrete wavelet transform (DWT) and the multi-frame wavelet transform (WT). The processed images were further converted to grayscale and binary images for intensity-based segmentation. The outputs of all methods were compared and evaluated using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL), edge profiles and the outcome of the segmented images. It was demonstrated that the complex DWT (cDWT) with a higher threshold and the multi-frame WT using the Haar wavelet gave better results than the other filtering methods. The computed SNR could be increased by up to 52% and the ENL value by up to 4802% applying the multi-frame WT, while the CNR could be increased by up to 106% for cDWT. The lowest obtained gradient was equal to an intensity decrease of -61% and -68% for multi-frame WT and cDWT, respectively. The filtering methods could increase the smoothness of the image while edge sharpness was retained. The segmentation could detect both small and large follicles. ERI did not show any improvement in the segmentation but could enhance the structural detail of the image. Larger neighbourhoods of the adaptive Wiener filter produced a highly blurred image and led to merged follicles in the image segmentation. The wavelet filters DWT and multi-frame WT gave the most satisfying results, since high and low frequencies were divided into subbands where individual information on vertical, horizontal and diagonal edges was stored. The applied cDWT had an even higher number of subbands, so that more information on signal and speckle noise could be specified. Due to this, it was possible to achieve a decreased noise level while edge sharpness was maintained. Using a multi-frame image, an increased SNR was obtained, as the intensity information stayed constant over the individual frames while the noise information changed. Wavelet-based filtering showed better results in comparison to the adaptive Wiener filter or ERI in the 2D domain. By applying filtering methods in higher dimensions, such as 3D or even 4D, better results in noise reduction are expected. Improved settings for the individual filtering methods as well as enhancements in segmentation are part of future work.
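For reference, common definitions of the three image-quality metrics used above, as a small NumPy sketch (exact definitions vary slightly between papers; the region samples are synthetic):

```python
import numpy as np

def snr(region):
    """Signal-to-noise ratio of a homogeneous region."""
    return region.mean() / region.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between two regions."""
    return abs(roi.mean() - background.mean()) / np.sqrt(roi.var() + background.var())

def enl(region):
    """Number of looks: larger values indicate smoother speckle."""
    return region.mean() ** 2 / region.var()

follicle = np.random.normal(0.2, 0.05, 1000)   # stand-in pixel samples
stroma = np.random.normal(0.7, 0.05, 1000)
print(snr(stroma), cnr(follicle, stroma), enl(stroma))
```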
17

Gao, Zhiyun. „Novel multi-scale topo-morphologic approaches to pulmonary medical image processing“. Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/805.

Full text of the source
Abstract:
The overall aim of my PhD research work is to design, develop, and evaluate a new practical environment to generate separated representations of arterial and venous trees in non-contrast pulmonary CT imaging of human subjects and to extract quantitative measures at different tree levels. Artery/vein (A/V) separation is of substantial importance, contributing to our understanding of pulmonary structure and function, and immediate clinical applications exist, e.g., for assessment of pulmonary emboli. Separated A/V trees may also significantly boost the performance of airway segmentation methods for higher tree generations. Although non-contrast pulmonary CT imaging successfully captures higher tree generations of vasculature, arteries and veins are indistinguishable by their intensity values, and often there is no trace of intensity variation at locations of fused arteries and veins. Patient-specific structural abnormalities of vascular trees further complicate the task. We developed a novel multi-scale topo-morphologic opening algorithm to separate A/V trees in non-contrast CT images. The algorithm combines the fuzzy distance transform, a morphologic feature, with topologic connectivity and a new morphological reconstruction step to iteratively open multi-scale fusions, starting at large scales and progressing towards smaller scales. The algorithm has been successfully applied to fuzzy vessel segmentation results using interactive seed selection via an efficient graphical user interface developed as a part of my PhD project. Accuracy, reproducibility and efficiency of the system are quantitatively evaluated using computer-generated and physical phantoms along with in vivo animal and human data sets, and the experimental results are quite encouraging. We also developed an arc-skeleton-based volumetric tree generation algorithm to generate multi-level volumetric tree representations of isolated arterial/venous trees and to extract vascular measurements at different tree levels. The method has been applied to several computer-generated phantoms, CT images of a pulmonary vessel cast, and in vivo pulmonary CT images of a pig at different airway pressures. Experimental results have shown that the method is quite accurate and reproducible. Finally, we developed a new pulmonary vessel segmentation algorithm, an anisotropic constrained region growing method that encourages axial region growing while arresting cross-structure leaking. The region growing is locally controlled by tensor scale and structure scale and anisotropy. The method has been successfully applied to several non-contrast pulmonary CT images of human subjects. The accuracy of the new method has been evaluated using manual selection of vascular and non-vascular voxels, and the results are very promising.
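The core idea of a scale-based morphologic opening can be sketched with an ordinary binary distance transform; a toy SciPy example (the thesis uses a fuzzy distance transform and iterates across scales with morphological reconstruction, which this does not attempt):

```python
import numpy as np
from scipy import ndimage

# Two thick "vessels" fused by a thin bridge
mask = np.zeros((40, 40), dtype=bool)
mask[5:35, 10:14] = True
mask[5:35, 26:30] = True
mask[19:21, 14:26] = True

dt = ndimage.distance_transform_edt(mask)  # local scale at each pixel
opened = dt > 1.5                          # suppress small-scale fusion
labels, n = ndimage.label(opened)
print(n)  # -> 2: the fused structures come apart
```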
18

Manousakas, Ioannis. „A comparative study of segmentation algorithms applied to 2- and 3- dimensional medical images“. Thesis, University of Aberdeen, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360340.

Full text of the source
Abstract:
A method that enables discriminating between CSF-grey matter edges and grey-white matter edges separately has been suggested. Edges from this method are clearly more complete than those resolved by the original method and have fewer artifacts. Some edges that were undetected before are now detected, because they are no longer influenced by stronger nearby edges. Texture noise is also suppressed, which allows working at higher spatial scales. These 3D edge detection methods proved to be superior to the equivalent 2D methods because they calculate the gradient more accurately, and the detected edges have better continuity, which is uniformly preserved along all three directions. The split and merge technique was the second method examined. The existing algorithms need data structures whose dimensions are powers of two (quadtrees). Such a 3D method would not be practical for volume analysis because of memory limitations. For example, a 256x256x256 array of bytes is about 17 Mbytes, and since the method requires about 14 bytes per voxel, the memory sizes that computers usually have are exceeded. In order to solve this problem, an algorithm that applies a split and merge technique to non-cubic datasets has been developed. Along the x,y axes the data must have dimensions that are powers of 2, but along the z axis any dimension that meets the current memory limits is possible. The method consists of three main steps: a) splitting of an initial cutset, b) merging, and c) grouping of the resulting nodes. An extra boundary elimination step is necessary to reduce the number of resulting regions. The original method is controlled mainly by a parameter ε that is kept constant during the process. Currently, improvements that could be achieved by introducing a level of optimisation during the grouping step are being examined. Here, the grouping is done in a way that simulates the formation of a crystal during annealing, by a progressive increase (relaxing) of the parameter ε. Such a method has given different results from a method that consists of a split and merge step with ε = ε1 and a grouping step with constant ε = ε2 > ε1. At the moment, it has been difficult to establish quantitative ways of measuring the level of improvement, since there is no objective segmentation to compare with. So far, the method has adequately processed blocks of up to 32 slices of size 256x256 and can produce 3D objects representing regions like the ventricles, the white or the grey matter.
19

Usta, Fatma. „Image Processing Methods for Myocardial Scar Analysis from 3D Late-Gadolinium Enhanced Cardiac Magnetic Resonance Images“. Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37920.

Full text of the source
Abstract:
Myocardial scar, a non-viable tissue which forms in the myocardium due to insufficient blood supply to the heart muscle, is one of the leading causes of life-threatening heart disorders, including arrhythmias. Analysis of myocardial scar is important for predicting the risk of arrhythmia and the locations of re-entrant circuits in patients' hearts. For applications such as computational modeling of cardiac electrophysiology aimed at stratifying patient risk for post-infarction arrhythmias, reconstruction of the intact geometry of the scar is required. Currently, 2D multi-slice late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is widely used to detect and quantify myocardial scar regions of the heart. However, due to the anisotropic spatial dimensions of 2D LGE-MR images, creating scar geometry from these images results in substantial reconstruction errors. For applications requiring the reconstruction of intact scar surfaces, 3D LGE-MR images are better suited, as they are isotropic in voxel dimensions and have a higher resolution. While many techniques have been reported for segmentation of scar using 2D LGE-MR images, the equivalent studies for 3D LGE-MRI are limited. Most of these 2D and 3D techniques are basic intensity-threshold-based methods; however, due to the lack of an optimal threshold (Th) value, they are not robust in dealing with complex scar segmentation problems. In this study, we propose an algorithm for segmentation of myocardial scar from 3D LGE-MR images based on a Markov random field based continuous max-flow (CMF) method. We utilize the segmented myocardium as the region of interest for our algorithm. We evaluated our CMF method for accuracy by comparing its results to manual delineations using 3D LGE-MR images of 34 patients. We also compared the results of the CMF technique to those of the conventional full-width-at-half-maximum (FWHM) and signal-threshold-to-reference-mean (STRM) methods. The CMF method yields a Dice similarity coefficient (DSC) of 71 ± 8.7% and an absolute volume error (|VE|) of 7.56 ± 7 cm³. Overall, the CMF method outperformed the conventional methods for almost all reported metrics in scar segmentation. We also present a comparison study of scar geometries obtained from 2D vs 3D LGE-MRI. As the myocardial scar geometry greatly influences the sensitivity of risk prediction in patients, we compare and analyze the differences in reconstructed scar geometry generated using 2D versus 3D LGE-MR images, besides providing a scar segmentation study. We use a retrospectively acquired dataset of 24 patients with a myocardial scar who underwent both 2D and 3D LGE-MR imaging, with manually segmented scar volumes from both. We then reconstruct the 2D scar segmentation boundaries into 3D surfaces using a LogOdds-based interpolation method. We use numerous metrics to quantify and analyze the scar geometry, including fractal dimensions, the number of connected components, and mean volume difference. The higher 3D fractal dimension results indicate that 3D LGE-MRI produces a more complex surface geometry by better capturing the sparse nature of the scar. Finally, 3D LGE-MRI produces a larger scar surface volume (27.49 ± 20.38 cm³) than 2D-reconstructed LGE-MRI (25.07 ± 16.54 cm³).
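The FWHM baseline mentioned above reduces to a one-line thresholding rule; a hedged NumPy sketch (the LGE volume and myocardium mask here are random stand-ins, and real pipelines add artifact rejection):

```python
import numpy as np

def fwhm_scar(intensity, myocardium):
    """Label as scar every myocardial voxel whose intensity exceeds half
    of the maximum myocardial intensity (full-width-at-half-maximum rule)."""
    threshold = 0.5 * intensity[myocardium].max()
    return myocardium & (intensity >= threshold)

lge = np.random.rand(8, 8, 8)            # stand-in LGE volume
myo = np.zeros_like(lge, dtype=bool)
myo[2:6, 2:6, 2:6] = True                # stand-in myocardium mask
scar = fwhm_scar(lge, myo)
print(scar.sum(), "voxels labelled as scar")
```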
20

Moore, C. J. „Mathematical analysis and picture encoding methods applied to large stores of archived digital images“. Thesis, University of Manchester, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234220.

Full text of the source
21

Gonzalez, Ana Guadalupe Salazar. „Structure analysis and lesion detection from retinal fundus images“. Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6456.

Full text of the source
Abstract:
Ocular pathology is one of the main health problems worldwide. The number of people with retinopathy symptoms has increased considerably in recent years. Early adequate treatment has been demonstrated to be effective in avoiding loss of vision. The analysis of fundus images is a non-intrusive option for periodic retinal screening. Different models designed for the analysis of retinal images are based on supervised methods, which require hand-labelled images and processing time as part of the training stage. On the other hand, most of the methods have been designed on the basis of specific characteristics of the retinal images (e.g. field of view, resolution). This restricts their performance to a reduced group of retinal images with similar features. For these reasons an unsupervised model for the analysis of retinal images is required: a model that can work without human supervision or interaction, and that is able to perform on retinal images with different characteristics. In this research, we have worked on the development of this type of model. The system locates the eye structures (e.g. optic disc and blood vessels) as a first step. Later, these structures are masked out from the retinal image in order to create a clear field to perform the lesion detection. We have selected the Graph Cut technique as a base for designing the retinal structure segmentation methods. This selection allows incorporating prior knowledge to constrain the search for the optimal segmentation. Different link weight assignments were formulated in order to attend to the specific needs of the retinal structures (e.g. shape). This research project has brought together the fields of image processing and ophthalmology to create a novel system that contributes significantly to the state of the art in medical image analysis. This new knowledge provides a new alternative to address the analysis of medical images and opens a new panorama for researchers exploring this research area.
22

Zhao, Hang. „Segmentation and synthesis of pelvic region CT images via neural networks trained on XCAT phantom data“. Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178209.

Full text of the source
Abstract:
Deep learning methods for medical image segmentation are hindered by the lack of training data. This thesis aims to develop a method that overcomes this problem. A basic U-net trained on XCAT phantom data was tested first. The segmentation results were unsatisfactory even when artificial quantum noise was added. As a workaround, CycleGAN was used to add tissue textures to the XCAT phantom images by analyzing patient CT images. The generated images were used to train the network. The textures introduced by CycleGAN improved the segmentation, but some errors remained. The basic U-net was replaced with Attention U-net, which further improved the segmentation. More work is needed to fine-tune and thoroughly evaluate the method. The results obtained so far demonstrate the potential of this method for the segmentation of medical images. The proposed algorithms may be used in iterative image reconstruction algorithms in multi-energy computed tomography.
23

Abutalib, Feras Wasef. „A methodology for applying three dimensional constrained Delaunay tetrahedralization algorithms on MRI medical images /“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112551.

Full text of the source
Abstract:
This thesis addresses the problem of producing three-dimensional constrained Delaunay triangulated meshes from sequential two-dimensional MRI medical image slices. The approach is to generate the volumetric meshes of the scanned organs as the result of several low-level tasks: image segmentation, connected component extraction, isosurfacing, image smoothing, mesh decimation and constrained Delaunay tetrahedralization. The proposed methodology produces a portable application that can be easily adapted and extended by researchers to tackle this problem. The application requires very minimal user intervention and can be used either independently or as a pre-processor to an adaptive mesh refinement system.
Finite element analysis of the MRI medical data depends heavily on the quality of the mesh representation of the scanned organs. This thesis presents experimental test results that illustrate how the different operations done during the process can affect the quality of the final mesh.
24

Kéchichian, Razmig. „Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms“. Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.

Full text of the source
Abstract:
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaption to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
25

Lu, Yi Cheng. „Classifying Liver Fibrosis Stage Using Gadoxetic Acid-Enhanced MR Images“. Thesis, Linköpings universitet, Institutionen för medicin och hälsa, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162989.

Full text of the source
Abstract:
The purpose is to classify the liver fibrosis stage using Gadoxetic Acid-Enhanced MR images. At the very beginning, a method proposed by a Korean group is examined in an attempt to reproduce their result; however, the performance is not as impressive as theirs. Then, some gray-scale image feature extraction methods are used. Last but not least, the most popular method of recent years, the convolutional neural network (CNN), is utilized. Finally, the performance of both approaches is evaluated. The results show that with manual feature extraction, the AdaBoost model works quite well, achieving an AUC of 0.9. Besides, the AUC of the ResNet-18 network, a deep learning architecture, can reach 0.93. Also, all the hyperparameters and training settings used for ResNet-18 transfer very well to ResNet-50, ResNet-101 and InceptionV3. The best model obtained is ResNet-101, which has an AUC of 0.96, higher than all current publications on machine learning methods for staging liver fibrosis.
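AUC (area under the ROC curve) is the figure of merit quoted throughout; a minimal scikit-learn sketch with made-up labels and scores:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # binarised fibrosis stage
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]   # model probabilities
print(roc_auc_score(y_true, y_score))                # -> 1.0 for this toy data
```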
26

D'Souza, Aswin Cletus. „Automated counting of cell bodies using Nissl stained cross-sectional images“. [College Station, Tex.: Texas A&M University], 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2035.

Full text of the source
27

Elbita, Abdulhakim M. „Efficient Processing of Corneal Confocal Microscopy Images. Development of a computer system for the pre-processing, feature extraction, classification, enhancement and registration of a sequence of corneal images“. Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6463.

Full text of the source
Abstract:
Corneal diseases are one of the major causes of visual impairment and blindness worldwide. Used for diagnoses, a laser confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, ophthalmologists can extract clinical information on the state of health of a patient's cornea. However, many factors impede ophthalmologists in forming diagnoses, starting with the large number and variable quality of the individual images (blurring, non-uniform illumination within images, variable illumination between images, and noise); there are also difficulties posed for automatic processing caused by eye movements in both lateral and axial directions during the scanning process. Aiding ophthalmologists working with long sequences of corneal images requires the development of new algorithms which enhance, correctly order and register the corneal images within a sequence. The novel algorithms devised for this purpose and presented in this thesis are divided into four main categories. The first is enhancement, to reduce the problems within individual images. The second is automatic image classification, to identify which part of the cornea each image belongs to when the images may not be in the correct sequence. The third is automatic reordering of the images, to place them in the right sequence. The fourth is automatic registration of the images with each other. A flexible application called CORNEASYS has been developed and implemented using MATLAB and the C language to provide and run all the algorithms and methods presented in this thesis. CORNEASYS offers users a collection of all the approaches and algorithms proposed in this thesis in one platform package. CORNEASYS also provides a facility to help the research team and ophthalmologists, who are in discussions to determine future system requirements which meet clinicians' needs.
The data and image files accompanying this thesis are not available online.
28

Elbita, Abdulhakim Mehemed. „Efficient processing of corneal confocal microscopy images : development of a computer system for the pre-processing, feature extraction, classification, enhancement and registration of a sequence of corneal images“. Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6463.

Full text of the source
Abstract:
Corneal diseases are one of the major causes of visual impairment and blindness worldwide. Used for diagnoses, a laser confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, ophthalmologists can extract clinical information on the state of health of a patient's cornea. However, many factors impede ophthalmologists in forming diagnoses, starting with the large number and variable quality of the individual images (blurring, non-uniform illumination within images, variable illumination between images, and noise); there are also difficulties posed for automatic processing caused by eye movements in both lateral and axial directions during the scanning process. Aiding ophthalmologists working with long sequences of corneal images requires the development of new algorithms which enhance, correctly order and register the corneal images within a sequence. The novel algorithms devised for this purpose and presented in this thesis are divided into four main categories. The first is enhancement, to reduce the problems within individual images. The second is automatic image classification, to identify which part of the cornea each image belongs to when the images may not be in the correct sequence. The third is automatic reordering of the images, to place them in the right sequence. The fourth is automatic registration of the images with each other. A flexible application called CORNEASYS has been developed and implemented using MATLAB and the C language to provide and run all the algorithms and methods presented in this thesis. CORNEASYS offers users a collection of all the approaches and algorithms proposed in this thesis in one platform package. CORNEASYS also provides a facility to help the research team and ophthalmologists, who are in discussions to determine future system requirements which meet clinicians' needs.
29

Oscanoa, Julio, Marcelo Mena, and Guillermo Kemper. „A Detection Method of Ectocervical Cell Nuclei for Pap test Images, Based on Adaptive Thresholds and Local Derivatives“. Science and Engineering Research Support Society, 2015. http://hdl.handle.net/10757/624843.

Full text of the source
Abstract:
Cervical cancer is one of the main causes of death by disease worldwide. In Peru, it holds the first place in frequency and represents 8% of deaths caused by sickness. To detect the disease in the early stages, one of the most used screening tests is the cervix Papanicolaou test. Currently, digital images are increasingly being used to improve Pap test efficiency. This work develops an algorithm based on adaptive thresholds, which will be used in Pap smear assisted quality control software. The first stage of the method is a pre-processing step, in which noise and background removal is done. Next, a block is segmented for each one of the points selected as not background, and a local threshold per block is calculated to search for cell nuclei. If a nucleus is detected, an artifact rejection follows, where only cell nuclei and inflammatory cells are left for the doctors to interpret. The method was validated with a set of 55 images containing 2317 cells. The algorithm successfully recognized 92.3% of the total nuclei in all images collected.
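A per-block adaptive threshold of the kind described can be sketched in a few lines with scikit-image (a generic illustration, not the authors' algorithm; the image, block size, offset and area cutoff are invented):

```python
import numpy as np
from skimage import filters, measure

def detect_nuclei(gray, block_size=51, offset=0.02, min_area=50):
    """Local (per-block) threshold, then connected components; small
    components are rejected as artifacts."""
    local_thr = filters.threshold_local(gray, block_size=block_size, offset=offset)
    binary = gray < local_thr            # nuclei stain darker than background
    labels = measure.label(binary)
    return [r for r in measure.regionprops(labels) if r.area >= min_area]

img = np.random.rand(256, 256)           # stand-in for a Pap smear image
print(len(detect_nuclei(img)), "candidate nuclei")
```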
Peer reviewed
30

Sidiropoulos, Konstantinos. „Pattern recognition systems design on parallel GPU architectures for breast lesions characterisation employing multimodality images“. Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9190.

Full text of the source
Abstract:
The aim of this research was to address the computational complexity in designing multimodality Computer-Aided Diagnosis (CAD) systems for characterising breast lesions, by harnessing the general purpose computational potential of consumer-level Graphics Processing Units (GPUs) through parallel programming methods. The complexity in designing such systems lies in the increased dimensionality of the problem, due to the multiple imaging modalities involved, in the inherent complexity of optimal design methods for securing high precision, and in assessing the performance of the design prior to deployment in a clinical environment, employing unbiased system evaluation methods. For the purposes of this research, a Pattern Recognition (PR) system was designed to provide the highest possible precision by programming in parallel the multiprocessors of NVIDIA's GPU cards, GeForce 8800GT or 580GTX, using the CUDA programming framework and C++. The PR-system was built around the Probabilistic Neural Network classifier and its performance was evaluated by a re-substitution method, for estimating the system's highest accuracy, and by the external cross validation method, for assessing the PR-system's unbiased accuracy on new, "unseen" by the system, data. Data comprised images of patients with histologically verified (benign or malignant) breast lesions, who underwent both ultrasound (US) and digital mammography (DM). Lesions were outlined on the images by an experienced radiologist, and textural features were calculated. Regarding breast lesion classification, the accuracies for discriminating malignant from benign lesions were 85.5% using US features alone, 82.3% employing DM features alone, and 93.5% combining US and DM features. Mean accuracy on new "unseen" data for the combined US and DM features was 81%. Those classification accuracies were about 10% higher than accuracies achieved on a single CPU, using sequential programming methods, and 150-fold faster. In addition, benign lesions were found smoother, more homogeneous, and containing larger structures. Additionally, the PR-system design was adapted for tackling other medical problems, as a proof of its generalisation. These included classification of rare brain tumours (achieving 78.6% for overall accuracy (OA) and 73.8% for estimated generalisation accuracy (GA), and accelerating system design 267 times), discrimination of patients with micro-ischemic and multiple sclerosis lesions (90.2% OA and 80% GA with 32-fold design acceleration), classification of normal and pathological knee cartilages (93.2% OA and 89% GA with 257-fold design acceleration), and separation of low from high grade laryngeal cancer cases (93.2% OA and 89% GA, with 130-fold design acceleration). The proposed PR-system improves breast-lesion discrimination accuracy, it may be redesigned on site when new verified data are incorporated in its depository, and it may serve as a second opinion tool in a clinical environment.
31

Pehrson, Skidén Ottar. „Automatic Exposure Correction And Local Contrast Setting For Diagnostic Viewing of Medical X-ray Images“. Thesis, Linköping University, Department of Biomedical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56630.

Full text of the source
Abstract:
To display digital X-ray images properly for visual diagnosis, a suitable display range needs to be identified. This can be difficult when the image contains collimators or large background areas which can dominate the histograms. Also, when there are both underexposed and overexposed areas in the image, it is difficult to display both properly at the same time. The purpose of this thesis is to find a way to solve these problems. A few different approaches are evaluated to find their strengths and weaknesses. Based on Local Histogram Equalization, a new method is developed that puts various constraints on the mapping. These include alternative ways to perform the histogram calculations and to define the local histograms. The new method also includes collimator detection and background suppression to keep irrelevant parts of the image out of the calculations. Results show that the new method enables proper display of both underexposed and overexposed areas in the image simultaneously while maintaining the natural look of the image. More testing is required to find appropriate parameters for various image types.
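A widely used relative of constrained local histogram equalization is CLAHE, available off the shelf; a brief scikit-image sketch (a random image stands in for a raw X-ray, and the kernel size and clip limit are arbitrary):

```python
import numpy as np
from skimage import exposure

raw = np.random.rand(256, 256) ** 3      # stand-in for a raw X-ray in [0, 1]
# Contrast-limited adaptive histogram equalization: the clip limit plays a
# role similar to the constraints on the local mapping discussed above.
enhanced = exposure.equalize_adapthist(raw, kernel_size=64, clip_limit=0.01)
```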

32

Karlsson, Simon, and Per Welander. „Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images“. Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148475.

Full text of the source
Abstract:
Generative Adversarial Networks (GANs) are a deep learning method developed for synthesizing data. One application they can be used for is image-to-image translation. This could prove valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands training data with high diversity. The scenarios in the medical field are fewer, but the problem there is instead that collecting training data is difficult, time-consuming and expensive. This thesis evaluates different GAN models by comparing synthetic MR images produced by the models against ground truth images. A perceptual study is also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It is also shown that models producing more visually realistic synthetic images do not necessarily have better results in quantitative error measurements when compared to ground truth data. Along with the investigations on medical images, the thesis explores the possibilities of generating synthetic street view images of different resolutions, light and weather conditions. Different GAN models have been compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.
33

Ren, Jing. „From RF signals to B-mode Images Using Deep Learning“. Thesis, KTH, Medicinteknik och hälsosystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235061.

Full text of the source
Abstract:
Ultrasound imaging is a safe and popular imaging technique that relies on received radio frequency (RF) echoes to show the internal organs and tissue. B-mode (brightness mode) is the typical mode of ultrasound image generated from RF signals. In practice, the actual processing algorithms from RF signals to B-mode images in ultrasound machines are kept confidential by the manufacturers. The thesis aims to estimate this process and reproduce the same results as the Ultrasonix One ultrasound machine does, using deep learning. Eleven scalar parameters, including global gain, time-gain compensation (TGC1-8), dynamic range and reject, affect the transformation from RF signals to B-mode images in the machine. A data generation strategy was proposed. Two network architectures, adapted from U-Net and Tiramisu Net, were investigated and compared. Results show that a deep learning network is able to translate RF signals to B-mode images with respect to the controlling parameters. The best performance is achieved by the adapted U-Net, which reduces the per-pixel error to 1.325%. The trained model can be used to generate images for other experiments.
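For orientation, the textbook (non-proprietary) RF-to-B-mode pipeline that such networks learn to approximate is envelope detection, log compression, and dynamic-range windowing. A SciPy sketch; the RF frame is random, and the gain and reject handling are simplifications of what a real scanner does:

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf, dynamic_range_db=60.0, gain_db=0.0):
    """Envelope detection, log compression and dynamic-range windowing;
    returns an image normalised to [0, 1]."""
    envelope = np.abs(hilbert(rf, axis=0))            # per scan line
    envelope /= envelope.max()
    db = 20.0 * np.log10(envelope + 1e-12) + gain_db  # log compression
    db = np.clip(db, -dynamic_range_db, 0.0)          # reject floor / range
    return (db + dynamic_range_db) / dynamic_range_db

rf = np.random.randn(2048, 128)   # stand-in RF frame: samples x scan lines
bmode = rf_to_bmode(rf)
```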
34

Hrabovszki, Dávid. „Classification of brain tumors in weakly annotated histopathology images with deep learning“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177271.

Full text of the source
Abstract:
Brain and nervous system tumors were responsible for around 250,000 deaths in 2020 worldwide. Correctly identifying different tumors is very important, because treatment options largely depend on the diagnosis. This is an expert task, but recently machine learning, and especially deep learning models have shown huge potential in tumor classification problems, and can provide fast and reliable support for pathologists in the decision making process. This thesis investigates classification of two brain tumors, glioblastoma multiforme and lower grade glioma in high-resolution H&E-stained histology images using deep learning. The dataset is publicly available from TCGA, and 220 whole slide images were used in this study. Ground truth labels were only available on whole slide level, but due to their large size, they could not be processed by convolutional neural networks. Therefore, patches were extracted from the whole slide images in two sizes and fed into separate networks for training. Preprocessing steps ensured that irrelevant information about the background was excluded, and that the images were stain normalized. The patch-level predictions were then combined to slide level, and the classification performance was measured on a test set. Experiments were conducted about the usefulness of pre-trained CNN models and data augmentation techniques, and the best method was selected after statistical comparisons. Following the patch-level training, five slide aggregation approaches were studied, and compared to build a whole slide classifier model. Best performance was achieved when using small patches (336 x 336 pixels), pre-trained CNN model without frozen layers, and mirroring data augmentation. The majority voting slide aggregation method resulted in the best whole slide classifier with 91.7% test accuracy and 100% sensitivity. In many comparisons, however, statistical significance could not be shown because of the relatively small size of the test set.
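The majority-voting slide aggregation used above is simple to state in code; a tiny sketch (the class names are placeholders):

```python
from collections import Counter

def slide_label(patch_predictions):
    """Majority vote over patch-level predictions for one whole slide."""
    return Counter(patch_predictions).most_common(1)[0][0]

print(slide_label(["GBM", "LGG", "GBM", "GBM", "LGG"]))  # -> GBM
```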
35

Zaman, Shaikh Faisal. „Automated Liver Segmentation from MR-Images Using Neural Networks“. Thesis, Linköpings universitet, Avdelningen för radiologiska vetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162599.

Full text of the source
Abstract:
Liver segmentation is a cumbersome task when done manually, often consuming the quality time of radiologists. The use of automation in such clinical tasks is fundamental and the subject of much modern research. Various computer-aided methods have been employed for this task, but they have not given optimal results due to various challenges, such as low contrast in the images, abnormalities in the tissues, etc. At present, there has been significant progress in machine learning and artificial intelligence (AI) in the field of medical image processing, though challenges remain, like image sensitivity due to the different scanners used to acquire images and differences in the imaging methods used, to name a few. In the following research, a convolutional neural network (CNN) was employed for this process, specifically a U-net architecture. Predicted masks are generated on the corresponding test data, and the Dice similarity coefficient (DSC) is used as the statistical validation metric for performance evaluation. Three datasets from different scanners (two 1.5 T scanners and one 3.0 T scanner) have been evaluated. The U-net performs well on the three datasets, even though there was limited data for training, reaching a DSC of up to 0.93 for one of the datasets.
36

Marcuzzo, Mônica. „Quantificação de impressões diagnósticas em imagens de cintilografia renal“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/10344.

Full text of the source
Abstract:
Renal scintigraphy is a well-established functional technique for the visual evaluation of the renal cortical mass. It allows the visualization of the radiopharmaceutical tracer distribution, the size, the shape, the symmetry, and the position of the kidneys. However, the visual diagnostic impressions for these images tend to be a subjective process, which causes significant variability in the interpretation of findings. Thus, this work aims at proposing quantitative measures that reflect common diagnostic impressions for those images. These measures can potentially reduce the inter-observer variability. In order to make the extraction of these measures possible, a specific segmentation method is also proposed. The results indicate that our proposed features agree in at least 90% of the cases with the specialists' visual evaluation. These results suggest that the features could be used to reduce the subjectivity in the evaluation of the images, since they provide a quantitative and objective alternative for reporting the diagnostic impressions.
37

Wenan, Chen. „Automated Measurement of Midline Shift in Brain CT Images and its Application in Computer-Aided Medical Decision Making“. VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/121.

Der volle Inhalt der Quelle
Annotation:
The severity of traumatic brain injury (TBI) is known to be characterized by the shift of the midline of the brain, as the ventricular system often changes in size and position depending on the location of the original injury. In this thesis, the focus is on processing CT (Computed Tomography) brain images to automatically calculate the midline shift in pathological cases and use it to predict intracranial pressure (ICP). The midline shift measurement can be divided into three steps. First, the ideal midline of the brain, i.e., the midline before injury, is found via a hierarchical search based on skull symmetry and tissue features. Second, the ventricular system is segmented from the brain CT slices. Third, the actual midline is estimated from the deformed ventricles by a shape matching method. The horizontal shift in the ventricles is then calculated from the ideal and actual midlines in the TBI CT images. The proposed method provides accurate detection of the ideal midline using anatomical features in the skull, accurate segmentation of the ventricles for actual midline estimation using anatomical features together with a spatial template derived from a magnetic resonance imaging (MRI) scan, and an accurate estimation of the actual midline based on the proposed robust multiple-region shape matching algorithm. After the midline shift has been successfully measured, features including the midline shift, texture information from the CT images, and demographic information are used to predict ICP. Machine learning algorithms model the relation between the ICP and the extracted features. By using systematic feature selection and parameter selection for the learning model, promising ICP prediction results are achieved. The prediction results also indicate the reliability of the proposed midline shift estimation.
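As a rough illustration of the first step, the sketch below estimates the ideal midline as the vertical axis of best left-right mirror symmetry and converts the offset to the ventricle-derived actual midline into millimetres. This is an assumption-laden simplification of the thesis's hierarchical skull-symmetry search, not its implementation:

```python
import numpy as np

def ideal_midline_x(slice_img: np.ndarray) -> int:
    """Approximate the ideal (pre-injury) midline as the vertical axis of
    best left-right mirror symmetry (simplified stand-in for the
    hierarchical search described in the thesis)."""
    h, w = slice_img.shape
    best_x, best_score = w // 2, -np.inf
    # Search candidate axes around the image centre.
    for x in range(w // 4, 3 * w // 4):
        half = min(x, w - x)
        left = slice_img[:, x - half:x].astype(float)
        right = np.fliplr(slice_img[:, x:x + half]).astype(float)
        score = -np.mean((left - right) ** 2)  # negative SSD: higher = more symmetric
        if score > best_score:
            best_x, best_score = x, score
    return best_x

def midline_shift_mm(ideal_x: int, actual_x: int, pixel_spacing_mm: float) -> float:
    """Horizontal shift between the ideal and the actual (ventricle-derived) midline."""
    return abs(actual_x - ideal_x) * pixel_spacing_mm
```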
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ognard, Julien. „Place et apport des outils pour l'automatisation du traitement des images médicales en pratique clinique“. Thesis, Brest, 2018. http://www.theses.fr/2018BRES0096.

Der volle Inhalt der Quelle
Annotation:
The application of image processing and its automation in the field of medical imaging shows how trends evolve with the availability of emerging technologies. Medical image processing methods and tools are summarized, and the different ways of working on an image are laid out to explain the expansive research across different domains, while the available applications are discussed. These applications are also illustrated through image processing tools developed for specific needs. Each work is categorized according to paradigms, defined by the level of consideration: global (image formation, enhancement, visualization, analysis, management), within the image (scene, organ, region, texture, pixel), at the tool level (reconstruction, registration, segmentation, mathematical morphology), and in terms of the automation process and its applicability (feasibility, validation, reproducibility, implementation, optimization) in the clinic (prediction, diagnosis, improvement, decision support) or in research (level of evidence). In this way, the role of each example tool in the construction of an automation process is demonstrated; this process is explained and extended from the patient to the report, through the image. Current joint research on image processing and the automation process in medical imaging is debated, and the role of the community of engineers and radiologists in and around this automation process is discussed.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Thakkar, Chintan. „Ventricle slice detection in MRI images using Hough Transform and Object Matching techniques“. [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001815.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Clark, Matthew C. „Knowledge guided processing of magnetic resonance images of the brain [electronic resource] / by Matthew C. Clark“. University of South Florida, 2001. http://purl.fcla.edu/fcla/etd/SFE0000001.

Der volle Inhalt der Quelle
Annotation:
Includes vita.
Title from PDF of title page.
Document formatted into pages; contains 222 pages.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
ABSTRACT: This dissertation presents a knowledge-guided expert system that is capable of applying routines for multispectral analysis, (un)supervised clustering, and basic image processing to automatically detect and segment brain tissue abnormalities, and then label glioblastoma-multiforme brain tumors in magnetic resonance volumes of the human brain. The magnetic resonance images used here consist of three feature images (T1-weighted, proton density, T2-weighted), and the system is designed to be independent of any particular scanning protocol. Separate but contiguous 2D slices in the transaxial plane form a brain volume. This allows complete tumor volumes to be measured, and if repeat scans are taken over time, the system may be used to monitor tumor response to past treatments and aid in the planning of future treatment. Furthermore, once processing begins, the system is completely unsupervised, thus avoiding the problems of human variability found in supervised segmentation efforts. Each slice is initially segmented by an unsupervised fuzzy c-means algorithm. The segmented image, along with its respective cluster centers, is then analyzed by a rule-based expert system which iteratively locates tissues of interest based on the hierarchy of cluster centers in feature space. Model-based recognition techniques analyze tissues of interest by searching for expected characteristics and comparing those found with previously defined qualitative models. Normal/abnormal classification is performed through a default reasoning method: if a significant model deviation is found, the slice is considered abnormal; otherwise, it is considered normal. Tumor segmentation in abnormal slices begins with multispectral histogram analysis and thresholding to separate suspected tumor from the rest of the intra-cranial region. The tumor is then refined with a variant of seed growing, followed by spatial component analysis and a final thresholding step to remove non-tumor pixels. The knowledge used in this system was extracted from general principles of magnetic resonance imaging, the distributions of individual voxels and cluster centers in feature space, and anatomical information. Knowledge is used both for single-slice processing and for information propagation between slices. A standard rule-based expert system shell (CLIPS) was modified to include the multispectral analysis, clustering, and image processing tools. A total of sixty-three volume data sets from eight patients and seventeen volunteers (four with and thirteen without gadolinium enhancement), acquired from a single magnetic resonance imaging system with slightly varying scanning protocols, were available for processing. All volumes were processed for normal/abnormal classification. Tumor segmentation was performed on the abnormal slices, and the results were compared with a radiologist-labeled 'ground truth' tumor volume and with tumor segmentations created by applying supervised k-nearest neighbors, a partially supervised variant of the fuzzy c-means clustering algorithm, and a commercially available seed growing package. The results of the developed automatic system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
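The first stage, unsupervised fuzzy c-means over per-voxel feature vectors (T1, PD, T2), is a standard algorithm; a generic sketch is given below. Parameter names and defaults are illustrative, and the dissertation's CLIPS-based system is of course far richer:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain unsupervised fuzzy c-means.

    X: (n_samples, n_features) feature vectors (e.g. T1, PD, T2 per voxel).
    Returns memberships U of shape (c, n) and cluster centers V of shape (c, d).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # fuzzy-weighted centers
        # Distances from each sample to each center, shape (c, n).
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        d = np.fmax(d, 1e-10)
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, V
```

Each voxel's memberships and the cluster-center hierarchy in feature space are exactly what the rule-based stage described above reasons over.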
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Ahmady, Phoulady Hady. „Adaptive Region-Based Approaches for Cellular Segmentation of Bright-Field Microscopy Images“. Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6794.

Der volle Inhalt der Quelle
Annotation:
Microscopy image processing is an emerging and quickly growing field in medical imaging research. Recent technological advancements, including higher computation power, larger and cheaper storage, and more efficient and faster data acquisition devices such as whole-slide imaging scanners, have contributed to recent advances in microscopy image processing research. Most methods in this area either focus on automatically processing images to make it easier for pathologists to direct their attention to the important regions of an image, or aim to automate the expert's entire workflow, including processing and classifying images or tissues to reach a disease diagnosis. This dissertation consists of four frameworks for processing microscopy images. All of them include segmentation methods, either as the whole framework or as the initial stage before feature extraction and classification. Specifically, the first proposed framework is a general segmentation method that works on histology images from different tissues and segments relatively solid nuclei, while the next three frameworks work on cervical microscopy images, segmenting cervical nuclei/cells. Two of these frameworks focus on cervical tissue segmentation and classification using histology images, and the last is a comprehensive segmentation framework that segments overlapping cervical cells in cervical cytology Pap smear images. One commonality among these frameworks is that they all work at the region level, using different region features to segment regions and later expanding, splitting, or refining the segmented regions to produce the final segmentation output. Moreover, all proposed frameworks run considerably faster than other methods on the same datasets. Finally, providing ground truth for datasets used in the training phase of microscopy image processing algorithms is relatively time-consuming, complicated, and costly. The frameworks were therefore designed to set most (if not all) of their parameters adaptively based on each image being processed: the first three frameworks do not depend on training datasets at all, and the fourth needs only a very small training dataset to learn or set a few parameters.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Hammadi, Shumoos T. H. „Novel medical imaging technologies for processing epithelium and endothelium layers in corneal confocal images. Developing automated segmentation and quantification algorithms for processing sub-basal epithelium nerves and endothelial cells for early diagnosis of diabetic neuropathy in corneal confocal microscope images“. Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/16924.

Der volle Inhalt der Quelle
Annotation:
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. An accurate analysis of the corneal epithelium nerve structures and the corneal endothelial cells can assist the early diagnosis of this disease and of other corneal diseases, which can lead to visual impairment and ultimately to blindness. In this thesis, fully automated segmentation and quantification algorithms for processing and analysing sub-basal epithelium nerves and endothelial cells are proposed for the early diagnosis of diabetic neuropathy in Corneal Confocal Microscopy (CCM) images. Firstly, a fully automatic nerve segmentation system for corneal confocal microscope images is proposed; its performance is evaluated against manually traced images, with a prototype execution time of 13 seconds. Secondly, an automatic corneal nerve registration system is proposed, whose main aim is to produce a new informative corneal image that contains both structural and functional information. Thirdly, an automated real-time system, termed the Corneal Endothelium Analysis System (CEAS), is developed and applied to the segmentation of endothelial cells in images of the human cornea obtained by in vivo CCM; its performance was tested against manually traced images, with an execution time of only 6 seconds per image. Finally, the results obtained from all the proposed approaches have been evaluated and validated by an expert advisory board from two institutes: the Division of Medicine, Weill Cornell Medicine-Qatar, Doha, Qatar, and the Manchester Royal Eye Hospital, Centre for Endocrinology and Diabetes, UK.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Moya, Nikolas 1991. „Interactive segmentation of multiple 3D objects in medical images by optimum graph cuts = Segmentação interativa de múltiplos objetos 3D em imagens médicas por cortes ótimos em grafo“. [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275554.

Der volle Inhalt der Quelle
Annotation:
Advisor: Alexandre Xavier Falcão
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Medical image segmentation is crucial for extracting measures from 3D objects (anatomical body structures) that are useful for the diagnosis and treatment of diseases. In such applications, interactive segmentation is necessary whenever automated methods fail or are not feasible. Graph-cut methods are considered the state of the art in interactive segmentation, but most approaches rely on the min-cut/max-flow algorithm, which is limited to binary segmentation, while multi-object segmentation can considerably save user time and effort. This work revisits the differential image foresting transform (DIFT), a graph-cut approach suitable for multi-object segmentation in linear time, and solves several problems related to it. Indeed, the DIFT algorithm can take time proportional to the number of voxels in the regions modified at each segmentation execution (sublinear time). Such a characteristic is highly desirable in 3D interactive segmentation to respond to the user's actions as close as possible to real time. Segmentation using the DIFT works as follows: the user draws labeled markers (strokes of connected seed voxels) inside each object and the background, while the computer interprets the image as a graph, whose nodes are the voxels and whose arcs are defined by neighboring voxels, and outputs an optimum-path forest (image partition) rooted at the seed nodes of the graph. In the forest, each object is represented by the optimum-path trees rooted at its internal seeds. Such trees are painted with the same color associated with the label of the corresponding marker. By adding/removing markers, the user can correct the segmentation until the forest (its object label map) represents the desired result. For the sake of consistency in segmentation, seed-based methods should always maintain the connectivity between voxels and the seeds that have labeled them. However, this does not hold in some approaches, such as random walkers, or when the segmentation is filtered to smooth object boundaries. That connectivity is also paramount to making corrections without restarting the process at each user intervention. We observed that the DIFT algorithm fails to maintain segmentation consistency in some cases. We have fixed this problem both in the DIFT algorithm and when the obtained object boundaries are smoothed. These results are presented and evaluated on several 3D anatomical body structures from MR and CT images.
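The non-differential core of this idea can be sketched as a Dijkstra-like label propagation from the seeds. The sketch below uses the common f_max path cost on a 2D image and omits the differential updates that give the DIFT its sublinear editing time; it is a simplified illustration, not the authors' code:

```python
import heapq
import numpy as np

def ift_segmentation(img: np.ndarray, seeds: dict) -> np.ndarray:
    """Image foresting transform on a 2D image: each pixel receives the label
    of the seed that reaches it with minimum path cost, where a path's cost
    is the maximum intensity difference along its arcs (f_max).

    seeds: {(row, col): label}. Returns an integer label map.
    """
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lb in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lb
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        cst, r, c = heapq.heappop(heap)
        if cst > cost[r, c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # f_max: maximum arc weight along the path so far.
                new_cost = max(cst, abs(float(img[nr, nc]) - float(img[r, c])))
                if new_cost < cost[nr, nc]:
                    cost[nr, nc] = new_cost
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (new_cost, nr, nc))
    return label
```

Multi-object segmentation falls out naturally: every pixel ends up in the optimum-path tree of whichever labeled seed conquered it.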
Master's degree in Computer Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Dhinagar, Nikhil J. „Non-Invasive Skin Cancer Classification from Surface Scanned Lesion Images“. Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1366384987.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Jönsson, Marthina. „Automated methods in the diagnosing of retinal images“. Thesis, KTH, Systemsäkerhet och organisation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122721.

Der volle Inhalt der Quelle
Annotation:
This report summarizes a variety of articles that have been read and analysed. Each article describes methods that can be used to detect lesions, optic disks, drusen, and exudates in retinal images, i.e., to diagnose conditions such as Diabetic Retinopathy and Age-Related Macular Degeneration. A general approach, on which all methods are more or less based, is presented. Methods to locate the optic disk include PCA, kNN regression, the Hough transform, fuzzy convergence, and the vessel-direction matched filter. The best method based on result, reliability, number of images, and publisher is kNN regression. Its result is remarkably good, which raises some doubt about its reliability; however, the method was published by the IEEE, which lends it credibility. The next best and also very useful method is the vessel-direction matched filter. Methods to detect drusen, i.e., to diagnose Age-Related Macular Degeneration, include the PNN classifier and a histogram approach. The best of these is the PNN classifier, with a sensitivity of 94% and a specificity of 95%; 300 images were used in the experiment, which was published by the IEEE in 2011. Methods to detect exudates, i.e., to diagnose Diabetic Retinopathy, include morphological techniques and a combination of the Luv colour space, the Wiener filter, and the Canny edge detector. The best method here is an experiment called "Feature Extraction", which uses the Luv colour space, the Wiener filter (noise removal), and the Canny edge detector.
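As an illustration of the matched-filter idea mentioned above, the sketch below convolves the image with a zero-mean Gaussian line-profile kernel at several orientations and keeps the maximum response per pixel. This is a generic Chaudhuri-style vessel matched filter with assumed parameters, not the specific method from any of the surveyed articles:

```python
import numpy as np
from scipy.ndimage import rotate, convolve

def vessel_matched_filter(img: np.ndarray, sigma=2.0, length=9, n_angles=12):
    """Matched-filter response for dark, line-like vessels: a negative
    Gaussian cross-profile kernel is rotated over several orientations
    and the maximum response per pixel is kept."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    profile = -np.exp(-x ** 2 / (2 * sigma ** 2))   # vessels are darker than background
    kernel = np.tile(profile, (length, 1))
    kernel -= kernel.mean()                         # zero-mean kernel
    response = np.full(img.shape, -np.inf)
    for ang in np.linspace(0, 180, n_angles, endpoint=False):
        k = rotate(kernel, ang, reshape=True)
        response = np.maximum(response, convolve(img.astype(float), k))
    return response
```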
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Daba, Dieudonne Diba. „Quality Assurance of Intra-oral X-ray Images“. Thesis, Umeå universitet, Radiofysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-171001.

Der volle Inhalt der Quelle
Annotation:
Dental radiography is one of the most frequent types of diagnostic radiological investigation. The equipment and techniques used are constantly evolving; however, dental healthcare has long been neglected by radiation safety legislation and the medical physicist community, and thus the quality assurance (QA) regime needs an update. This project aimed to implement and evaluate objective tests of key image quality parameters for intra-oral (IO) X-ray images. The image quality parameters assessed were sensitivity, noise, uniformity, low-contrast resolution, and spatial resolution. These parameters were evaluated for repeatability at typical tube current, voltage, and exposure time settings by computing the coefficient of variation (CV) of the mean value of each parameter across multiple images. A further aim was to develop a semi-quantitative test for the correct alignment of the position indicating device (PID) with the primary collimator. The overall purpose of this thesis was to look at ways to improve the QA of IO X-ray systems by digitizing and automating part of the process. A single image receptor and a single X-ray tube were used in this study. Incident doses at the receptor were measured using a radiation meter, and the relationship between the incident dose at the receptor and the output signal was used to determine the signal transfer curve of the receptor. The principal sources of noise in the practical exposure range of the system were investigated using a variance-based separation of noise sources. The transfer curve of the receptor was found to be linear, and the noise separation showed that quantum noise was dominant. The repeatability of the assessed image quality parameters was found to be acceptable: the CV for sensitivity was less than 3%, for noise less than 1%, for uniformity measured at the center less than 10% (less than 5% at the edge), and for the spatial resolution parameters less than 5%; the low-contrast resolution varied the most at all exposure settings investigated, with a CV between 6 and 13%. The method described for testing the correct alignment of the PID with the primary collimator was found to be practical and easy to interpret manually. The tests described here were implemented for a specific sensor and X-ray tube combination, but the methods could easily be adapted to different systems by adjusting certain parameters.
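The two core computations, the coefficient of variation of repeated measurements and the linear fit of the signal transfer curve, are simple enough to sketch directly; variable names and the example values below are illustrative assumptions:

```python
import numpy as np

def coefficient_of_variation(values) -> float:
    """CV (%) of repeated measurements of one image-quality parameter."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def transfer_curve(dose, mean_pixel_value):
    """Least-squares linear fit of receptor output vs incident dose,
    matching the linear signal transfer curve reported above."""
    slope, intercept = np.polyfit(dose, mean_pixel_value, 1)
    return slope, intercept

# Example with made-up repeat readings of the sensitivity parameter
print(coefficient_of_variation([101.2, 99.8, 100.5, 100.9]))  # well under 3%
```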
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Rodrigues, Erbe Pandini. „Avaliação de métricas para o corregistro não rígido de imagens médicas“. Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/59/59135/tde-15062010-094159/.

Der volle Inhalt der Quelle
Annotation:
The similarity measure plays a key role in image registration, driving the whole registration process. In this study a comparison was made between different similarity metrics in the context of non-rigid registration of medical images. As cardiac images represent the most challenging situation in medical image registration, cardiac magnetic resonance imaging (MRI) and contrast cardiac ultrasound images were used as test data. Ten different similarity metrics were compared extensively with respect to their performance in the non-rigid registration process: the sum of squared differences (SQD), cross-correlation (CC), normalized cross-correlation (CCN), mutual information (IM), entropy of the difference (ED), variance of the difference (VD), energy (EN), normalized gradient field (CGN), pointwise measure of mutual information (MPIM), and pointwise measure of entropy of the difference (MPED). The metrics based on information entropies, IM and ED, were generalized in terms of the Tsallis entropy and evaluated over its parameter q. The presented results show the effectiveness of the studied metrics for different parameters such as the similarity window size, the similarity search region size, the maximum gray level of the images, and the entropic parameter. These findings can help construct appropriate non-rigid registration settings for complex medical image registration.
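Three of the compared metrics can be sketched in generic form (standard formulations of SQD, normalized cross-correlation, and Shannon mutual information; the Tsallis generalization studied in the thesis is not shown, and the histogram bin count is an assumption):

```python
import numpy as np

def sqd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences: lower means more similar."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation in [-1, 1]."""
    a = a.astype(float).ravel(); b = b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutual_information(a: np.ndarray, b: np.ndarray, bins=32) -> float:
    """Shannon mutual information (nats) from the joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal over a
    py = p.sum(axis=0, keepdims=True)       # marginal over b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

In non-rigid registration these scores are evaluated over local windows while the deformation is optimized, which is exactly where the window-size and search-region parameters mentioned above come in.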
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Cheng, Jian. „Estimation and Processing of Ensemble Average Propagator and Its Features in Diffusion MRI“. Phd thesis, Université Nice Sophia Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00759048.

Der volle Inhalt der Quelle
Annotation:
Diffusion MRI is to date the only technique able to observe the fine structures of white matter in vivo and non-invasively, by modeling the diffusion of water molecules. The Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) are the two probability functions of interest for characterizing water diffusion. The central problem in diffusion MRI is the reconstruction and processing of these functions (EAP and ODF); it is also the starting point for the tractography of white matter fibers. Diffusion Tensor Imaging (DTI) is the most commonly used model and is based on a Gaussian diffusion assumption. A Riemannian framework exists that allows diffusion tensor images to be estimated and processed correctly. However, the Gaussian diffusion assumption is a simplification that cannot describe cases where the underlying microscopic structure is complex, such as fiber crossings. High Angular Resolution Diffusion Imaging (HARDI) is a set of methods that overcome the limits of the tensor model. Most HARDI methods to date, such as Q-Ball Imaging (QBI), rely on restrictive assumptions and only consider acquisitions on a single sphere in Fourier space (single-shell HARDI, sHARDI), i.e., a single b-value. However, with the development of MRI scanners and acquisition techniques, it is becoming easier to acquire data on several concentric spheres. This thesis focuses on estimation and processing methods for multiple-shell data (mHARDI), and more generally on reconstruction methods that are independent of the sampling scheme. It makes several original contributions. First, we develop Spherical Polar Fourier Imaging (SPFI), based on a representation of the signal in a basis of functions with separable radial and angular parts (the SPF basis). We obtain, analytically and via linear transformations, the EAP and its important features: the ODF and scalar indices such as the Generalized Fractional Anisotropy (GFA). Regarding the implementation of SPFI, we present two methods for determining the scale factor, and we take the constraint E(0) = 1 into account in the estimation. Second, we present a new framework for an Analytical Fourier Transform in Spherical Coordinates (AFT-SC), which encompasses both mHARDI and sHARDI methods, allows the relations between these methods to be explored, and leads to new EAP and ODF estimation techniques.
Third, we present important criteria for comparing the different HARDI methods, highlighting their advantages and limitations. Fourth, we propose a new diffeomorphism-invariant Riemannian framework for processing the EAP and the ODF. This framework generalizes the Riemannian approach previously applied to the diffusion tensor. It can be used to estimate a probability density function represented by its square root, called the wave function, in an orthonormal function basis. In this Riemannian framework, the exponential and logarithmic maps, as well as the geodesics, have closed analytical forms. The weighted Riemannian mean and median exist and are unique, and can be computed efficiently by gradient descent. We also develop a log-Euclidean and an affine-Euclidean framework for fast data processing. Fifth, we compare, theoretically and experimentally, Euclidean and Riemannian metrics for tensors, the ODF, and the EAP. Finally, we propose the Geodesic Anisotropy (GA) to measure the anisotropy of the EAP; a Square-Root Parameterized Estimation (SRPE) for estimating nonnegative EAPs and ODFs; and the weighted Riemannian mean and median for ODF- and EAP-based interpolation, smoothing, and atlas construction. We introduce the notion of a reasonable mean value for interpolating probability density functions in general.
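For reference, the quantities involved are tied together by a standard Fourier relation under the narrow-pulse approximation, consistent with the E(0) = 1 constraint used in the estimation; the r²-weighted marginal shown for the ODF is one common definition among several in the literature:

```latex
% EAP as the 3D Fourier transform of the normalized diffusion signal,
% and the (marginal) ODF as a radial integral of the EAP:
P(\mathbf{R}) = \int_{\mathbb{R}^3} E(\mathbf{q})\,
    e^{-2\pi i\,\mathbf{q}\cdot\mathbf{R}}\,\mathrm{d}\mathbf{q},
\qquad E(\mathbf{0}) = 1,
\qquad \mathrm{ODF}(\mathbf{u}) \propto \int_0^{\infty} P(r\mathbf{u})\, r^2\,\mathrm{d}r .
```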
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Ångman, Mikael, und Hampus Viken. „Automatic segmentation of articular cartilage in arthroscopic images using deep neural networks and multifractal analysis“. Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166035.

Der volle Inhalt der Quelle
Annotation:
Osteoarthritis is a widespread problem affecting many patients globally, and its diagnosis is often based on evidence from arthroscopic surgeries. Making a correct diagnosis is hard and takes years of experience and training on thousands of images. Developing an automatic solution to support the diagnosis would therefore be extremely helpful to the medical field. Since machine learning has proven useful and effective at classifying and segmenting medical images, this thesis aimed at solving the problem using machine learning methods. Multifractal analysis has also been used extensively for medical image segmentation. This study proposes two methods of automatic segmentation, one based on deep neural networks and one on multifractal analysis. The work was carried out on real arthroscopic images from surgeries. The MultiResUNet architecture is shown to be well suited for pixel-perfect segmentation, and classification of multifractal features using neural networks is also shown to perform well compared with related studies.
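One common way to extract multifractal features is a box-counting estimate of the generalized dimensions D(q); the sketch below is a generic version of that idea (the thesis's exact multifractal analysis may differ, and all parameter defaults are assumptions):

```python
import numpy as np

def generalized_dimensions(img: np.ndarray, qs=(0, 1, 2), sizes=(2, 4, 8, 16)):
    """Generalized box-counting dimensions D(q) of a nonnegative grayscale
    image treated as a measure (pixel intensities as mass).

    For each box size s, box masses p_i are computed; D(q) is the slope of
    log sum(p_i^q)/(q-1) versus log s (entropy form for q = 1)."""
    img = img.astype(float)
    img /= img.sum()                      # normalize to a probability measure
    out = {}
    for q in qs:
        xs, ys = [], []
        for s in sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            # Sum intensities inside each s-by-s box.
            boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0]
            if q == 1:
                ys.append(np.sum(p * np.log(p)))          # entropy limit for D(1)
            else:
                ys.append(np.log(np.sum(p ** q)) / (q - 1))
            xs.append(np.log(s))
        out[q] = np.polyfit(xs, ys, 1)[0]                 # slope = D(q)
    return out
```

A vector of D(q) values over a range of q (or the full multifractal spectrum) can then serve as the feature input to the neural-network classifier described above.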
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Montagnat, Johan. „Segmentation d'image médicales volumiques à l'aide de maillages déformables contraints“. Phd thesis, École normale supérieure de Cachan - ENS Cachan, 1996. http://tel.archives-ouvertes.fr/tel-00691915.

Der volle Inhalt der Quelle
Annotation:
The segmentation of abdominal organs in volumetric medical images is made difficult by the noise and low contrast of these images. Classical segmentation techniques based on edge extraction or thresholding give insufficient results. In this work, we use deformable models to segment the images. By introducing a model of the target organ into the segmentation process, we benefit from a priori knowledge of the shape to be recovered. We use noisy edge images to deform the model locally. Since the edge data are incomplete, the model must be constrained so that it deforms smoothly. Our simplex meshes feature a shape-memory mechanism that acts as a regularizer on the deformations. We use global transformations to impose additional constraints. A hybrid model provides a compromise between the computational complexity of global transformations and the number of degrees of freedom of the model. We also study the use of a training set to build a more robust model that exploits statistical information about the possible deformations. This statistical information can be used to constrain the deformations further or to parameterize the deformation process more finely.
APA, Harvard, Vancouver, ISO und andere Zitierweisen