Dissertations / Theses on the topic 'Images PET'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Images PET.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Cruz Cavalcanti, Yanna. "Factor analysis of dynamic PET images." Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0078/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. In particular, unmixing, an instance of factor analysis popularized in the hyperspectral imaging literature, is highly relevant for TAC extraction since it directly accounts for both the non-negativity of the data and the sum-to-one constraint on the factor proportions. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom to the factors related to specific binding. To this end, a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, called the β-divergence, that is able to generalize conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
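The β-divergence mentioned above has a compact closed form. The following NumPy sketch (an illustration, not the thesis' code) shows how a single loss interpolates between the Itakura-Saito, Kullback-Leibler and squared-Euclidean costs; inputs are assumed strictly positive.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x | y), summed over the array.

    Special cases: beta=0 -> Itakura-Saito (Gamma noise),
    beta=1 -> Kullback-Leibler (Poisson noise),
    beta=2 -> half the squared Euclidean distance (Gaussian noise).
    Assumes x and y are strictly positive.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if beta == 0:                      # Itakura-Saito
        r = x / y
        return np.sum(r - np.log(r) - 1.0)
    if beta == 1:                      # Kullback-Leibler
        return np.sum(x * np.log(x / y) - x + y)
    # generic case (beta=2 recovers 0.5 * ||x - y||^2)
    return np.sum((x**beta + (beta - 1) * y**beta
                   - beta * x * y**(beta - 1)) / (beta * (beta - 1)))
```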
2

Batty, Stephen. "Content based retrieval of PET neurological images." Thesis, Middlesex University, 2004. http://eprints.mdx.ac.uk/9770/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical image management has posed challenges to many researchers, especially when the images have to be indexed and retrieved using their visual content in a way that is meaningful to clinicians. In this study, an image retrieval system has been developed for 3D brain PET (positron emission tomography) images. It has been found that PET neurological images can be retrieved based upon their diagnostic status using only data pertaining to their content, and predominantly the visual content. During the study PET scans are spatially normalized, using existing techniques, and their visual data is quantified. The mid-sagittal plane of each individual 3D PET scan is found and then utilized in the detection of abnormal asymmetries, such as tumours or physical injuries. All the asymmetries detected are referenced to the Talairach and Tournoux anatomical atlas. The Cartesian coordinates in Talairach space of each detected lesion are employed, along with the associated anatomical structure(s), as the indices within the content based image retrieval system. The anatomical atlas is then also utilized to isolate distinct anatomical areas that are related to a number of neurodegenerative disorders. After segmentation of the anatomical regions of interest, algorithms are applied to characterize the texture of brain intensity using Gabor filters and to elucidate the mean index ratio of activation levels. These measurements are combined to produce a single feature vector that is incorporated into the content based image retrieval system. Experimental results on images with known diagnoses show that physical lesions such as head injuries and tumours can be, to a certain extent, detected correctly. Images with correctly detected and measured lesions are then retrieved from the database of images when a query pertains to the measured locale. Images with neurodegenerative disorder patterns have been indexed and retrieved via texture-based features. Retrieval accuracy is increased, for images from patients diagnosed with dementia, by combining the texture feature and the mean index ratio value.
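As a rough illustration of the texture step described above, a Gabor filter bank can be reduced to a feature vector in a few lines; the frequencies and orientation count below are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(roi, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Mean/std of Gabor magnitude responses over a 2D ROI slice."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(roi, frequency=f, theta=theta)
            mag = np.hypot(real, imag)        # response magnitude per pixel
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)                   # one feature vector per ROI
```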
3

Pavarin, Alice. "Comparison of textural features in PET images: a phantom study." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radiomics can be defined as the quantitative analysis of radiological images derived, for example, from MRI and PET scans. The quantitative analysis is carried out by software through the extraction of features. Feature analysis is important because it allows predictions to be made, for example, about a subject's likelihood of responding positively to the planned treatment, and because it can help in the automatic detection of the tumour mass in the image; such analysis also complements the visual study of the image performed by the oncologist or radiologist. Feature values change with several factors, such as the scanner used or the image reconstruction settings. It is therefore important, in every study, to understand which features are the most robust and which are the most variable, because diagnostic conclusions about the object under study will be drawn from their values. This thesis analysed the robustness of 11 features in PET images as the image reconstruction parameters were varied. The images considered come from the IEO in Milan. Nine datasets were analysed, and for each feature the robustness was evaluated with respect to the reconstruction parameters, the ROIs (regions of interest, typically the tumour masses on which feature analysis is focused), the ROI size, and the size of the spheres from which the ROIs were extracted. The robustness of the features was also computed with the default reconstruction method fixed, and the stability of the methods was investigated by evaluating the values of the individual features on 15 ROIs of the same size (5 for each sphere analysed in the phantom). It was observed that some features (such as entropy and GLCM entropy) appear more stable, while others are quite volatile (such as variance). It was also seen that feature variability increases as the size of the considered ROI varies.
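For context, one of the features reported as stable, GLCM entropy, can be computed as in the following sketch; the quantization level and pixel offsets are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(roi, levels=64):
    """GLCM entropy of a 2D ROI, averaged over two offset directions."""
    q = np.floor(roi / roi.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    ent = []
    for a in range(glcm.shape[3]):        # one entropy value per angle
        p = glcm[:, :, 0, a]
        p = p[p > 0]
        ent.append(-np.sum(p * np.log2(p)))
    return float(np.mean(ent))
```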
4

Yu, Chin-Lung. "Methods for automated analysis of small-animal PET images." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1580851181&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rapisarda, Eugenio. "Improvements in quality and quantification of 3D PET images." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/28157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The spatial resolution of positron emission tomography is limited by several physical factors, which can be taken into account by using a global point spread function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting experimentally acquired images of a 22Na cylindrical source reconstructed with an OSEM algorithm. The fitting function took into account the post-filter applied to the images, the actual position of the point source, the source dimensions and the intrinsic discretization along the axial direction due to the finite dimensions of the slices. The proposed measurement method was also validated and demonstrated good accuracy in building the PSF model, justifying its use. The implementation of the PSF consisted of a redefinition of the projector and backprojector of the 3D OSEM algorithm. The continuous model of the PSF was discretized by calculating its integral over each voxel in the image, allowing a better adaptive implementation for each specific reconstruction FOV and pixel size. The explicit expression for the transposed PSF operator was also derived, showing that, in the spatially variant case, it does not coincide with the transpose of the PSF kernel. The PSF was tested on phantom and clinical data: the results showed improved quantitative accuracy, spatial resolution and image quality; furthermore, the combined use of TOF and PSF appeared to let the two take advantage of each other, leading to the best results. Unfortunately, a common effect of iterative reconstruction techniques is the increase of noise as iterations proceed, due to the ill-posed nature of the reconstruction problem. This conflicts with the requirements of a PSF-aware algorithm, since its speed of convergence is lower than that of non-PSF algorithms and, therefore, more iterations are required to reach sufficient convergence. Another important effect observed in PSF-based reconstructions is the overshoot in regions with sharp intensity transitions. In this thesis this effect was demonstrated to be strongly related to the implementation of the spatial resolution recovery and, even in the presence of a perfectly matched kernel, to be unavoidable unless an impractical number of iterations is used. Regularization techniques have been demonstrated to be useful for keeping noise under control during the reconstruction and for improving the benefits of the PSF information by increasing the number of iterations used. In particular, in this thesis a Bayesian variational regularization strategy was tested and employed. Two good candidates for use in PET practice are the Huber (or Gauss-Total Variation) and the generalized p-Gaussian priors. In this thesis a modification of the p-Gaussian prior was proposed to maintain the smoothing effect for low gradients (i.e. in background regions) and to reduce the loss of spatial resolution, while retaining "natural" transitions and appearance in the image. The considered priors depend on some regularization parameters. In this thesis a figure of merit, taking into account both the qualitative and the quantitative content, was proposed to evaluate the global "detectability" of a lesion.
The validation of this detectability index showed a very good correlation with the human response and thus justified its use to set the regularization parameters. The regularization parameters were then determined by maximizing the detectability index for each prior. This optimization was performed for a sphere of 10 mm diameter and 10 OSEM iterations. The validation of the proposed modifications was quantitative on data acquired with a NEMA IEC Body Phantom and qualitative on data from two oncological patients, and consisted of a comparison between the standard reconstruction algorithms, the proposed algorithm, and the results obtained with the p-Gaussian prior and with Gauss-Total Variation. This comparison showed an effective control of noise (with a natural appearance of the image) by the proposed prior, together with good preservation of spatial resolution, contrast and definition of the activity distribution. Moreover, the proposed prior was also shown to keep the edge artefact under control, drastically reducing the overshoots originating at large transitions in the image. Positive results were also obtained when the regularization strategy was used in conjunction with the TOF information, suggesting a possible future employment in the PET reconstruction framework.
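As a much-simplified sketch of PSF modelling inside an iterative reconstruction, the following MLEM loop blurs with a stationary, symmetric Gaussian kernel in image space; the thesis' kernel is spatially variant and asymmetric, in which case (as noted above) the transposed PSF operator no longer coincides with the kernel itself. The `project`/`backproject` operators are assumed to be supplied by the user.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem_psf(sino, project, backproject, n_iter=10, psf_sigma=1.0):
    """Minimal MLEM with image-space PSF modelling (illustrative only)."""
    sens = backproject(np.ones_like(sino))           # sensitivity image
    img = np.ones_like(sens)
    for _ in range(n_iter):
        blurred = gaussian_filter(img, psf_sigma)    # PSF inside the projector
        ratio = sino / np.maximum(project(blurred), 1e-9)
        # for a symmetric stationary kernel the transposed PSF operator is
        # the kernel itself (not true in the spatially variant case)
        img *= gaussian_filter(backproject(ratio), psf_sigma) \
               / np.maximum(sens, 1e-9)
    return img
```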
6

Jonsson, Sofia. "Evaluation of Methods for Obtaining an Image Derived Input Function from Dynamic PET-images." Thesis, Umeå universitet, Institutionen för fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dynamic PET is a technique to follow the uptake kinetics of radioactively labelled molecules in the human body. The kinetic behaviour may be analysed to acquire parameters, such as perfusion of blood to tissue, given knowledge of the blood activity time curve (also called the input function). This is usually measured by continuous sampling, letting the blood flow through a detector, but this is both burdensome and not without risk to the patient. An alternative is to determine the input function from the PET images themselves, giving an image-derived input function (IDIF). In this master's thesis, analytical models for determining the IDIF from small blood vessels were evaluated on both experimentally sampled phantom data and data from actual patients. A phantom, constructed from ten plastic tubes of different dimensions, a plexiglass holder and a plastic box, was built to test and evaluate the different methods. To obtain a correct IDIF one needs to correct for the partial volume effect (PVE), which in small volumes of interest (VOIs) makes the apparent activity lower than in reality. The correction can be done in several ways, but this work focuses on multi-target correction (MTC), which uses two or more VOIs to obtain the true activity value. The method was evaluated using phantom measurements where the activity was known and could be used as a reference. The result of the partial volume correction (PVC) turned out to be highly dependent on accurately knowing the diameter. When the diameter of the VOI matched the diameter of the tube, the error in activity was, on average, less than 6.1% (less than 4.9% for tubes larger than 6 mm in diameter) when evaluating the measured phantom data without added background. Varying backgrounds were then added, creating different contrasts between the tubes and the background. Adding background increases the noise in the image, and the results of the PVCs, when using the most accurate diameter, were less accurate, with a total average activity error of 17.9% (11.1% for diameters larger than 6 mm and 22.4% for diameters smaller than 6 mm). In conclusion, the size of the blood vessel needs to be accurately known for the PVC to give the most accurate result, and using vessels larger than 6 mm is beneficial.
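A minimal single-target simplification of partial volume correction (the thesis uses the multi-target MTC variant) might look as follows; the Gaussian PSF, 2D geometry and the spill-in model are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recovery_coefficient(diameter_mm, fwhm_mm, voxel_mm=0.5):
    """Recovery coefficient of a uniform cylinder cross-section: the mean
    fraction of signal the PSF leaves inside the true boundary."""
    n = int(4 * diameter_mm / voxel_mm)
    y, x = np.mgrid[:n, :n] * voxel_mm
    r = np.hypot(x - x.mean(), y - y.mean())
    disc = (r <= diameter_mm / 2).astype(float)
    sigma = fwhm_mm / (2.355 * voxel_mm)          # FWHM -> sigma, in voxels
    blurred = gaussian_filter(disc, sigma)
    return blurred[disc > 0].mean()

def pvc_tac(measured_tac, background_tac, rc):
    """Invert spill-out/spill-in for a VOI time-activity curve (simplified)."""
    return (np.asarray(measured_tac) - (1 - rc) * np.asarray(background_tac)) / rc
```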
7

Sims, John Andrew. "Directional analysis of cardiac left ventricular motion from PET images." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-05092017-093020/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Quantification of cardiac left ventricular (LV) motion from medical images provides a non-invasive method for diagnosing cardiovascular disease (CVD). The proposed study continues our group's line of research in quantification of LV motion by applying optical flow (OF) techniques to quantify LV motion in gated Rubidium Chloride-82Rb (82Rb) and Fluorodeoxyglucose-18F (FDG) PET image sequences. The following challenges arise from this work: (i) the motion vector field (MVF) should be made as accurate as possible to maximise sensitivity and specificity; (ii) the MVF is large and composed of 3D vectors in 3D space, making visual extraction of information for medical diagnosis by human observers difficult. Approaches to improve the accuracy of motion quantification were developed. While the volume of interest is the region of the MVF corresponding to the LV myocardium, non-zero values of motion exist outside this volume due to artefacts in the motion detection method or from neighbouring structures, such as the right ventricle. Improvements in accuracy can be obtained by segmenting the LV and setting the MVF to zero outside the LV. The LV myocardium was automatically segmented in short-axis slices using the Hough circle transform to provide an initialisation to the distance regularised level set evolution algorithm. Our segmentation method attained a Dice similarity measure of 93.43% when tested over 395 FDG slices, compared with manual segmentation. Strategies for improving OF performance at motion boundaries were investigated using spatially varying averaging filters, applied to synthetic image sequences. Results showed improvements in motion quantification accuracy using these methods. The Kinetic Energy Index (KEf), an indicator of cardiac motility, was used to assess 63 individuals with normal and altered/low cardiac function from an 82Rb PET image database. Sensitivity and specificity tests were performed to evaluate the potential of KEf as a classifier of cardiac function, using LV ejection fraction as the gold standard. A receiver operating characteristic (ROC) curve was constructed, which provided an area under the curve of 0.906. Analysis of LV motion can be simplified by visualisation of directional motion field components, namely radial, rotational (or circumferential) and linear, obtained through automated decomposition. The Discrete Helmholtz Hodge Decomposition (DHHD) was used to generate these components in an automated manner, with a validation performed using synthetic cardiac motion fields from the Extended Cardiac Torso phantom. Finally, the DHHD was applied to OF fields from gated FDG images, allowing an analysis of directional components from an individual with normal cardiac function and a patient with low function and a pacemaker fitted. Motion field quantification from PET images allows the development of new indicators to diagnose CVDs. The ability of these motility indicators depends on the accuracy of the quantification of movement, which in turn can be determined by characteristics of the input images, such as noise. Motion analysis provides a promising and unprecedented approach to the diagnosis of CVDs.
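A kinetic-energy-style motility index of the kind described can be sketched directly from a motion vector field; the exact KEf definition in the thesis may differ from this illustration.

```python
import numpy as np

def kinetic_energy_index(mvf, lv_mask):
    """Mean kinetic-energy-like index over the LV myocardium.

    `mvf` has shape (3, nz, ny, nx) holding the 3D motion vectors; the index
    is half the mean squared vector magnitude inside the mask.
    """
    speed2 = np.sum(np.asarray(mvf) ** 2, axis=0)   # |v|^2 per voxel
    return 0.5 * speed2[np.asarray(lv_mask) > 0].mean()
```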
8

Farinha, Ricardo Jorge Pires Correia. "Segmentation of striatal brain structures from high resolution pet images." Master's thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/2036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical Engineering and Computers.
We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (the mid-sagittal plane) and, finally, extracting the right and left striata from both hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the "voxel affinity matrix") and the graph clustering. The voxel affinity matrix was built using a set of image features that accurately inform the clustering method about the relationships between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods: a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size. On the other hand, the weighted kernel k-means iteratively classifies, with the aid of the image features, a given data set into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters of the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared to be merged in the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
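The affinity-plus-partitioning idea can be illustrated compactly; the sketch below builds a pairwise voxel affinity from intensity and distance and hands it to an off-the-shelf spectral clusterer, which (as the abstract notes for normalized cuts) is only feasible for small voxel sets. The kernel widths are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_striatum(intensities, coords, n_clusters=4, sigma_i=0.2, sigma_d=3.0):
    """Graph partitioning of voxels from a precomputed affinity matrix.

    `intensities` has shape (n,), `coords` has shape (n, 3).
    """
    di = intensities[:, None] - intensities[None, :]
    dd = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    affinity = np.exp(-(di / sigma_i) ** 2) * np.exp(-(dd / sigma_d) ** 2)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)
```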
9

Bieth, Marie. "Kinetic analysis and inter-subject registration of brain PET images." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography (PET) imaging is becoming increasingly popular for understanding brain function. This thesis addresses two problems related to PET images: binding potential (BP) computation and pairwise PET image registration. We first investigate the influence of several computational choices on the calculation of binding potential maps in brain PET. Our work uses simulated data and allows us to provide some benchmarks for the choices to make in BP computation, which is an important step towards fully automated, MR-independent BP estimation. We then introduce a new method for pairwise dynamic PET image registration that is derived from the 3D diffeomorphic log-demons algorithm, and demonstrate an improvement over existing methods. We also present a high-resolution [11C]raclopride PET template built from 35 subjects scanned on the High Resolution Research Tomograph. As this was the highest-resolution PET scanner available at the time, this template is, to the best of our knowledge, the best-quality representation of a [11C]raclopride PET image produced to date.
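For orientation, a crude binding-potential estimate from target and reference-region TACs can be obtained with the tissue-ratio method; this sketch is far simpler than the kinetic models benchmarked in the thesis, and the near-equilibrium window is an assumption.

```python
import numpy as np

def bp_ratio(tac_target, tac_reference, t, t_start=30.0):
    """Tissue-ratio binding potential over a late, near-equilibrium window."""
    t = np.asarray(t)
    w = t >= t_start                                   # late frames only
    auc_t = np.trapz(np.asarray(tac_target)[w], t[w])
    auc_r = np.trapz(np.asarray(tac_reference)[w], t[w])
    return auc_t / auc_r - 1.0
```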
10

Wang, Jiali. "Motion Correction Algorithm of Lung Tumors for Respiratory Gated PET Images." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/96.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on quantitation, diagnosis and clinical management of lung tumors. However, PET images collected in discrete gated bins can be significantly affected by noise, as there are fewer activity counts in each bin unless the total PET acquisition time is prolonged; gating methods should therefore be combined with image-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing different registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor in the different gated bins into a single reference bin. Four registration/integration algorithms (Centroid Based, Intensity Based, Rigid Body and Optical Flow registration) were compared, as well as two registration schemes: the Direct Scheme and the Successive Scheme. Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Iterations were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level and computation time. Comparing the tumors before and after correction, the tumor activity values and tumor volumes were closer to those of the static tumors (gold standard). Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
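The simplest of the four compared approaches, centroid-based registration, can be sketched as follows; the interpolation order and boundary handling are illustrative choices.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def centroid_register_bins(gated_bins, reference_index=0):
    """Translate every gated PET bin so its activity centroid matches the
    reference bin, then sum the bins to recover the full counts."""
    ref_com = np.array(center_of_mass(gated_bins[reference_index]))
    out = np.zeros_like(gated_bins[reference_index], dtype=float)
    for vol in gated_bins:
        offset = ref_com - np.array(center_of_mass(vol))
        out += shift(vol, offset, order=1, mode="nearest")
    return out
```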
11

Wang, Jiabin. "Variational Bayes inference based segmentation algorithms for brain PET-CT images." Thesis, The University of Sydney, 2012. https://hdl.handle.net/2123/29251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dual modality PET-CT imaging can provide aligned anatomical (CT) and functional (PET) images in a single scanning session, and has nowadays steadily replaced single modality PET imaging in clinical practice. The enormous number of PET-CT images produced in hospitals is currently analysed almost entirely through visual inspection on a slice-by-slice basis, which requires a high degree of skill and concentration, and is time-consuming, expensive, prone to operator bias, and unsuitable for processing large-scale studies. Computer-aided diagnosis, in which image segmentation is an essential step, would enable doctors and researchers to bypass these issues. However, most medical image segmentation methods are designed for single modality images. In this thesis, the automated segmentation of dual-modality brain PET-CT images has been comprehensively investigated using variational learning techniques. Two novel statistical segmentation algorithms, namely the DE-VEM algorithm and the PA-VEM algorithm, have been proposed to delineate brain PET-CT images into grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). In statistical image segmentation, voxel values are usually characterised by probabilistic models, whose parameters can be estimated using maximum likelihood estimation, and the optimal segmentation result is regarded as the one that maximises the posterior probability. Despite their simplicity, statistical approaches intrinsically suffer from overfitting and local convergence. In variational Bayes inference, statistical model parameters are further assumed to be random variables to improve the model's flexibility. Instead of directly estimating the posterior probability, variational learning techniques use a variational distribution to approximate the posterior probability, and are thus able to overcome the drawback of overfitting. The most widely used variational learning technique is the variational expectation maximisation (VEM) algorithm. As a natural extension of the traditional expectation maximisation (EM) algorithm, the VEM algorithm is also a two-step iterative process and still faces the risk of being trapped in a local maximum and the difficulty of incorporating prior knowledge. Inspired by the fact that global optimisation techniques, such as the genetic algorithm, have been successfully applied to replace the EM algorithm in the maximum-likelihood estimation of probabilistic models, this research combines the differential evolution (DE) algorithm and the VEM algorithm to solve the optimisation problem involved in variational Bayes inference, and thus proposes the DE-VEM algorithm for brain PET-CT image segmentation. In this algorithm, the DE scheme is introduced to search for a global solution and the VEM scheme is employed to perform a local search. Since DE is a population-based global optimisation technique and has proven itself in a variety of applications with good results, the DE-VEM algorithm has the potential to avoid local convergence. The proposed algorithm has been compared with the VEM algorithm and the segmentation function in the statistical parametric mapping (SPM, Version 2008) package on 21 clinical brain PET-CT images. My results show that the DE-VEM algorithm outperforms the other two algorithms and can produce accurate segmentation of brain PET-CT images.
Meanwhile, to incorporate prior anatomical information into the variational learning based brain image segmentation process, a probabilistic brain atlas is generated and used to guide the search for an optimal segmentation result through the VEM iteration. As a result, the probabilistic atlas based VEM (PA-VEM) algorithm is developed to allow each voxel to have an adaptable prior probability of belonging to each class. This algorithm has been compared to the segmentation functions in the SPM8 package and the EMS package, the DE-VEM algorithm, and the DEV algorithm on 21 clinical brain PET-CT images. My results demonstrate that the proposed PA-VEM algorithm can substantially improve the accuracy of segmenting brain PET-CT images. Although this research uses brain PET-CT images as case studies, the theoretical outcomes are generic and can be extended to the segmentation of other dual-modality medical images. Future work in this area should focus mainly on improving the computational efficiency of variational learning based image segmentation approaches.
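Variational Bayes inference for intensity-based tissue classification is available off the shelf; the sketch below uses scikit-learn's BayesianGaussianMixture on paired PET-CT intensities. It reproduces neither the DE global search nor the probabilistic atlas prior of the thesis.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def segment_brain_pet_ct(pet_vol, ct_vol, brain_mask, n_classes=3):
    """Variational-Bayes GMM segmentation of paired PET-CT intensities into
    GM/WM/CSF-like classes within a brain mask."""
    feats = np.stack([pet_vol[brain_mask], ct_vol[brain_mask]], axis=1)
    vbgmm = BayesianGaussianMixture(n_components=n_classes, max_iter=200,
                                    random_state=0).fit(feats)
    labels = np.full(pet_vol.shape, -1, dtype=int)   # -1 outside the mask
    labels[brain_mask] = vbgmm.predict(feats)
    return labels
```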
12

Potesil, Vaclav. "Building computational atlases from databases of whole-body clinical PET/CT images." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical imaging has revolutionized cancer care and its use has grown massively over the past several decades. Images are increasingly stored in large digital image repositories such as hospital Picture Archiving and Communication System, which will hopefully provide a wealth of information on patient conditions and therapy outcomes as cancer diagnosis and therapy moves from 'one size fits all' to more personalized approaches tailored to each particular patient. However, converting the unstructured avalanche of data at thousands of different hospitals into clinically valuable biomarkers and tools requires that the images of different patients can be compared and efficiently searched. Our research aims to develop novel methods to compare whole-body scans of multiple patients; methods which incorporate 'intelligent' prior knowledge of the internal structure of the human body, as opposed to current methods of image registration which mostly rely on matching the voxel intensities and disregard their anatomical meaning. We develop computational methods for accurate and reliable automated localization of anatomical structures in whole-body images, which will help to automate key steps in cancer diagnosis and radiation treatment planning and save expensive clinicians' time while improving the reliability of their decisions. Conventional approaches to determining spatial correspondences between pairs or sets of images in medical imaging typically rely on image registration methods. There have been considerable advances in registration of multiple images of the same patient taken at different time-points, known as longitudinal studies. However, conventional methods, which rely on optimizing certain integral functions of voxel values over the entire image, are unreliable when applied to aligning whole-body images of different patients. Whole-body Computed Tomography (CT) images contain many different anatomical structures whose physical attributes and consequent appearance can be highly variable between patients. This substantial, but normal, variability is further increased by the presence of pathologies such as tumours and non-cancerous diseases, surgical interventions and degenerative changes due to aging as well as different patterns of contrast agent uptake. Conventional registration methods often get trapped in local minima that abound in such images, resulting in unreliable and inaccurate anatomical correspondences. The methods developed in this thesis tackle the problem of inter-patient registration by incorporating prior anatomical knowledge into parts-based graphical models that accurately and reliably localize arbitrary skeletal and soft-tissue anatomical landmarks in whole-body clinical oncology scans. We optimize parts-based graphical models called Pictorial Structures for accurate and reliable landmark localization in CT images and introduce novel methods that replace standard population models by models personalized to the particular patient. We also propose methods that further improve landmark localization while minimizing, as far as possible, the high costs of ground-truth annotation by expert radiologists. We do this by automatically discovering new landmark correspondences from a database of partially annotated images. The performance of the algorithms developed in my thesis is evaluated on a large database of clinical lung cancer PET/CT scans, showing superior accuracy and reliability of landmark localization compared to conventional methods.
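The core scoring idea of a pictorial-structures model (an appearance cost per part plus a deformation "spring" between parts) can be shown with a two-landmark toy example; real models use many parts and efficient inference rather than this brute-force search, and all names here are illustrative.

```python
import numpy as np

def localize_pair(unary_a, unary_b, mean_offset, stiffness=0.1):
    """Pick the pair of positions minimizing appearance costs plus a
    quadratic spring penalty on deviation from the expected offset.

    `unary_a`/`unary_b` are 2D cost maps; non-candidate pixels hold np.inf.
    """
    pos_a = np.argwhere(np.isfinite(unary_a))
    pos_b = np.argwhere(np.isfinite(unary_b))
    best, best_cost = None, np.inf
    for pa in pos_a:
        for pb in pos_b:
            spring = stiffness * np.sum((pb - pa - mean_offset) ** 2)
            cost = unary_a[tuple(pa)] + unary_b[tuple(pb)] + spring
            if cost < best_cost:
                best, best_cost = (pa, pb), cost
    return best, best_cost
```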
13

Pacheco, Edward Flórez. "Quantificação da dinâmica de estruturas em imagens de medicina nuclear na modalidade PET." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-08052012-114807/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nuclear medicine imaging is nowadays one of the main procedures used in health care centers; its great advantage is the capacity to analyze the metabolic behavior of the patient, allowing early diagnoses. This project is based on medical images obtained with the PET (Positron Emission Tomography) modality, which has won wide acceptance. We have developed an integral framework for processing three-dimensional nuclear medicine images of the PET modality, composed of consecutive steps that start with the generation of standard (gold standard) images, using simulated volumes or phantoms of the left ventricle of the heart created in this project, as well as volumes generated with the NCAT-4D software. Poisson quantum noise is then introduced into the whole volume to simulate the noise characteristic of PET images, and an analysis is performed to verify that the noise effectively follows a Poisson distribution. Subsequently, the pre-processing stage is executed using specific filters, such as the median filter, the weighted Gaussian filter, and a filter that combines the concepts of the Anscombe Transform and the pointwise Wiener filter. The segmentation stage, considered the central part of the whole process, is then applied. The segmentation process is based on the Fuzzy Connectedness theory, and four different approaches were implemented for it: the generic algorithm, the LIFO algorithm, the kTetaFOEMS algorithm, and a dynamic-weight algorithm. Since the first three algorithms use specific weights selected by the user, an additional analysis was performed to determine the segmentation weights yielding the most efficient segmentation. Finally, an assessment procedure was used as a metric to quantify three parameters (True Positive, False Positive and Maximum Distance) that establish the level of efficiency and precision of our process and of the project in general. We verified that the implemented algorithms (filters and segmentation algorithms) are fairly robust and achieve optimal results, obtaining, for the simulated left ventricle, TP and FP rates in the order of 98.49 ± 0.27% and 2.19 ± 0.19%, respectively. With the set of procedures and choices made along the processing pipeline, the project concluded with the analysis and quantification of a group of volumes from a real PET exam.
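The generic fuzzy-connectedness algorithm mentioned above admits a short Dijkstra-style implementation in 2D; the Gaussian intensity affinity and its sigma are illustrative assumptions (the thesis uses richer affinities and 3D volumes).

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=0.1):
    """Fuzzy-connectedness map: a path's strength is its weakest affinity,
    and each pixel receives the strongest path from the seed (max-heap)."""
    conn = np.zeros(img.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_str, (i, j) = heapq.heappop(heap)
        strength = -neg_str
        if strength < conn[i, j]:
            continue                              # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]:
                aff = np.exp(-((img[i, j] - img[ni, nj]) / sigma) ** 2)
                cand = min(strength, aff)         # min affinity along path
                if cand > conn[ni, nj]:
                    conn[ni, nj] = cand
                    heapq.heappush(heap, (-cand, (ni, nj)))
    return conn   # threshold this map to obtain the segmented object
```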
14

Desseroit, Marie-Charlotte. "Caractérisation et exploitation de l'hétérogénéité intra-tumorale des images multimodales TDM et TEP." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0129/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography (PET) / computed tomography (CT) multi-modality imaging is the most commonly used imaging technique to diagnose and monitor patients in oncology. PET/CT images provide a global description of tissue density (CT images) and a characterization of tumor metabolic activity (PET images). Further analysis of these images, acquired in clinical routine, has provided additional information regarding patient survival and treatment response. All these new data make it possible to describe the tumor phenotype non-invasively and are generally grouped under the generic name of Radiomics. Nevertheless, the number of shape descriptors and texture features characterising tumors has significantly increased in recent years, and those parameters can be sensitive to the extraction method or to the imaging modality. In this thesis, the variability of parameters computed on PET and CT images was assessed using a test-retest cohort: for each patient, two sets of PET/CT images, acquired under the same conditions a short interval apart, were available. Parameters classified as reliable after this analysis were exploited for survival analysis of patients in the context of non-small cell lung cancer (NSCLC). The construction of a prognostic model from these metrics first allowed the complementarity of PET and CT texture features to be studied. However, this nomogram was generated by simply adding risk factors, not with a robust multi-parametric analysis method. In the second part, the same data were exploited to build a prognostic model using the support vector machine (SVM) algorithm. The models thus generated were then tested on a prospective cohort currently being recruited, to obtain preliminary results regarding the robustness of these nomograms.
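A prognostic SVM of the kind described can be assembled with standard tooling; the sketch below assumes a binary endpoint and standardized radiomic features, both illustrative choices.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_prognostic_svm(radiomic_features, outcome):
    """RBF-kernel SVM on standardized radiomic features; `outcome` is a
    binary label (e.g. survival beyond a median cut-off, an assumption)."""
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=1.0, probability=True))
    model.fit(radiomic_features, outcome)
    return model

# usage sketch: risk = train_prognostic_svm(X_train, y_train).predict_proba(X_new)
```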
15

Pacheco, Edward Flórez. "Análise da dinâmica e quantificação metabólica de imagens de medicina nuclear na modalidade PET/CT." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-24062016-141858/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nuclear Medicine is one of the main medical imaging modalities used nowadays in health centers; its great advantage is its capacity to analyze the metabolic behavior of the patient, resulting in early diagnoses. However, quantification in Nuclear Medicine is known to be complicated by many factors, such as degradations due to attenuation, scattering, reconstruction algorithms and the assumed models. In this context, the goal of this project was to improve the accuracy and the precision of quantification in PET/CT images by means of realistic and well-controlled processes. For this purpose, we developed a framework consisting of a set of consecutively interlinked steps, starting with the simulation of 3D anthropomorphic phantoms. These phantoms were used to generate realistic PET/CT projections with the GATE platform (using Monte Carlo simulation). A 3D image reconstruction was then executed, followed by a filtering process (using the Anscombe/Wiener filter to reduce the Poisson noise characteristic of this type of image) and a segmentation process (based on the Fuzzy Connectedness theory). After defining the region of interest (ROI), the input activity and output response curves required for the compartmental analysis were produced, from which the metabolic quantification of the selected organ or structure was obtained. Finally, real images provided by the Heart Institute (InCor) of the Hospital das Clínicas, Faculty of Medicine, University of São Paulo (HC-FMUSP) were analysed in the same manner. It is therefore concluded that the three-dimensional filtering step using the Anscombe/Wiener filter was decisive and had a high impact on the metabolic quantification process and on other important stages of the whole project.
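The Anscombe/Wiener filtering step can be sketched in a few lines: the Anscombe transform approximately Gaussianizes Poisson noise, a Wiener filter smooths it, and an algebraic inverse maps back. The window size and the use of the simple (rather than exact unbiased) inverse are assumptions.

```python
import numpy as np
from scipy.signal import wiener

def anscombe_wiener(volume):
    """Poisson denoising sketch for a PET volume."""
    v = np.asarray(volume, dtype=float)
    a = 2.0 * np.sqrt(v + 3.0 / 8.0)           # Anscombe transform
    a_f = wiener(a, mysize=3)                   # Wiener filtering
    return (a_f / 2.0) ** 2 - 3.0 / 8.0         # simple algebraic inverse
```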
16

Olsson, Johan. "Automated Method for Generation of Input Function in PET Studies using MVW-PC Images." Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-101163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Kinetic modeling is an approach for extracting quantitative values from PET, using the signal from a reference region or from blood samples as a reference. Since arterial blood sampling is risky, this report presents an automated method, based on MVW-PCA, for deriving the blood data from the images themselves.

The study was performed on clinical PET data from several human brains using the tracer PIB. Two veins were located in an MVW-PC image, and the TACs from the relevant locations were averaged. Finally, a correcting function was calculated.

The curves generated from the image data were very similar to the curves generated from blood samples, with the largest errors occurring at the beginning of the scan.

The method shows potential for generating very good results with further development. One of the strengths of the approach is that it is not limited to a specific tracer or time protocol, since the MVW-PC is chosen based on the weights for the first 60 seconds.
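For illustration, once vein voxels have been located (in the thesis, via the MVW-PC weights), forming the image-derived input function reduces to averaging the corresponding TACs and applying a correction. The sketch below assumes a precomputed boolean vein_mask and an optional correction callable, both hypothetical placeholders for the MVW-PCA machinery described above.

    import numpy as np

    def image_derived_input_function(dynamic_pet, vein_mask, correction=None):
        # dynamic_pet: (T, X, Y, Z) array; vein_mask: boolean (X, Y, Z) vein voxels
        tacs = dynamic_pet[:, vein_mask]      # (T, n_vein_voxels)
        idif = tacs.mean(axis=1)              # average time-activity curve
        if correction is not None:            # e.g. a scale/recovery function
            idif = correction(idif)
        return idif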

17

Razifar, Pasha. "Novel Approaches for Application of Principal Component Analysis on Dynamic PET Images for Improvement of Image Quality and Clinical Diagnosis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Van, Tol Markus Lane. "A graph-based method for segmentation of tumors and lymph nodes in volumetric PET images." Thesis, University of Iowa, 2014. https://ir.uiowa.edu/etd/2290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For radiation treatment of cancer and image-based quantitative assessment of treatment response, target structures like tumors and lymph nodes need to be segmented. In current clinical practice, this is done manually, which is time-consuming and error-prone. To address this issue, a semi-automated graph-based segmentation approach was developed. It was validated with 60 real datasets, each segmented by two users both manually and with the new algorithm, and with 44 scans of a phantom dataset. The results showed a statistically significant improvement in intra- and inter-operator consistency of segmentations, a statistically significant improvement in speed of segmentation, and reasonable accuracy against consensus images and phantoms. As such, the algorithm can be applied in cases that would otherwise use manual segmentation.
19

Wang, Hesheng. "Multimodality Images Analysis for Photodynamic Therapy of Prostate Cancer in Mouse Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1251311096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Cupparo, Ilaria. "Region growing and fuzzy C-means algorithm segmentation for PET images of head-neck tumours." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18020/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this work, performed at the Azienda Ospedaliero-Universitaria in Modena, is the implementation and validation of auto-segmentation methods for PET images of head and neck (H&N) tumors. These auto-segmentation processes are important mostly to overcome the problems of manual segmentation performed by the radiotherapist, namely the contouring time (which can exceed two hours) and the intra-observer and inter-observer variability. Fuzzy C-means (FCM) and Region Growing (RG) algorithms were developed in a MATLAB GUI that allows the user to iteratively choose the different steps necessary for a good segmentation. Pre-processing operations were first applied to improve image quality: a Gaussian filter to remove noise and a morphological opening to make the background uniform. The NEMA IEC body phantom, acquired with four hot spheres and two cold spheres, was first used to test the two methods under known conditions. The accuracy of the processes was evaluated from the difference between calculated and theoretical volumes, which is always null within error; it is largest for the smallest sphere because of the partial volume effect and generally decreases as sphere size increases. Afterwards, 16 PET studies of H&N tumors were used for clinical testing of the algorithms. Efficiency was estimated using two quantitative coefficients: the Dice Similarity Coefficient (DSC) and the Average Hausdorff Distance (AHD). Mean DSC and AHD values, averaged over all cases, are within literature thresholds (0.6 for DSC and about 16 mm for AHD). The contouring time required to segment all slices of each case ranges from a few seconds for FCM to some minutes for RG, always remaining below the manual segmentation time. The results are satisfactory; however, they could be improved by increasing the number of patients and testing the variability among more experts. FCM could also be applied to lymphomas to test its efficiency in segmenting displaced regions.
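The two reported coefficients are standard and easy to reproduce. The following sketch shows mask-based definitions of DSC and AHD computed from distance transforms of the two segmentations; it illustrates the metrics, not the thesis's MATLAB implementation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def dice(a, b):
        # Dice Similarity Coefficient between two boolean masks
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def average_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
        # Mean of the two directed average surface-to-surface distances
        a, b = a.astype(bool), b.astype(bool)
        dist_to_a = distance_transform_edt(~a, sampling=spacing)
        dist_to_b = distance_transform_edt(~b, sampling=spacing)
        return 0.5 * (dist_to_a[b].mean() + dist_to_b[a].mean())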
21

Aoki, Suely Midori. "Uma proposta para avaliação do desempenho de câmaras PET/SPECT." Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-16072013-160821/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography (PET) is a Nuclear Medicine technique that allows the study of the function and metabolism of the human body in many clinical problems, with the help of pharmaceuticals labeled with positron-emitting radionuclides. The most frequent applications occur in oncology, neurology and cardiology, through qualitative and quantitative analysis of the images. Currently, PET is performed in two manners: with dedicated systems, consisting of rings of thousands of detectors operating in coincidence; or with PET/SPECT cameras, formed by two scintillation detectors in coincidence, which are also used for single photon emission computed tomography (SPECT) studies. The development of PET/SPECT systems made studies with fluorodeoxyglucose, [18F]-FDG, a pharmaceutical labeled with 18F (a positron emitter with a physical half-life of 109 minutes), feasible for a large number of clinics and hospitals, mainly because this technology is economically more accessible than dedicated PET. In the present work, a method was developed for characterizing and evaluating a PET/SPECT system with two scintillation detectors and a device with two 137Cs point sources, designed to obtain the transmission images used for photon attenuation correction. It is based on adaptations of the conventional tests for SPECT cameras, described in IAEA TecDoc 602 (1991, International Atomic Energy Agency), and of those for dedicated PET systems, published in NEMA NU 2-1994 (National Electrical Manufacturers Association). The results were organized into a set of testing protocols and tested on an ADAC Laboratories/Philips camera, the Vertex-Plus EPIC/MCD-AC, installed in the Radioisotope Service of InCor - HCFMUSP (Instituto do Coração - Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo). This camera was the first installed in Brazil and is used predominantly for oncological and myocardial viability studies. The radiopharmaceutical used was [18F]-FDG, supplied regularly by IPEN/CNEN-SP (Instituto de Pesquisas Energéticas e Nucleares / Comissão Nacional de Energia Nuclear - São Paulo), and the tomographic reconstruction was performed with the system software, using the standard parameters of the clinical protocols. Point sources suspended in air were used for the transverse spatial resolution measurements, and linear sources immersed in water for the scatter fraction and sensitivity measurements. For the evaluation of sensitivity, uniformity, true event rate, random event rate and dead time of the electronic system, a phantom was constructed specifically for the present work, following the instructions of NEMA NU 2-1994 for dedicated PET systems. The accuracy of the attenuation correction was verified from images of this phantom with three inserts of different densities: water, air and Teflon. The resulting protocols can serve as a guideline for Quality Control and Assurance programs, as well as for the performance evaluation of PET/SPECT systems with two scintillation detectors in coincidence. If implemented by the clinical centers that use this type of equipment, they will enhance the quality and reliability of the resulting images, as well as their quantification.
22

Kumar, Ashnil. "A graph-based approach for the retrieval of multi-modality medical images." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that operate upon manual annotations are not feasible for this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated the CBIR of multi-modality medical images, which are making a monumental impact in healthcare, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieved a high precision when retrieving images on the basis of tumour location within organs. The evaluation of our proposed UI design by user surveys revealed that it improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state-of-the-art by enabling a novel approach for the retrieval of multi-modality medical images.
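As a toy illustration of the kind of representation described above, tumour-organ spatial relationships can be encoded as an attributed graph; the node names, centroids, and the simple Euclidean edge weight below are hypothetical, and the thesis's actual graph construction and similarity algorithm are considerably richer.

    import networkx as nx

    def build_pet_ct_graph(tumours, organs):
        # tumours / organs: dicts mapping a name to a 3D centroid (mm)
        g = nx.Graph()
        for name, c in organs.items():
            g.add_node(name, kind="organ", centroid=c)
        for name, c in tumours.items():
            g.add_node(name, kind="tumour", centroid=c)
            for organ, oc in organs.items():
                # edge weighted by spatial proximity of tumour and organ
                d = sum((u - v) ** 2 for u, v in zip(c, oc)) ** 0.5
                g.add_edge(name, organ, distance=d)
        return g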
23

Xu, Lina [Verfasser]. "Analyzing Tumor Lesions in PET/CT Images Using Deep Learning Methods and Physiological Models / Lina Xu." München : Verlag Dr. Hut, 2019. http://d-nb.info/1181514266/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Florea, Ioana. "Pet parametric imaging of acetylcholine esterase activity without arterial blood sampling in normal subjects and patients with neurovegetative disease." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The development of a method for reliable pixel-level quantification of 11C-MP4A PET images without an arterial input function, in order to study acetylcholinesterase (AChE) activity, is of clinical interest for the diagnosis of dementia and memory disorders. Two groups of subjects participated in the study: a normal control group (4 subjects, NC) and an Alzheimer's disease group (7 subjects, AD). AChE activity can be quantified by using a reference input function, derived from a region with a very high AChE metabolism, together with a three-rate-constant compartmental model. In order to obtain accurate and precise pixel-level estimates of the model parameters in regions of both low and moderate enzymatic expression, a novel method based on the maximum a posteriori (MAP) Bayesian estimator was developed. This method was compared with other published approaches for the quantification of AChE activity: 1) the RLS method, based on linear least squares analysis; 2) the RRE method, based on a simplification of the model structure; 3) the RRE_BF method, which adds a basis-function approach to the RRE procedure; and 4) the R_NLLS method, based on a nonlinear least squares estimator. AChE activity was measured in terms of k3, the rate constant for hydrolysis of 11C-MP4A. The striatum (basal ganglia) was used as the reference region because of its very high AChE activity. Parametric images of k3 obtained with MAP in areas with different levels of AChE activity were compared between groups and against the k3 estimates obtained with the other mathematical approaches. Despite the small number of subjects, all five methods (RLS, RRE, RRE_BF, R_NLLS, MAP) used to generate k3 parametric images were able to detect a reduction of AChE activity in the neocortex of AD patients with respect to NC. However, only MAP allows k3 to be quantified in regions with moderate enzyme expression such as the thalamus and brainstem. The different performance of the five estimation methods has an impact on the statistical significance of the k3 differences: in fact, only the MAP method shows significant differences in the thalamus and brainstem, in good agreement with published studies.
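The core of the MAP approach is to add a prior penalty to the usual weighted least-squares objective, shrinking noisy pixel-level estimates toward population values. A minimal sketch, assuming a generic model(t, theta) returning the predicted TAC and a Gaussian prior N(mu, cov) obtained, for instance, from ROI-level fits (all placeholders, not the thesis's exact formulation):

    import numpy as np
    from scipy.optimize import minimize

    def map_fit(t, tac, model, theta0, mu, cov, weights):
        # MAP estimate: weighted residual sum of squares plus a Gaussian prior
        cov_inv = np.linalg.inv(cov)

        def neg_log_posterior(theta):
            r = tac - model(t, theta)
            return np.sum(weights * r ** 2) + (theta - mu) @ cov_inv @ (theta - mu)

        return minimize(neg_log_posterior, theta0, method="Nelder-Mead").x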
25

Rajkumar, Ravichandran [Verfasser], Irene [Akademischer Betreuer] Neuner, and N. Jon [Akademischer Betreuer] Shah. "Simultaneous trimodal MR/PET/EEG imaging : a study of the attenuation effect of EEG caps on PET images and a comparison of EEG microstates with resting state fMRI and FDG-PET measures / Ravichandran Rajkumar ; Irene Neuner, Nadim Joni Shah." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1225401666/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Tixier, Florent. "Caractérisation de l'hétérogénéité tumorale sur des images issues de la tomographie par émission de positons (TEP)." Phd thesis, Université de Bretagne occidentale - Brest, 2013. http://tel.archives-ouvertes.fr/tel-00991783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cancer is responsible for 7.6 million deaths worldwide every year, making the improvement of treatment a major public health challenge. It has been shown that combining early diagnosis with effective treatment has a significant impact on patient survival. Numerous prognostic factors of survival have been identified and are currently used in clinical routine. Diagnosis is often carried out partly with Positron Emission Tomography (PET) imaging, which has proven to be a very powerful tool for identifying tumors and metastases in a number of cancer models. PET is a functional imaging modality and therefore has the potential to provide information related to the underlying biology of cancers. However, because of its low spatial resolution, it had rarely been used for this purpose. This thesis studied quantitative parameters that can be extracted from these images, more specifically those allowing the characterization of intra-tumoral heterogeneity. We identified a set of texture-analysis parameters that are reproducible, robust to partial volume effects and to the segmentation method, and likely related to tumor physiology. We also demonstrated the potential of these parameters, extracted from diagnostic images, to contribute to the prediction of therapeutic response and to serve as prognostic factors. These new quantitative indices could, in the relatively short term, complement the reference factors commonly used today in oncology for the therapeutic management of patients.
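Texture parameters of the kind studied here are typically derived from grey-level co-occurrence matrices. The sketch below computes a few common Haralick-style descriptors on one 2D tumour slice; the quantization into 32 levels and the chosen offsets are illustrative assumptions, not the thesis's exact protocol.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(roi_slice, levels=32):
        # Quantize intensities, build a co-occurrence matrix, extract descriptors
        bins = np.linspace(roi_slice.min(), roi_slice.max(), levels)
        q = (np.digitize(roi_slice, bins) - 1).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")}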
28

Andersson, Jonathan. "Methods for automatic analysis of glucose uptake in adipose tissue using quantitative PET/MRI data." Thesis, Uppsala universitet, Enheten för radiologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Brown adipose tissue (BAT) is the main tissue involved in non-shivering heat production. A greater understanding of BAT could possibly lead to new ways of preventing and treating obesity and type 2 diabetes. The increasing prevalence of these conditions, and the problems they cause for society and individuals, make this subject important to study. An ongoing study performed at the Turku University Hospital uses images acquired with PET/MRI and 18F-FDG as the tracer. Scans are performed on sedentary and athlete subjects at normal room temperature and during cold stimulation. Sedentary subjects then undergo scanning during cold stimulation again after a six-week exercise training intervention. This degree project used images from this study. The objective was to examine methods to automatically and objectively quantify parameters relevant for the activation of BAT in combined PET/MRI data. A secondary goal was to create images showing glucose uptake changes in subjects from images taken at different times. Parameters were quantified in adipose tissue directly without registration (image matching), and for neck scans also after registration. Results for the first three subjects who completed the study are presented. Larger registration errors were encountered near moving organs and in regions with less information. The creation of images showing changes in glucose uptake seems to work well for the neck scans, and reasonably well for other sub-volumes. These images can be useful for the identification of BAT; examples are shown in the report.
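Once the two acquisitions are in the same space (registration being the hard part addressed in the project), a change image is a voxelwise difference. A minimal sketch, assuming already-registered SUV volumes and a hypothetical tissue_mask:

    import numpy as np

    def uptake_change_map(suv_before, suv_after, tissue_mask):
        # Absolute and relative glucose-uptake change between registered volumes
        delta = np.where(tissue_mask, suv_after - suv_before, 0.0)
        rel = np.where(tissue_mask & (suv_before > 0),
                       delta / np.maximum(suv_before, 1e-6), 0.0)
        return delta, rel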
29

Mi, Hongmei. "PDE modeling and feature selection : prediction of tumor evolution and patient outcome in therapeutic follow-up with FDG-PET images." Rouen, 2015. http://www.theses.fr/2015ROUES005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adaptive radiotherapy has the potential to improve the patient's outcome through a treatment plan re-optimized early or during the course of treatment, taking individual specificities into account. Predictive studies in the patient's therapeutic follow-up could inform how to adapt treatment to each individual patient. In this thesis, we conduct two predictive studies using the patient's positron emission tomography (PET) imaging. The first study aims to predict tumor evolution during radiotherapy. We propose a patient-specific tumor growth model derived from the advection-reaction equation, composed of three terms each representing a biological process, where the model parameters are estimated from the patient's preceding sequential PET images. The second part of the thesis focuses on the case where frequent imaging of the tumor is not available. We therefore conduct another study whose objective is to select predictive factors, among PET-based and clinical characteristics, for the patient's outcome after treatment. Our second contribution is thus a wrapper feature selection method that searches forward in a hierarchical feature-subset space and evaluates feature subsets by their prediction performance, using a support vector machine (SVM) as the classifier. For both predictive studies, promising results are obtained on real-world cancer-patient datasets.
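Numerically, growth models of this family advance a cell-density map with explicit finite differences. The sketch below shows one Euler step of a two-term reaction-diffusion simplification (diffusion plus logistic proliferation); the thesis's model has three terms, including advection, and patient-estimated parameters, so this only illustrates the mechanics. Stability of the explicit scheme requires dt <= dx^2 / (4 D) in 2D.

    import numpy as np

    def grow_step(c, dt=0.1, dx=1.0, D=0.05, rho=0.2):
        # One explicit Euler step of dc/dt = D * laplacian(c) + rho * c * (1 - c),
        # with c a 2D tumor cell density in [0, 1] and periodic boundaries
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx ** 2
        return c + dt * (D * lap + rho * c * (1.0 - c))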
30

Millardet, Maël. "Amélioration de la quantification des images TEP à l'yttrium 90." Thesis, Ecole centrale de Nantes, 2022. https://tel.archives-ouvertes.fr/tel-03871632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Yttrium-90 PET imaging is becoming increasingly popular. However, the probability that the decay of an yttrium-90 nucleus leads to the emission of a positron is only 3.2 × 10⁻⁵, so the reconstructed images are characterised by a high level of noise, as well as a positive bias in low-activity regions. To correct these problems, classical methods use penalised algorithms or allow negative values in the image. However, a study comparing and combining these different methods in the specific context of yttrium-90 was still missing at the beginning of this thesis, which therefore aims to fill this gap. Unfortunately, the methods allowing negative values cannot be used directly in a dosimetric study. This thesis therefore starts by proposing a new post-processing method that removes the negative values from the images while preserving their mean values as locally as possible. A complete multi-objective analysis of the different methods is then presented. The thesis ends by laying the foundations of what could become an algorithm providing a set of adequate reconstruction hyperparameters from the sinograms alone.
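One simple way to picture the post-processing idea (though not necessarily the algorithm developed in the thesis) is to repeatedly clip negative voxels to zero and push the clipped mass onto a small neighbourhood, so that local means are approximately preserved:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def remove_negative_values(img, n_iter=100):
        # Zero out negative voxels while redistributing their mass locally
        out = img.astype(float).copy()
        for _ in range(n_iter):
            neg = np.minimum(out, 0.0)
            if not np.any(neg < 0):
                break
            out -= neg                                  # clip negatives to zero...
            out += uniform_filter(neg, size=3,          # ...and spread the deficit
                                  mode="nearest")       # over a 3x3x3 neighbourhood
        return out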
31

Call, Daniel M. (Daniel Marcus) 1973. "A spectral analysis method to quantify the relative contribution of different length scales to heterogeneity in PET images of pulmonary function." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hami, Abdoul-Azize Rihab. "Simulation des processus radiobiologiques basés sur l'imagerie pour l'évaluation de schémas thérapeutiques individualisés en radiothérapie." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radiotherapy is one of the principal cancer treatments. Despite its intensive use in clinical practice, its effectiveness depends on several factors, and several studies have shown that the tumor response to radiotherapy differs from one patient to another. The tumor response is influenced by factors such as hypoxia and the multiple interactions between the tumor microenvironment and healthy cells. Five major biological concepts, called the "5 Rs", summarize these interactions: reoxygenation, DNA damage repair, cell-cycle redistribution, intrinsic cellular radiosensitivity and cellular repopulation. The optimal treatment strategy must take these "5 Rs" into account. In this study, we first proposed an oxygenation modeling approach that can be regarded as a treatment optimization process in the absence of oxygen data. We used a multi-scale model to predict the effects of radiotherapy on tumor growth, based on information extracted from positron emission tomography (PET) images. We then incorporated the "5 Rs" of radiotherapy into our model to predict the effects of radiation on tumor growth. Finally, we presented a study of the effect of different fractionation schemes on the tumor response to radiotherapy.
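The standard building block for comparing fractionation schemes is the linear-quadratic (LQ) cell-survival model; the thesis's multi-scale simulation layers the "5 Rs" on top of such a dose-response term. A minimal sketch with illustrative (not patient-specific) radiosensitivity parameters:

    import numpy as np

    def lq_survival(dose_per_fraction, n_fractions, alpha=0.35, beta=0.035):
        # Surviving fraction after a fractionated schedule
        # (alpha in 1/Gy, beta in 1/Gy^2; values here are illustrative)
        d = dose_per_fraction
        return np.exp(-n_fractions * (alpha * d + beta * d ** 2))

    # Compare two schedules with the same 60 Gy total dose
    print(lq_survival(2.0, 30), lq_survival(3.0, 20))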
33

Tomasi, Giampaolo. "Bayesian and population approaches for pixel-wise quantification of positron emission Tomography images: ridge regression and Global-Two-Stage." Doctoral thesis, Università degli studi di Padova, 2007. http://hdl.handle.net/11577/3425164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
PET (Positron Emission Tomography) is a technique in which a radioactive tracer that decays by positron emission is injected into the subject's body. Through complex instrumentation and sophisticated reconstruction algorithms, it is then possible to compute the distribution of the tracer over time in the area of interest, which is the desired outcome of the measurement. After reconstruction the image is ready for quantitative analysis, necessary to derive the so-called kinetic parameters, which are relevant in that they have a physiological meaning. This analysis may be performed either at ROI level (Region Of Interest, an anatomically homogeneous region such as the cerebellum or thalamus) or at pixel level. In the latter scenario, kinetic parameters are computed separately for each of the hundreds of thousands of pixels of the image, generating the so-called parametric images. Pixel-by-pixel analysis suffers from the high noise level of pixel TACs (Time Activity Curves, i.e. the radioactive concentration as a function of time), which may give rise to unreliable estimates of the kinetic parameters or to non-convergence of the estimation algorithms. Parametric maps, however, are of paramount importance because of their high spatial resolution: phenomena such as a lesion in a cerebral structure or the presence of a small tumoral mass may be invisible in ROI analysis but detectable, even on simple visual inspection, through pixel analysis. The aim of this thesis was to develop fast methods for the generation of more reliable parametric maps. A method already present in the literature, known as ridge regression (RR), was comprehensively studied and developed; in addition, a technique completely new to the field of PET, Global-Two-Stage (GTS), belonging to the family of population approaches, was proposed and tested. The basic idea of these methodologies, which makes them part of the family of Bayesian approaches, is, loosely speaking, to employ in the parameter estimation for a given pixel not only the TAC of that pixel but also the information deriving from the other pixels, in order to obtain a global regularizing effect, penalizing, for instance, the noisiest TACs. The analysis was carried out first on simulated data, because computing indices that quantify the goodness of the final estimates, such as bias and root mean square error (RMSE), requires knowledge of the "true" parameters, so the data necessarily have to be simulated. The performance of the proposed Bayesian algorithms was compared with the appropriate "gold standard", the most used estimation method for the tracer under examination. Attention was then turned to a rich real dataset of the tracer [11C]PK11195, widely used for the study of pathologies such as Alzheimer's and Huntington's disease, as it is linked to the overall level of neuroinflammation. The analysis of simulated data revealed that RR and GTS always decreased the RMSE, leaving the bias substantially unchanged; the improvements clearly depend on the tracer, the noise level, and the specific kinetic parameter considered. The study of the [11C]PK11195 dataset showed that RR and GTS produce much more regular parametric maps than SRTM, the "gold standard" used for comparison. The proposed approaches (RR and GTS) also yielded excellent results in terms of the ability to differentiate between healthy and ill subjects on the basis of the maps of the kinetic parameter BP (Binding Potential): this clearly has a significant diagnostic impact, as more reliable methods (i.e. with higher sensitivity and specificity) are needed for daily application in clinical practice. In conclusion, Ridge Regression and Global-Two-Stage are precious instruments for the improvement of parametric maps: both methodologies can be applied with virtually any tracer and model, provided that initial estimates can be computed through standard weighted least squares, and they therefore have a wide range of applicability.
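The essence of pixelwise ridge regression is a closed-form penalized weighted least-squares estimate that shrinks each pixel toward a prior value (e.g. a ROI-level fit). A minimal sketch for a linearized kinetic model, with the design matrix, weights, and prior as assumed inputs:

    import numpy as np

    def ridge_pixel_fit(X, y, theta_prior, lam, weights=None):
        # X: (n_frames, p) design matrix; y: one pixel's TAC; lam: ridge strength
        w = np.ones(len(y)) if weights is None else weights
        A = X.T @ (w[:, None] * X) + lam * np.eye(X.shape[1])
        b = X.T @ (w * y) + lam * theta_prior
        return np.linalg.solve(A, b)   # argmin ||y - X t||^2_w + lam ||t - prior||^2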
34

Jaouen, Vincent. "Traitement des images multicomposantes par EDP : application à l'imagerie TEP dynamique." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR3303/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents several methodological contributions to the processing of vector-valued (multicomponent) images, with dynamic positron emission tomography (dPET) as its target application. dPET is a functional imaging modality that produces highly degraded images composed of successive temporal acquisitions. Vector-valued images often present some level of redundancy or complementarity of information along the channels, which can be exploited to enhance processing results. Our first contribution exploits such properties for the robust segmentation of target volumes with deformable models: we propose a new external force field that guides deformable models toward the vector edges of the regions to be delineated. Our second contribution deals with the restoration of such images to further facilitate their analysis: we propose a new partial differential equation-based approach that enhances the signal-to-noise ratio of degraded images while sharpening their edges. Applied to dPET imaging, we show to what extent our methodological contributions can help solve an open problem in neuroscience: the noninvasive quantification of a neuroinflammation radiotracer.
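A crude way to convey the coupling idea, under the assumption of a Perona-Malik-type diffusivity shared across temporal frames (the thesis's PDE is different and more elaborate): edges detected jointly in all channels throttle the smoothing in every channel.

    import numpy as np

    def coupled_diffusion_step(frames, dt=0.1, kappa=0.05):
        # frames: (T, X, Y) dynamic sequence; one explicit diffusion step
        gx = np.gradient(frames, axis=1)
        gy = np.gradient(frames, axis=2)
        joint = np.sqrt((gx ** 2 + gy ** 2).sum(axis=0))   # joint gradient magnitude
        g = 1.0 / (1.0 + (joint / kappa) ** 2)             # shared edge-stopping map
        out = np.empty_like(frames)
        for t, f in enumerate(frames):
            fx, fy = np.gradient(f, axis=0), np.gradient(f, axis=1)
            div = np.gradient(g * fx, axis=0) + np.gradient(g * fy, axis=1)
            out[t] = f + dt * div                          # f + dt * div(g grad f)
        return out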
35

Gaillard, Maxence. "Les images du cerveau : epistémologie de l'usage de l'imagerie cérébrale en sciences cognitives." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
At its most general level, this dissertation in the epistemology and history of cognitive science is devoted to the underestimated problem of the scientific instrument, and focuses in particular on the development of functional brain imaging techniques and their introduction into the cognitive domain during the 1980s and 1990s. This choice is motivated by the novelty and importance of this new instrument, whose emergence is regularly compared to that of the telescope at the time of the seventeenth-century Scientific Revolution. The first part is devoted to a general analysis of the scientific instrument and the essential problems it raises; a number of hypotheses are formulated in response, and their theoretical stakes are examined. The second part defends a historical interpretation of the emergence of the two functional imaging technologies of positron emission tomography and functional magnetic resonance imaging. By examining in detail aspects of the invention and diffusion of these techniques, it shows in particular the entanglement of the validation procedures of instruments with the various scientific and societal mechanisms driving their development and use. In the light of the theoretical analyses of the first part, and on the basis of the historical interpretation of the second, the third part examines the implications of these new imaging technologies for the evolution of cognitive science and for the uptake of its results in other domains, whether scientific, technological or practical. In both respects, it defends the general thesis that the introduction of brain imaging acts much less as a factor resolving specific questions than as a factor shifting the problematics and the theoretical and societal impact of cognitive science.
36

Zbib, Hiba. "Segmentation d'images TEP dynamiques par classification spectrale automatique et déterministe." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR3317/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Quantification of dynamic PET images is a powerful tool for the in vivo study of tissue function. However, this quantification requires the definition of regions of interest for extracting the time-activity curves. These regions are usually identified manually by an expert operator, which makes them subjective. As a result, there is growing interest in the development of clustering methods that aim to separate the dynamic PET sequence into functional regions based on the temporal profiles of the voxels. In this thesis, a spectral clustering method for the temporal profiles of voxels, which has the advantage of handling nonlinear clusters, is developed. The method is then extended to make it usable in clinical routine. First, a global search procedure is used to locate, in a deterministic way, the optimal cluster centroids in the projected data. Second, an unsupervised clustering-quality criterion is proposed and optimised by simulated annealing to automatically estimate the scale parameter and the temporal weighting factors involved in the method. The proposed automatic and deterministic spectral clustering method is validated on simulated and real images and compared with two other segmentation methods from the literature. It improves the definition of the regions and appears to be a promising pre-processing tool before any quantification or arterial input function estimation task.
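Off-the-shelf spectral clustering already captures the first half of the pipeline. The sketch below clusters voxel TACs with a fixed RBF scale parameter gamma, whereas the thesis estimates the scale and temporal weights automatically; note also the O(n^2) affinity matrix, so voxels should be subsampled for large volumes.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def cluster_tacs(dynamic_pet, brain_mask, n_clusters=4, gamma=1.0):
        # dynamic_pet: (T, X, Y, Z); cluster voxels by their temporal profiles
        tacs = dynamic_pet[:, brain_mask].T          # (n_voxels, T)
        sc = SpectralClustering(n_clusters=n_clusters, affinity="rbf",
                                gamma=gamma, assign_labels="kmeans")
        labels = np.full(brain_mask.shape, -1)       # -1 outside the mask
        labels[brain_mask] = sc.fit_predict(tacs)
        return labels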
37

Bieth, Marie [Verfasser], Bjoern Holger [Akademischer Betreuer] [Gutachter] Menze, and Markus [Gutachter] Schwaiger. "Localising Anatomical Structures and Quantifying Tumour Burden in PET/CT Images using Machine Learning / Marie Bieth ; Gutachter: Björn Menze, Markus Schwaiger ; Betreuer: Björn Menze." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1147968209/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Roman, Jimenez Geoffrey. "Analyse des images de tomographie par émission de positons pour la prédiction de récidive du cancer du col de l'utérus." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S037/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with the prediction of recurrence in the context of cervical cancer radiotherapy. The objective was to analyze 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) images in order to extract quantitative parameters statistically correlated with recurrence events. Six studies were performed to address the issues raised by 18F-FDG PET image analysis, such as the presence of bladder uptake artifacts, the isolation of the tumor metabolism, and the evaluation of the signal during treatment. The statistical analyses considered parameters reflecting the intensity, shape and texture of the tumor metabolism before and during treatment. From this work, the pre-treatment metabolic tumor volume and the per-treatment total lesion glycolysis emerge as the most promising parameters for predicting cervical cancer recurrence. In addition, combining these parameters with other shape and texture features, using supervised machine-learning methods or more classical regression models, increased the ability to predict recurrence events.
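The two highlighted parameters are simple to state. A minimal sketch, assuming an SUV volume and a fixed 40%-of-SUVmax segmentation threshold (one common convention; the thesis studies the impact of segmentation choices rather than prescribing this one):

    import numpy as np

    def mtv_tlg(suv, voxel_volume_ml, threshold_frac=0.4):
        # Metabolic Tumor Volume (mL) and Total Lesion Glycolysis (SUVmean x MTV)
        mask = suv >= threshold_frac * suv.max()
        mtv = mask.sum() * voxel_volume_ml
        tlg = suv[mask].mean() * mtv
        return mtv, tlg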
39

Zheng, Yiran. "CT-PET Image Fusion and PET Image Segmentation for Radiation Therapy." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1283542509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Garali, Imène. "Aide au diagnostic de la maladie d’Alzheimer par des techniques de sélection d’attributs pertinents dans des images cérébrales fonctionnelles obtenues par tomographie par émission de positons au 18FDG." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4364/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Our research focuses on a novel computer-aided diagnosis technique for brain Positron Emission Tomography (PET) images, which are processed and analyzed quantitatively in order to better characterize and extract meaningful information for medical diagnosis. Our contribution is a new method for classifying brain 18FDG PET images. Brain images are first segmented into 116 regions of interest (ROI) using an atlas. After computing statistical features (mean, standard deviation, skewness, kurtosis and entropy) on the histogram of each region, we define a Separation Power Factor (SPF) associated with each region, which quantifies the ability of that region to separate neurodegenerative diseases such as Alzheimer's disease (AD) from healthy control (HC) brain images. Based on the area under ROC curves, the relevance of each anatomical region for separating the two groups was established as a function of the number of parameters in its feature vector. We then propose a novel approach for selecting the most relevant regions, named the "combination matrix", based on a combinatorial scheme in which each region is characterized by the different combinations of its feature vector. The motivation of this work is to identify the best regional features for separating HC from AD patients, in order to reduce the number of features required to achieve an acceptable classification result while reducing the computational time required for the classification task. Feeding the most relevant regions (in terms of subject-separation power) into a supervised SVM classifier allowed us to obtain, despite the dimensionality reduction, a better classification rate than that obtained using all the regions.
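The per-region feature vector described above is straightforward to reproduce; a sketch for one region's voxel values (the 64-bin histogram used for the entropy is an assumption). Vectors from the most discriminative regions would then feed a classifier such as sklearn.svm.SVC, mirroring the SVM stage of the thesis.

    import numpy as np
    from scipy import stats

    def roi_feature_vector(voxels, bins=64):
        # First four moments plus histogram entropy for one anatomical region
        hist, _ = np.histogram(voxels, bins=bins)
        p = hist[hist > 0] / hist.sum()
        entropy = -np.sum(p * np.log2(p))
        return np.array([voxels.mean(), voxels.std(),
                         stats.skew(voxels), stats.kurtosis(voxels), entropy])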
41

Dey, Sounak. "Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The deluge of visual content on the Internet, from user-generated content to commercial image collections, motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned with emerging trends for visual content consumption on mobile touch-screen devices, for which gestural interactions such as sketching are a natural alternative to textual input. This thesis presents several contributions to the SBIR literature. First, we propose a cross-modal learning framework that maps both sketches and text into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches/text and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique training schemes. The deeply learned embedding is shown to yield state-of-the-art retrieval performance on several SBIR benchmarks. Second, we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities into a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of whether different objects co-occur in the training set. We validate the performance of our approach on standard single- and multi-object datasets, showing state-of-the-art performance on every SBIR dataset. Third, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity of moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, consisting of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, unlike those included in existing datasets, which can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos in a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance that significantly outperforms the state of the art on existing datasets can already be achieved using a reduced version of our model. We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset.
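To make the joint-embedding idea concrete, the following is a minimal sketch (not the thesis code; all layer sizes and names are illustrative assumptions) of how a sketch branch and a photo branch can be trained into a shared space with a triplet loss, so that a sketch query retrieves its matching photo by nearest-neighbour search:

```python
# Hedged sketch: two CNN branches trained into one embedding space with a
# triplet loss, in the spirit of the multi-branch framework described above.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Toy CNN encoder mapping an image to a unit-norm embedding."""
    def __init__(self, in_ch, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)  # unit sphere

sketch_enc, photo_enc = Branch(in_ch=1), Branch(in_ch=3)  # two branches, one space
loss_fn = nn.TripletMarginLoss(margin=0.2)

anchor = sketch_enc(torch.randn(8, 1, 64, 64))    # sketch queries
positive = photo_enc(torch.randn(8, 3, 64, 64))   # matching photos
negative = photo_enc(torch.randn(8, 3, 64, 64))   # non-matching photos
loss = loss_fn(anchor, positive, negative)
loss.backward()                                   # pulls matches together
```

At query time, photos would be ranked by the distance of their embeddings to the sketch embedding, which is what makes the direct cross-modal search possible.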
42

Gu, Wei Q. "Automated tracer-independent MRI/PET image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29596.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Giovagnoli, Debora. "Image reconstruction for three-gamma PET imaging." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we present three-gamma imaging, where the acquisition relies on a combined beta+ and gamma emitter. The rationale of 3-gamma imaging is that the detection of the third gamma can provide better localization of the annihilation point, thus enabling higher image quality and a lower dose delivered to the patient. We present the 3-gamma system XEMIS2, developed at Subatech, Nantes, a liquid xenon detector suitable for 3-gamma imaging thanks to its stopping power, its scintillation characteristics and its continuous geometry. The principle of 3-gamma image reconstruction is the intersection of a LOR, obtained from the coincidence photons, with a Compton cone determined by the third gamma. The idea is to find the LOR/cone intersection and use it to locate the most probable annihilation position on the line, analogous to the arrival-time difference in TOF-PET. We present a complete GATE simulation study of two phantoms (a NEMA-like phantom and Digimouse) to assess the improvements of 3-gamma image reconstruction over conventional PET, and we study positron range correction, which is important for our beta+/gamma emitter, 44Sc.
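The geometric core of that reconstruction can be made concrete with a small sketch. Under simplifying assumptions (a perfect Compton cone, no measurement error; all names are illustrative, this is not the XEMIS2 code), intersecting the LOR with the cone reduces to a quadratic in the line parameter:

```python
# Hedged sketch of LOR/Compton-cone intersection for 3-gamma localization.
import numpy as np

def lor_cone_intersection(a, d, apex, axis, theta):
    """Points a + t*d on the LOR lying on the cone with the given apex,
    axis and half-angle theta (radians)."""
    d = d / np.linalg.norm(d)
    axis = axis / np.linalg.norm(axis)
    w = a - apex
    cos2 = np.cos(theta) ** 2
    # substitute p(t) = a + t*d into ((p-apex).axis)^2 = cos^2(theta)*|p-apex|^2
    A = (d @ axis) ** 2 - cos2
    B = 2.0 * ((w @ axis) * (d @ axis) - cos2 * (w @ d))
    C = (w @ axis) ** 2 - cos2 * (w @ w)
    if abs(A) < 1e-12:                 # degenerate: LOR parallel to the cone
        return []
    disc = B * B - 4.0 * A * C
    if disc < 0:                       # cone misses the LOR entirely
        return []
    ts = [(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)]
    pts = [a + t * d for t in ts]
    # keep only the physical nappe (annihilation in front of the scatter point)
    return [p for p in pts if (p - apex) @ axis >= 0]

# toy check: horizontal LOR, cone apex at (0,0,10) looking straight down
pts = lor_cone_intersection(np.array([-5.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0]),
                            np.pi / 4)
print(pts)   # -> (10, 0, 0) and (-10, 0, 0): the two crossings of the 45-degree cone
```

In practice the intersection (or nearest point, when the cone and line do not quite meet) then weights the most probable annihilation position along the LOR, much as the arrival-time difference does in TOF-PET.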
44

Lee, Ki Sung. "Pragmatic image reconstruction for high resolution PET scanners /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Muñoz, Pujol Xavier 1976. "Image segmentation integrating colour, texture and boundary information." Doctoral thesis, Universitat de Girona, 2003. http://hdl.handle.net/10803/7719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image segmentation is an important research area in computer vision, and many segmentation methods have been proposed. However, elemental segmentation techniques based on boundary or region approaches alone often fail to produce accurate results. Hence, in the last few years there has been a tendency towards the integration of both techniques in order to improve the results by taking into account the complementary nature of such information. This thesis proposes a solution to image segmentation that integrates region and boundary information. Moreover, the method is extended to texture and colour texture segmentation.
An exhaustive analysis of image segmentation techniques which integrate region and boundary information is carried out. The main strategies to perform the integration are identified and a classification of these approaches is proposed, so that the most relevant proposals are sorted and grouped under their corresponding approach. Moreover, the characteristics of these strategies, as well as the general lack of attention given to texture, are noted. The discussion of these aspects was the origin of all the work developed in this thesis, giving rise to two basic conclusions: first, the possibility of fusing several approaches to the integration of both information sources, and second, the necessity of a specific treatment for textured images.
Next, an unsupervised segmentation strategy is proposed which integrates region and boundary information and incorporates three different approaches identified in the previous review. Specifically, the proposed image segmentation method combines seed placement guidance, control of the decision criterion and boundary refinement. The method is composed of two basic stages: initialisation and segmentation. In the first stage, the main contours of the image are used to identify the different regions present in the image and to adequately place a seed in each one in order to statistically model the region. The segmentation stage is then performed based on the active region model, which allows us to take region and boundary information into account in order to segment the whole image. Specifically, regions shrink and expand, competing for pixels, guided by the optimisation of an energy function that ensures homogeneity inside regions and the presence of real edges at boundaries (a toy sketch of this competition is given after the abstract). Furthermore, with the aim of imitating the human visual system when a person slowly approaches a distant object, a pyramidal structure is considered. Hence, the method has been designed on a pyramidal representation which allows us to refine the region boundaries from coarse to fine resolution, ensuring noise robustness as well as computational efficiency.
The proposed segmentation strategy is then adapted to solve the problem of texture and colour texture segmentation. First, the strategy is extended to texture segmentation, which involves some considerations such as region modelling and the extraction of texture boundary information. Next, a method to integrate colour and textural properties is proposed, based on the use of texture descriptors and the estimation of colour behaviour with non-parametric density estimation techniques. Hence, the proposed segmentation strategy takes both colour and textural properties into account.
Finally, the proposed image segmentation strategy is objectively evaluated and compared with other relevant algorithms corresponding to the different strategies of region and boundary integration. Moreover, an evaluation of the results obtained on colour texture segmentation is performed. Furthermore, results on a wide set of real images are shown and discussed.
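As a rough illustration of the region/boundary energy described above, here is a minimal, assumption-laden sketch (not the thesis implementation): regions grown from seeds claim pixels in cheapest-first order, where the cost of reaching a pixel mixes a homogeneity term with an edge-strength term.

```python
# Hedged sketch of seeded region competition driven by a region + boundary energy.
import heapq
import numpy as np

def grow_regions(image, edges, seeds, alpha=0.7):
    """Dijkstra-style competition: each pixel is claimed by the region that
    reaches it along the cheapest path, where stepping onto pixel p costs
    alpha*|I(p)-mu_k|/sigma (homogeneity) + (1-alpha)*edges[p] (boundary)."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)
    mu = [float(image[r, c]) for r, c in seeds]   # per-region intensity model
    sigma = float(image.std()) + 1e-6             # crude global scale
    heap = [(0.0, r, c, k) for k, (r, c) in enumerate(seeds)]
    heapq.heapify(heap)
    while heap:
        cost, r, c, k = heapq.heappop(heap)
        if labels[r, c] >= 0:
            continue                              # already won by a cheaper region
        labels[r, c] = k
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] < 0:
                step = (alpha * abs(float(image[rr, cc]) - mu[k]) / sigma
                        + (1.0 - alpha) * float(edges[rr, cc]))
                heapq.heappush(heap, (cost + step, rr, cc, k))
    return labels

# toy: two flat regions separated by a step edge at column 8
img = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
edg = np.zeros_like(img); edg[:, 7:9] = 1.0       # boundary evidence at the step
print(grow_regions(img, edg, seeds=[(4, 2), (4, 13)]))
```

The real method optimises the energy with active regions on a pyramid rather than a single greedy pass, but the interplay of the two terms is the same: uniform interiors are cheap to absorb, while crossing strong edges is expensive.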
46

Caresia, Aróztegui Ana Paula. "PET/TC en el cáncer de ovario: Estadificación inicial, valoración de la resecabilidad primaria y la respuesta a la quimioterapia neoadyuvante." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/403771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ovarian cancer has a poor prognosis, because the majority of patients have advanced disease at the time of diagnosis. Although 18F-FDG PET/CT is widely used in oncology, the main guidelines for ovarian cancer (National Comprehensive Cancer Network, European Society of Medical Oncology and Sociedad Española de Ginecología y Obstetricia) clearly indicate 18F-FDG PET/CT only in recurrence. The Society of Gynecologic Oncology and the National Comprehensive Cancer Network recently included PET/CT as an option for the initial staging of locally advanced disease. We compared PET/CT and CT in the initial staging of ovarian cancer. PET/CT detected distant metastases in more patients than CT (40.74% vs. 11.11%). The most frequent locations of distant metastases detected by PET/CT were the supradiaphragmatic lymph nodes and the pleura. PET/CT changed FIGO staging compared with CT in 59.25% of cases, mainly due to unsuspected peritoneal or distant metastases; however, PET/CT changed treatment management in only 25.9% of patients. We also compared abdominal PET/CT and abdominal CT in the assessment of resectability, with diagnostic laparoscopy as the gold standard. Abdominal PET/CT correlated better than abdominal CT with diagnostic laparoscopy findings in terms of resectability (K=0.684 for PET/CT compared with K=0.419 for CT alone). Abdominal PET/CT findings were concordant with the surgical stage in 85.18% of patients, whereas abdominal CT was concordant in 70.4%. Discrepancies between PET/CT and laparoscopic findings were explained by extra-abdominal disease detected by PET/CT, or by miliary peritoneal metastases detected by laparoscopy and seen less clearly by PET/CT. Finally, in a group of patients with primary unresectable, locally advanced ovarian cancer (FIGO IIIC or IV), we compared PET/CT with the Gynecologic Cancer InterGroup method for evaluating the response to neoadjuvant chemotherapy. The change in the SUVmax of the primary tumor (∆SUVprimary) and the overall change in SUVmax in the study (∆SUVGlobal) predicted platinum sensitivity, but not interval resectability or histopathological response. Decreases in SUVmax of at least 69.78% (∆SUVGlobal, accuracy 76.62%) or at least 61.87% (∆SUVprimary, accuracy 73.07%) identified platinum responders and non-responders better than the Gynecologic Cancer InterGroup method (accuracy 50%).
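For concreteness, the response measure reduces to simple arithmetic; the short sketch below (the function name and example values are illustrative, the cut-off is the one reported in the abstract) computes the percentage decrease in SUVmax and applies the primary-tumour threshold.

```python
# Worked example of the dSUV response measure used in the study above.
def delta_suv(baseline_suvmax, post_suvmax):
    """Percentage decrease in SUVmax (positive values = metabolic response)."""
    return 100.0 * (baseline_suvmax - post_suvmax) / baseline_suvmax

# e.g. a primary tumour whose SUVmax falls from 12.4 to 3.1 after chemotherapy
drop = delta_suv(12.4, 3.1)            # = 75.0
responder = drop >= 61.87              # dSUVprimary cut-off from the study
print(f"dSUVprimary = {drop:.1f}% -> platinum responder: {responder}")
```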
47

Williamitis, Joseph M. "Using fMRI BOLD Imaging to Motion-Correct Associated, Simultaneously Imaged PET Data." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1620585748146734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Riba, Fiérrez Pau. "Distilling Structure from Imagery: Graph-based Models for the Interpretation of Document Images." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/670774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
From its early stages, the Pattern Recognition and Computer Vision community has considered the importance of leveraging structural information when understanding images. Usually, graphs have been selected as the adequate framework to represent this kind of information, due to their flexibility and representational power, able to codify both the components (objects or entities) and their pairwise relationships. Even though graphs have been successfully applied to a huge variety of tasks as a result of their symbolic and relational nature, they have always suffered from some limitations compared to statistical approaches. Indeed, some trivial mathematical operations do not have an equivalent in the graph domain. For instance, at the core of many pattern recognition applications there is the need to compare two objects; this operation, which is trivial when considering feature vectors, is not properly defined for graphs. Throughout this dissertation, the main application domain is Document Image Analysis and Recognition, a subfield of Computer Vision aiming at understanding images of documents. In this context, the structure, and in particular graph representations, provides a complementary dimension to the raw image content. In computer vision, the first challenge we face is how to build a meaningful graph representation able to encode the relevant characteristics of a given image. This representation should find a trade-off between simplicity and the flexibility to represent the deformations appearing in each application domain. We applied our proposal to the word spotting application, where strokes are divided into graphemes, the smallest units of a handwritten alphabet. We have also investigated different approaches to speed up graph comparison so that word spotting, or more generally a retrieval application, can handle large collections of documents. On the one hand, a graph indexing framework combined with a voting scheme at node level is able to quickly prune unlikely results (a toy sketch of this scheme follows the abstract). On the other hand, making use of hierarchical graph representations, we are able to perform a coarse-to-fine matching scheme that carries out most of the comparisons on a reduced graph representation. Besides, the hierarchical graph representation proved to be more robust than the original graph, dealing with noise and deformations in an elegant fashion. We therefore propose to exploit this information in a hierarchical graph embedding which allows the use of classical statistical techniques. Recently, advances in geometric deep learning, which has emerged as a generalization of deep learning methods to non-Euclidean domains such as graphs and manifolds, have renewed attention to these representation schemes. Taking advantage of these new developments, but considering traditional methodologies as a guideline, we propose a graph metric learning framework able to obtain state-of-the-art results on different tasks. Finally, the contributions of this thesis have been validated in real industrial use cases. In particular, an industrial collaboration, motivated by the company's interest in automatic information extraction from invoices, resulted in the development of a table detection framework for anonymized administrative documents containing sensitive data. In this scenario, graph neural networks proved able to detect repetitive patterns which, after an aggregation process, constitute a table.
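The indexing-plus-voting idea can be illustrated with a few lines of hypothetical code (the descriptor and all names are assumptions, not the thesis implementation): graphs are indexed by cheap local node descriptors, and at query time each matching node votes for the graphs it appears in, so that only well-supported candidates reach the expensive matching stage.

```python
# Hedged sketch of graph indexing with node-level voting for retrieval pruning.
from collections import defaultdict

def node_key(graph, n):
    """Cheap local descriptor of a node: its label plus its degree."""
    return (graph["labels"][n], len(graph["adj"][n]))

def build_index(graphs):
    """Map each local descriptor to the set of graphs containing it."""
    index = defaultdict(set)
    for gid, g in enumerate(graphs):
        for n in g["adj"]:
            index[node_key(g, n)].add(gid)
    return index

def candidates(index, query_graph, min_votes=2):
    """Each query node votes for every graph sharing its descriptor; graphs
    below min_votes are pruned before any expensive graph matching."""
    votes = defaultdict(int)
    for n in query_graph["adj"]:
        for gid in index.get(node_key(query_graph, n), ()):
            votes[gid] += 1
    return sorted(gid for gid, v in votes.items() if v >= min_votes)

g0 = {"labels": {0: "a", 1: "b"}, "adj": {0: [1], 1: [0]}}
g1 = {"labels": {0: "a", 1: "c"}, "adj": {0: [1], 1: [0]}}
index = build_index([g0, g1])
print(candidates(index, g0))   # -> [0]; g1 gets only one vote and is pruned
```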
49

Jiao, Jieqing. "Spatio-temporal registration of dynamic PET data." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:b011e3a4-aac9-4398-b78f-234fe9b4ae5d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical imaging plays an essential role in current clinical research and practice. Among the wealth of available imaging modalities, Positron Emission Tomography (PET) reveals functional processes in vivo by providing information on the interaction between a biological target and its tracer at the molecular level. A time series of PET images obtained from a dynamic scan depicts the spatio-temporal distribution of the PET tracer, and analysing the dynamic PET data enables the quantification of the functional processes of interest for disease understanding and drug development. Given the duration of a dynamic PET scan, usually 1-2 hours, any subject motion inevitably corrupts the tissue-to-voxel mapping during PET imaging, resulting in an unreliable analysis of the data for clinical decision making. Image registration has been applied to perform motion correction on misaligned dynamic PET frames; however, current methods are based solely on spatial similarity, and by ignoring the temporal changes due to PET tracer kinetics they can lead to inaccurate registration. In this thesis, a spatio-temporal registration framework for dynamic PET data is developed to overcome such limits. The thesis makes three scientific contributions. First, the likelihood of dynamic PET data is formulated based on a generative model with both tracer kinetics and subject motion, providing a novel objective function. Second, the solution to the optimisation based on the generic plasma-input model is given, covering a variety of biological targets. Third, reference-input models are also incorporated to avoid blood sampling and thus extend the coverage of the proposed framework to further PET studies. In simulation-based validation, the proposed method achieves sub-voxel accuracy, and its impact on clinical studies is evaluated on dopamine receptor data from an occupancy study as well as breast cancer data from a reproducibility study. By successfully eliminating the motion artifacts, as shown by visual inspection, the proposed method reduces the variability in clinical PET data and improves the confidence of deriving outcome measures at a study level. The motion correction algorithms developed in this thesis do not require any additional computational resources for a PET research centre, and they facilitate cost reduction by eliminating the need to acquire extra PET scans in cases of motion corruption.
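A toy version of the generative idea can be sketched as follows (the 1-D setting, the kinetic model and all names are illustrative assumptions, not the thesis method): each frame is predicted by a kinetic-model-scaled template, and the per-frame motion is chosen to best explain the frame given the model, rather than by spatial similarity alone.

```python
# Hedged sketch: joint use of tracer kinetics and motion to register PET frames.
import numpy as np

def kinetic_tac(t, k1=0.6, k2=0.15):
    """Toy plasma-input-style time-activity curve: (K1/k2)*(1 - exp(-k2*t))."""
    return (k1 / k2) * (1.0 - np.exp(-k2 * t))

def register_frames(frames, template, times, max_shift=5):
    """For each frame, pick the integer shift that best matches the template
    scaled by the model-predicted activity at that time point."""
    cand = range(-max_shift, max_shift + 1)
    shifts = []
    for frame, t in zip(frames, times):
        pred = kinetic_tac(t) * template        # expected motion-free frame
        errs = [np.sum((np.roll(frame, -s) - pred) ** 2) for s in cand]
        shifts.append(cand[int(np.argmin(errs))])
    return shifts

rng = np.random.default_rng(0)
profile = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)  # a 1-D "tissue"
times = np.arange(1.0, 9.0)
true_shifts = rng.integers(-3, 4, size=times.size)
frames = [np.roll(kinetic_tac(t) * profile, s)
          for t, s in zip(times, true_shifts)]
print(register_frames(frames, profile, times))  # recovers true_shifts
```

Because the predicted frame already accounts for the rising activity, the changing intensity across frames is not mistaken for motion, which is precisely the failure mode of purely spatial registration that the thesis addresses.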
50

Åkesson, Lars. "Partial Volume Correction in PET/CT." Thesis, Stockholm University, Medical Radiation Physics (together with KI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, a two-dimensional, pixel-wise deconvolution method for partial volume correction (PVC) in combined Positron Emission Tomography and Computed Tomography (PET/CT) imaging has been developed. The method is based on Van Cittert's deconvolution algorithm and includes a noise reduction step based on adaptive smoothing and median filters. Furthermore, a technique to take into account the position-dependent PET point spread function (PSF) and to reduce ringing artifacts is also described. The quantitative and qualitative performance of the proposed PVC algorithm was evaluated in phantom experiments with varying object size, background and noise level. PVC results in increased activity recovery as well as image contrast enhancement; however, the quantitative performance of the algorithm is impaired by the presence of background activity and image noise. When the correction was applied to clinical PET images, standardized uptake values increased by up to 98% for small lung tumors. These results suggest that the PVC described in this work significantly improves activity recovery without producing excessive ringing artifacts or noise amplification. The main limitations of the algorithm are its restriction to two dimensions and the lack of regularization constraints based on anatomical information from the co-registered CT images.
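The core iteration is classical and compact; the following is a hedged sketch (a spatially invariant Gaussian PSF stands in for the scanner's position-dependent one, and the non-negativity clip is one common stabilisation choice, not necessarily the thesis variant):

```python
# Hedged sketch of the Van Cittert iteration at the heart of PVC:
# f_{k+1} = f_k + alpha * (g - PSF * f_k), starting from f_0 = g.
import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert(measured, psf_sigma, alpha=1.0, n_iter=10):
    """Iteratively deconvolve a 2-D PET slice against a Gaussian PSF of
    width psf_sigma (pixels)."""
    estimate = measured.copy()
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, psf_sigma)
        estimate = estimate + alpha * (measured - reblurred)  # correction step
        estimate = np.clip(estimate, 0.0, None)   # activity cannot be negative
    return estimate

# toy example: a small hot "lesion" blurred by the PSF loses peak activity
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 10.0
measured = gaussian_filter(truth, 2.0)
recovered = van_cittert(measured, 2.0, n_iter=20)
print(measured.max(), recovered.max())   # recovery moves back toward 10.0
```

The abstract's reported behaviour follows directly from this scheme: each iteration restores high-frequency content (raising the recovered SUV in small objects), but it also amplifies noise and ringing, which is why the method pairs the iteration with adaptive smoothing and median filtering.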

To the bibliography