Dissertations / Theses on the topic 'Images PET'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Images PET.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Cruz, Cavalcanti Yanna. "Factor analysis of dynamic PET images." Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0078/document.
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become an ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a previous estimation of global time-activity curves (TACs) (herein called \textit{factors}) representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models herein proposed introduce an additional degree of freedom to the factors related to specific binding. To this end, a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, called the $\beta$-divergence, that is able to generalize conventional loss functions such as the least-square distance, Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics
Batty, Stephen. "Content based retrieval of PET neurological images." Thesis, Middlesex University, 2004. http://eprints.mdx.ac.uk/9770/.
Pavarin, Alice. "Comparison of textural features in PET images: a phantom study." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Yu, Chin-Lung. "Methods for automated analysis of small-animal PET images." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1580851181&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
RAPISARDA, EUGENIO. "Improvements in quality and quantification of 3D PET images." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/28157.
Jonsson, Sofia. "Evaluation of Methods for Obtaining an Image Derived Input Function from Dynamic PET-images." Thesis, Umeå universitet, Institutionen för fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124426.
Sims, John Andrew. "Directional analysis of cardiac left ventricular motion from PET images." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-05092017-093020/.
A quantificação do movimento cardíaco do ventrículo esquerdo (VE) a partir de imagens médicas fornece um método não invasivo para o diagnóstico de doenças cardiovasculares (DCV). O estudo aqui proposto continua na mesma linha de pesquisa do nosso grupo sobre quantificação do movimento do VE por meio de técnicas de fluxo óptico (FO), aplicando estes métodos para quantificar o movimento do VE em sequências de imagens associadas às substâncias de cloreto de rubídio-82Rb (82Rb) e fluorodeoxiglucose-18F (FDG) PET. Com a extração dos campos vetoriais surgiram os seguintes desafios: (i) o campo vetorial de movimento (motion vector field, MVF) deve ser feito da forma mais precisa possível para maximizar a sensibilidade e especificidade; (ii) o MVF é extenso e composto de vetores 3D no espaço 3D, dificultando a análise visual de informações por observadores humanos para o diagnóstico médico. Foram desenvolvidas abordagens para melhorar a precisão da quantificação de movimento, considerando que o volume de interesse seja a região do MVF correspondente ao miocárdio do VE, em que valores de movimento não nulos existem fora deste volume devido aos artefatos do método de detecção de movimento ou de estruturas vizinhas, como o ventrículo direito. As melhorias na precisão foram obtidas segmentando o VE e ajustando os valores de MVF para zero fora do VE. O miocárdio VE foi segmentado automaticamente em fatias de eixo curto usando a Transformada de Hough na detecção de círculos para fornecer uma inicialização ao algoritmo de curvas de nível, um tipo de modelo deformável. A segmentação automática do VE atingiu 93,43% de medida de similaridade Dice, quando foi testado em 395 fatias de eixo menor de FDG, comparado com a segmentação manual. Estratégias para melhorar o desempenho do algoritmo OF nas bordas de movimento foram investigadas usando spatially varying averaging filters, aplicados em seqüências de imagens sintéticas. Os resultados mostraram melhorias na precisão de quantificação de movimento utilizando estes métodos. O Índice de Energia Cinética (KEf), um indicador de motilidade cardíaca, foi utilizado para avaliar 63 sujeitos com função cardíaca normal e alterada / baixa de uma base de dados de imagens PET de 82Rb. Foram realizados testes de sensibilidade e especificidade para avaliar o potencial de KEf para classificar a função cardíaca, utilizando a fração de ejeção do VE como padrão ouro. Foi construída uma curva ROC, que proporcionou uma área sob a curva de 0,906. A análise do movimento do VE pode ser simplificada pela visualização de componentes de campo de movimento direcional, ou seja, radial, rotacional (ou circunferencial) e linear, obtidos por decomposição automatizada. A decomposição discreta de Helmholtz Hodge (DHHD) foi utilizada para gerar estes componentes de forma automatizada, com uma validação utilizando campos de movimento cardíaco sintéticos a partir do conjunto Extended Cardiac Torso Phantom. Finalmente, o método DHHD foi aplicado a campos de FO, criado a partir de imagens FDG, permitindo uma análise de componentes direcionais de um indivíduo com função cardíaca normal e um paciente com baixa função e utilizando um marca-passo. A quantificação do campo de movimento a partir de imagens PET possibilita o desenvolvimento de novos indicadores para diagnosticar DCVs. A capacidade destes indicadores de motilidade depende na precisão da quantificação de movimento que, por sua vez, pode ser determinado por características das imagens de entrada como ruído. 
A análise de movimento fornece um promissor e sem precedente método para o diagnóstico de DCVs.
Farinha, Ricardo Jorge Pires Correia. "Segmentation of striatal brain structures from high resolution pet images." Master's thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/2036.
We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter), from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (mid-sagittal plane) and, finally, extracting the right and left striata from both hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named “voxel affinity matrix”) and the graph clustering. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method on the relationship between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods, a spectral (multiway normalized cuts) and a non-spectral (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size. On the other hand, the weighted kernel k-means classifies iteratively, with the aid of the image features, a given data set into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters for the weighted kernel k-means for this type of images, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared to be merged in the same cluster. The putamen was divided in anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
Bieth, Marie. "Kinetic analysis and inter-subject registration of brain PET images." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119738.
L'imagerie à émission de positrons est de plus en plus utilisée pour comprendre le fonctionnement du cerveau. Ce mémoire aborde deux sujets liés àces images: le calcul du potentiel de liaison et l'alignement de deux images. Nous étudions tout d'abord l'influence de certains choix d'implémentation sur les estimations de potentiel de liaison. Ces travaux effectués sur des données simulées nous permettent de donner des points de repère concernant les choix à faire pour calculer le potentiel de liaison, ce qui constitue un pas important vers un calcul du potentiel de liaison entièrement automatisé etindépendant d'images à résonance magnétique. Nous introduisons ensuite une nouvelle méthode pour l'alignement de deux images de tomographie à émission de positrons. Cette méthode est adaptée de l'algorithme des log-démons difféomorphiques 3D. Nous montrons que notre méthode donne de meilleurs résultats que des méthodes existantes. Nous présentons aussi un modèle de haute résolution pour l'imagerie à émissionde positrons utilisant la [11C]raclopride. Ce modèle est construit à partir de 35sujets scannés sur le tomographe de recherche à haute résolution (High Resolution Research Tomograph). Comme il s'agit du tomographe de plus haute résolution disponible à ce jour, à notre connaissance, notre modèle est l'image de raclopride de plus haute résolution jamais produite.
Wang, Jiali. "Motion Correction Algorithm of Lung Tumors for Respiratory Gated PET Images." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/96.
Wang, Jiabin. "Variational Bayes inference based segmentation algorithms for brain PET-CT images." Thesis, The University of Sydney, 2012. https://hdl.handle.net/2123/29251.
Potesil, Vaclav. "Building computational atlases from databases of whole-body clinical PET/CT images." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558523.
Pacheco, Edward Flórez. "Quantificação da dinâmica de estruturas em imagens de medicina nuclear na modalidade PET." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-08052012-114807/.
The usefulness of Nuclear medicine nowadays as a modality to obtain medical images is very important, and it has turned into one of the main procedures utilized in Health Care Centers. Its great advantage is to analyze the metabolic behavior of the patient, by allowing early diagnosis. This project is based on medical images obtained by the PET modality (Positron Emission Tomography), which has won wide acceptance. Thus, we have developed an integral framework for processing Nuclear Medicine three-dimensional images of the PET modality, which is composed of consecutive steps that start with the generation of standard images (gold standard) by using simulated images or phantoms of the Left Ventricular Heart that were generated in this project, such as the ones obtained from the NCAT-4D software. Then Poisson quantum noise is introduced into the whole volume to simulate the characteristic noises in PET images and an analysis is performed in order to certify that the utilized noise is the Poisson noise effectively. Subsequently, the pre-processing is executed by using specific filters, such as the median filter, the weighted Gaussian filter, and the filter that joins the concepts of Anscombe Transformation and the Wiener filter. Then the segmentation, which is considered the most important and central part of the whole process, is implemented. The segmentation process is based on the Fuzzy Connectedness theory and for that purpose four different approaches were implemented: Generic algorithm, LIFO algorithm, kTetaFOEMS algorithm, and Dynamic Weight algorithm. Since the first three algorithms used specific weights that were selected by the user, an extra analysis was performed to determine the best segmentation constants that would reflect an accurate segmentation. Finally, at the end of the processing structure, an assessment procedure was used as a measurement tool to quantify some parameters that determined the level of efficiency and precision of our process and project. We have verified that the implemented algorithms (filters and segmentation algorithms) are fairly robust and achieve optimal results, assist to obtain, in the case of the Left Ventricular simulated, TP and FP rates in the order of 98.49 ± 0.27% and 2.19 ± 0.19%, respectively. With the set of procedures and choices made along of the processing structure, the project was concluded with the analysis of a volumes group from a real PET exam, obtaining the quantification of the volumes.
Desseroit, Marie-Charlotte. "Caractérisation et exploitation de l'hétérogénéité intra-tumorale des images multimodales TDM et TEP." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0129/document.
Positron emission tomography (PET) / Computed tomography (CT) multi-modality imaging is the most commonly used imaging technique to diagnose and monitor patients in oncology. PET/CT images provide a global tissue density description (CT images) and a characterization of tumor metabolic activity (PET images). Further analysis of those images acquired in clinical routine supplied additional data as regards patient survival or treatment response. All those new data allow to describe the tumor phenotype and are generally grouped under the generic name of Radiomics. Nevertheless, the number of shape descriptors and texture features characterising tumors have significantly increased in recent years and those parameters can be sensitive to exctraction method or whether to imaging modality. During this thesis, parameters variability, computed on PET and CT images, was assessed thanks to a test-retest cohort : for each patient, two groups of PET/CT images, acquired under the same conditions but generated with an interval of few minutes, were available. Parameters classified as reliable after this analysis were exploited for survival analysis of patients in the context of non-small cell lug cancer (NSCLC).The construction of a prognostic model with those metrics permitted first to study the complementarity of PET and CT texture features. However, this nomogram has been generated by simply adding risk factors and not with a robust multi-parametric analysis method. In the second part, the same data were exploited to build a prognostic model using support vector machine (SVM) algorithm. The models thus generated were then tested on a prospective cohort currently being recruited to obtain preliminary results as regards the robustness of those nomograms
Pacheco, Edward Florez. "Análise da dinâmica e quantificação metabólica de imagens de medicina nuclear na modalidade PET/CT." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-24062016-141858/.
The presence of Nuclear Medicine as a medical imaging modality is one of the main procedures utilized nowadays in medical centers, and the great advantage of that procedure is its capacity to analyze the metabolic behavior of the patient, resulting in early diagnoses. However, the quantification in Nuclear Medicine is known to be complicated by many factors, such as degradations due to attenuation, scattering, reconstruction algorithms and assumed models. In this context, the goal of this project is to improve the accuracy and the precision of quantification in PET/CT images by means of realistic and well-controlled processes. For this purpose, we proposed to develop a framework, which consists in a set of consecutively interlinked steps that is initiated with the simulation of 3D anthropomorphic phantoms. These phantoms were used to generate realistic PET/CT projections by applying the GATE platform (with Monte Carlo simulation). Then a 3D image reconstruction was executed, followed by a filtering process (using the Anscombe/Wiener filter to reduce Poisson noise characteristic of this type of images) and, a segmentation process (based on the Fuzzy Connectedness theory). After defining the region of interest (ROI), input activity and output response curves are required for the compartment analysis in order to obtain the Metabolic Quantification of the selected organ or structure. Finally, in the same manner real images provided from the Heart Institute (InCor) of Hospital das Clínicas, Faculty of Medicine, University of São Paulo (HC-FMUSP) were analysed. Therefore, it is concluded that the three-dimensional filtering step using the Ascombe/Wiener filter was preponderant and had a high impact on the metabolic quantification process and on other important stages of the whole project.
Olsson, Johan. "Automated Method for Generation of Input Function in PET Studies using MVW-PC Images." Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-101163.
Modeling is an approach for extracting quantitative values from PET. The signal from a reference region or from blood samples is used as reference. Since blood sampling is risky, this report presents an automated method based on MVW-PCA for using blood data from the images.
The study was performed on clinical PET data from several human brains using the tracer PIB. Two veins were found in a MVW-PC and an average of the TACs from the relevant locations was formed. Finally, a correcting function was calculated.
The curves generated from the image data were very similar to the curves generated from blood samples, with the largest errors in the beginning of the scan.
The used method shows potential for generating very good results if worked onmore. One of the strengths of the approach is that it is not limited to a specific tracer or time protocol, since the MVW-PC will be chosen depending on the weights for the first 60 seconds.
Razifar, Pasha. "Novel Approaches for Application of Principal Component Analysis on Dynamic PET Images for Improvement of Image Quality and Clinical Diagnosis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6053.
Van, Tol Markus Lane. "A graph-based method for segmentation of tumors and lymph nodes in volumetric PET images." Thesis, University of Iowa, 2014. https://ir.uiowa.edu/etd/2290.
Wang, Hesheng. "Multimodality Images Analysis for Photodynamic Therapy of Prostate Cancer in Mouse Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1251311096.
Cupparo, Ilaria. "Region growing and fuzzy C-means algorithm segmentation for PET images of head-neck tumours." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18020/.
Aoki, Suely Midori. "Uma proposta para avaliação do desempenho de câmaras PET/SPECT." Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-16072013-160821/.
Positron emission tomography, PET, is a Nuclear Medicine technique that allows the study of human body\'s function and metabolism in many clinical problems, with the help of pharmaceuticals labeled with positron emitters. The most frequent applications occur in oncology, neurology and cardiology, through qualitative and quantitative analysis of these images. Currently, PET is performed in two manners: by using dedicated systems, consisted of rings of thousands of detectors operating in coincidence; or with the use of PET /SPECT cameras, formed by two scintillation detectors in coincidence, which are also used in SPECT studies (single photon emission tomography). The development of PET /SPECT systems made possible the studies with fluor-deoxiglucose, [18F]-FDG, a pharmaceutical labeled with 18F (positron emitter with 109 minutes physical half-life), for a large number of clinics and hospitals, mainly due to their economical accessibility when compared to the dedicated PET studies. In this present work, a method was developed for characterizing and evaluating a PET /SPECT system with two scintillation detectors and device with two point sources of 137Cs, designed to obtain the transmission images for the photon attenuation correction. lt is based on adaptations of the conventional tests of SPECT cameras, described in IAEA TecDoc - 602 - 1991 (\"international Atomic Energy Agency \" - IAEA), and those for dedicated PET systems, published in NEMA NU 2 - 1994 (\"National Electrical Manufacturers Association \" - NEMA). The results were organized in a set of testing protocols and tested in the ADAC Laboratories/Philips camera, the VertexlM - Plus EPIClM/MCDlM - AC, installed in the Radioisotopes Service of lnCor - HCFMUSP (Instituto do Coração - Hospital das clínicas da Faculdade de Medicina da Universidade de São Paulo). This camera was the first one installed in Brazil and is being used, predominantly, for oncological studies and miocardial viability. The radiopharmaceutical used was [18F]-FDG, supplied regularly by IPEN/CNEN-SP (Instituto de Pesquisas Energéticas e Nucleares I Comissão Nacional de Energia Nuclear - São Paulo), and the tomographic reconstruction was performed with the system software, using the standard parameters of the clinical protocols. Point sources suspended in air were used in the measurements of spatial resolution and linear sources immersed in water for scattering fraction and sensitivity measurements. In the evaluation of sensitivity, uniformity, true events, random events and dead time of the electronic system, a phantom was constructed specifically for the present work, from the instructions of NEMA NU 2 - 1994 for dedicated PET systems. The accuracy of the attenuation correction was verified from the images of the phantom with three inserts of different densities: water, air and Teflon. The resultant protocols can serve as a guideline for Programs of Quality Control and Assurance, as well as for the evaluation of the performance of PET /SPECT systems with two scintillation detectors in coincidence. lf implemented by clinical centers that use this type of equipment, it will enhance the quality and confidence of the resulting images, as well as their quantification.
Kumar, Ashnil. "A graph-based approach for the retrieval of multi-modality medical images." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9343.
Xu, Lina [Verfasser]. "Analyzing Tumor Lesions in PET/CT Images Using Deep Learning Methods and Physiological Models / Lina Xu." München : Verlag Dr. Hut, 2019. http://d-nb.info/1181514266/34.
Florea, Ioana. "Pet parametric imaging of acetylcholine esterase activity without arterial blood sampling in normal subjects and patients with neurovegetative disease." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425120.
Rajkumar, Ravichandran [Verfasser], Irene Akademischer Betreuer] Neuner, and N. Jon [Akademischer Betreuer] [Shah. "Simultaneous trimodal MR/PET/EEG imaging : a study of the attenuation effect of EEG caps on PET images and a comparison of EEG microstates with resting state fMRI and FDG-PET measures / Ravichandran Rajkumar ; Irene Neuner, Nadim Joni Shah." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1225401666/34.
Rajkumar, Ravichandran Verfasser], Irene [Akademischer Betreuer] Neuner, and N. Jon [Akademischer Betreuer] [Shah. "Simultaneous trimodal MR/PET/EEG imaging : a study of the attenuation effect of EEG caps on PET images and a comparison of EEG microstates with resting state fMRI and FDG-PET measures / Ravichandran Rajkumar ; Irene Neuner, Nadim Joni Shah." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1225401666/34.
Tixier, Florent. "Caractérisation de l'hétérogénéité tumorale sur des images issues de la tomographie par émission de positons (TEP)." Phd thesis, Université de Bretagne occidentale - Brest, 2013. http://tel.archives-ouvertes.fr/tel-00991783.
Andersson, Jonathan. "Methods for automatic analysis of glucose uptake in adipose tissue using quantitative PET/MRI data." Thesis, Uppsala universitet, Enheten för radiologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233200.
Mi, Hongmei. "PDE modeling and feature selection : prediction of tumor evolution and patient outcome in therapeutic follow-up with FDG-PET images." Rouen, 2015. http://www.theses.fr/2015ROUES005.
Adaptive radiotherapy has the potential to improve patient’s outcome from a re-optimized treatment plan early or during the course of treatment by taking individual specificities into account. Predictive studies in patient’s therapeutic follow-up could be of interest in how to adapt treatment to each individual patient. In this thesis, we conduct two predictive studies using patient’s positron emission tomography (PET) imaging. The first study aims to predict tumor evolution during radiotherapy. We propose a patient-specific tumor growth model derived from the advection-reaction equation composed of three terms representing three biological processes respectively, where the tumor growth model parameters are estimated based on patient’s preceding sequential PET images. The second part of the thesis focuses on the case where frequent imaging of the tumor is not available. We therefore conduct another study whose objective is to select predictive factors, among PET-based and clinical characteristics, for patient’s outcome after treatment. Our second contribution is thus a wrapper feature selection method which searches forward in a hierarchical feature subset space, and evaluates feature subsets by their prediction performance using support vector machine (SVM) as the classifier. For the two predictive studies, promising results are obtained on real-world cancer-patient datasets
Millardet, Maël. "Amélioration de la quantification des images TEP à l'yttrium 90." Thesis, Ecole centrale de Nantes, 2022. https://tel.archives-ouvertes.fr/tel-03871632.
Yttrium-90 PET imaging is becoming increasingly popular. However, the probability that decay of a yttrium-90 nucleus will lead to the emission of a positron is only 3.2 × 10-5, and the reconstructed images are therefore characterised by a high level of noise, as well as a positive bias in low activity regions. To correct these problems, classical methods use penalised algorithms or allow negative values in the image. However, a study comparing and combining these different methods in the specific context of yttrium-90 was still missing at the beginning of this thesis. This thesis, therefore, aims to fill this gap. Unfortunately, the methods allowing negative values cannot be used directly in a dosimetric study. Therefore, this thesis starts by proposing a new method of post-processing the images, aiming to remove the negative values while keeping the average values as locally as possible. A complete multi-objective analysis of these different methods is then proposed. This thesis ends by laying the foundations of what could become an algorithm providing a set of adequate reconstruction hyper parameters from sinograms alone
Call, Daniel M. (Daniel Marcus) 1973. "A spectral analysis method to quantify the relative contribution of different length scales to heterogeneity in PET images of pulmonary function." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88846.
Hami, Abdoul-Azize Rihab. "Simulation des processus radiobiologiques basés sur l'imagerie pour l'évaluation de schémas thérapeutiques individualisés en radiothérapie." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0002.
Radiotherapy is one of the principal cancer treatments. Despite its intensive use in clinical practice, itseffectiveness depends on several factors. Several studies showed that the tumor response to radiotherapy differ from one patient to another. The response of tumor is influenced by several factors like hypoxia and multiple interactions between the tumor microenvironment and healthy cells. Five major biologic concepts called “5 Rs” resume these interactions. These concepts include reoxygenation, DNA damage-repair, cell cycle redistribution, cellular radiosensitivity and cellular repopulation.The optimal treatment strategy must consider these “5 Rs". In this study, we proposed as a first an approach to oxygenation modeling that can be considered as an optimization process in the absence of data concerning oxygen. We used a multi-scale model to predict the effects of radiotherapy on tumor growth based on information extracted from positron-emission tomography (PET) images. Then, we included to our model the ‘’5 Rs’’ of radiotherapy, to predict the effects of radiation on tumor growth. Finally, we presented a study of the effect of different types of fractionations on tumor response to radiotherapy
Tomasi, Giampaolo. "Bayesian and population approaches for pixel-wise quantification of positron emission Tomography images: ridge regression and Global-Two-Stage." Doctoral thesis, Università degli studi di Padova, 2007. http://hdl.handle.net/11577/3425164.
Jaouen, Vincent. "Traitement des images multicomposantes par EDP : application à l'imagerie TEP dynamique." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR3303/document.
This thesis presents several methodological contributions to the processing of vector-valued images, with dynamic positron emission tomography imaging (dPET) as its target application. dPET imaging is a functional imaging modality that produces highly degraded images composed of subsequent temporal acquisitions. Vector-valued images often present some level of redundancy or complementarity of information along the channels, allowing the enhancement of processing results. Our first contribution exploits such properties for performing robust segmentation of target volumes with deformable models.We propose a new external force field to guide deformable models toward the vector edges of regions of interest. Our second contribution deals with the restoration of such images to further facilitate their analysis. We propose a new partial differential equation-based approach that enhances the signal to noise ratio of degraded images while sharpening their edges. Applied to dPET imaging, we show to what extent our methodological contributions can help to solve an open problem in neuroscience : noninvasive quantification of neuroinflammation
Gaillard, Maxence. "Les images du cerveau : epistémologie de l'usage de l'imagerie cérébrale en sciences cognitives." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1023.
At a general level, this dissertation in philosophy and history of cognitive science is devoted to the underestimated problem of scientific instruments. It is focused on some functional brain imaging techniques introduced in the field of cognitive studies during the 1980’s and 1990’s, the impact of such new technologies being sometimes compared to an instrumental revolution, in a way similar to the impact of the invention of the telescope on post-Galilean astronomy. The first part consists in a philosophical and historical analysis of the notion of scientific instrument. In this regard, some issues are raises and some hypotheses are formulated. The second part presents an interpretation of the historical emergence of Positron Emission Tomography and functional Magnetic Resonance Imaging. Dealing with details of the invention and circulation of those techniques, it shows in particular the entanglement of the validation procedures of instruments with the various scientific and societal mechanisms driving to their development and use. Taking its roots in the general analysis of the first part and the historical interpretation of the second part, the third part looks into the impact of the new functional brain imaging technologies on the evolution of cognitive science and the diffusion of its results in other domains. Concerning both cognitive science and larger aspects, it is argued that brain imaging is less a factor of resolution of specific questions than a factor of shifting in the problematics and the theoretical and societal significance of cognitive science
Zbib, Hiba. "Segmentation d'images TEP dynamiques par classification spectrale automatique et déterministe." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR3317/document.
Quantification of dynamic PET images is a powerful tool for the in vivo study of the functionality of tissues. However, this quantification requires the definition of regions of interest for extracting the time activity curves. These regions are usually identified manually by an expert operator, which reinforces their subjectivity. As a result, there is a growing interest in the development of clustering methods that aim to separate the dynamic PET sequence into functional regions based on the temporal profiles of voxels. In this thesis, a spectral clustering method of the temporal profiles of voxels that has the advantage of handling nonlinear clusters is developed. The method is extended to make it more suited for clinical applications. First, a global search procedure is used to locate in a deterministic way the optimal cluster centroids from the projected data. Second an unsupervised clustering criterion is proposed and optimised by the simulated annealing to automatically estimate the scale parameter and the weighting factors involved in the method. The proposed automatic and deterministic spectral clustering method is validated on simulated and real images and compared to two other segmentation methods from the literature. It improves the ROI definition, and appears as a promising pre-processing tool before ROI-based quantification and input function estimation tasks
Bieth, Marie Verfasser], Bjoern Holger [Akademischer Betreuer] [Gutachter] [Menze, and Markus [Gutachter] Schwaiger. "Localising Anatomical Structures and Quantifying Tumour Burden in PET/CT Images using Machine Learning / Marie Bieth ; Gutachter: Björn Menze, Markus Schwaiger ; Betreuer: Björn Menze." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1147968209/34.
Roman, Jimenez Geoffrey. "Analyse des images de tomographie par émission de positons pour la prédiction de récidive du cancer du col de l'utérus." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S037/document.
This thesis deals with the issue of predicting the recurrence within the context of cervical cancer radiotherapy. The objective was to analyze positron emission tomography (PET) with 18F-fluorodeoxyglucose (18F-FDG) to extract quantitative parameters that could show statistical correlation with tumor recurrence. Six study were performed to address 18F-FDG PET imaging issues such as the presence of bladder uptake artifacts, tumor segmentation impact, as well as the analysis of tumor evolution along the treatment. Statistical analyses were performed among parameters reflecting intensity, shape and texture of the tumor metabolism before, and during treatment. Results show that the pre-treatment metabolic tumor volume and the per-treatment total lesion glycolysis are the most promising parameters for cervical cancer recurrence prediction. In addition, combinations of these parameters with shape descriptors and texture features, using machine-learning methods or regression models, are able to increase the prediction capability
Zheng, Yiran. "CT-PET Image Fusion and PET Image Segmentation for Radiation Therapy." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1283542509.
Garali, Imène. "Aide au diagnostic de la maladie d’Alzheimer par des techniques de sélection d’attributs pertinents dans des images cérébrales fonctionnelles obtenues par tomographie par émission de positons au 18FDG." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4364/document.
Our research focuses on presenting a novel computer-aided diagnosis technique for brain Positrons Emission Tomography (PET) images. It processes and analyzes quantitatively these images, in order to better characterize and extract meaningful information for medical diagnosis. Our contribution is to present a new method of classifying brain 18 FDG PET images. Brain images are first segmented into 116 Regions Of Interest (ROI) using an atlas. After computing some statistical features (mean, standarddeviation, skewness, kurtosis and entropy) on these regions’ histogram, we defined a Separation Power Factor (SPF) associated to each region. This factor quantifies the ability of each region to separate neurodegenerative diseases like Alzheimer disease from Healthy Control (HC) brain images. A novel region-based approach is developed to classify brain 18FDG-PET images. The motivation of this work is to identify the best regional features for separating HC from AD patients, in order to reduce the number of features required to achieve an acceptable classification result while reducing computational time required for the classification task
Dey, Sounak. "Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671082.
El diluvio de contenido visual en Internet, desde contenido generado por el usuario hasta colecciones de imágenes comerciales, motiva nuevos métodos intuitivos para buscar contenido de imágenes digitales: ¿cómo podemos encontrar ciertas imágenes en una base de datos de millones? La recuperación de imágenes basada en bocetos (SBIR) es un tema de investigación emergente en el que se puede usar un dibujo a mano libre para consultar visualmente imágenes fotográficas. SBIR está alineado con las tendencias emergentes para el consumo de contenido visual en dispositivos móviles con pantalla táctil, para los cuales las interacciones gestuales como el boceto son una alternativa natural a la entrada de texto. Esta tesis presenta varias contribuciones a la literatura de SBIR. En primer lugar, proponemos un marco de aprendizaje multimodal que mapea tanto los bocetos como el texto en un espacio de incrustación conjunto invariante al estilo representativo, al tiempo que conserva la semántica. La incrustación resultante permite la comparación directa y la búsqueda entre bocetos / texto e imágenes y se basa en una red neuronal convolucional de múltiples ramas (CNN) entrenada utilizando esquemas de entrenamiento únicos. La incrustación profundamente aprendida muestra un rendimiento de recuperación de última generación en varios puntos de referencia SBIR. En segundo lugar, proponemos un enfoque para la recuperación de imágenes multimodales en imágenes con etiquetas múltiples. Una arquitectura de red profunda multimodal está formulada para modelar conjuntamente bocetos y texto como modalidades de consulta de entrada en un espacio de incrustación común, que luego se alinea aún más con el espacio de características de la imagen. Nuestra arquitectura también se basa en una detección de objetos sobresalientes a través de un modelo de atención visual supervisado basado en LSTM aprendido de las características convolucionales. Tanto la alineación entre las consultas y la imagen como la supervisión de la atención en las imágenes se obtienen generalizando el algoritmo húngaro utilizando diferentes funciones de pérdida. Esto permite codificar las características basadas en objetos y su alineación con la consulta independientemente de la disponibilidad de la concurrencia de diferentes objetos en el conjunto de entrenamiento. Validamos el rendimiento de nuestro enfoque en conjuntos de datos estándar de objeto único / múltiple, mostrando el rendimiento más avanzado en cada conjunto de datos SBIR. En tercer lugar, investigamos el problema de la recuperación de imágenes basadas en bocetos de disparo cero (ZS-SBIR), donde los bocetos humanos se utilizan como consultas para llevar a cabo la recuperación de fotos de categorías invisibles. Avanzamos de manera importante en las técnicas anteriores al proponer un nuevo escenario ZS-SBIR que representa un firme paso adelante en su aplicación práctica. El nuevo entorno reconoce de manera única dos desafíos importantes pero a menudo descuidados de la práctica ZS-SBIR, (i) la gran brecha de dominio entre el boceto aficionado y la foto, y (ii) la necesidad de avanzar hacia la recuperación a gran escala. Primero contribuimos a la comunidad con un nuevo conjunto de datos ZS-SBIR, QuickDraw -Extended, que consta de bocetos de $ 330,000 $ y fotos de $ 204,000 $ que abarcan 110 categorías. 
Los bocetos humanos aficionados altamente abstractos se obtienen a propósito para maximizar la brecha de dominio, en lugar de los incluidos en los conjuntos de datos existentes que a menudo pueden ser semi-fotorrealistas. Luego formulamos un marco ZS-SBIR para modelar conjuntamente bocetos y fotos en un espacio de incrustación común.
The deluge of visual content on the Internet – from user-generated content to commercial image collections - motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned to emerging trends for visual content consumption on mobile touch-screen based devices, for which gestural interactions such as sketch are a natural alternative to textual input. This thesis presents several contributions to the literature of SBIR. First, we propose a cross-modal learning framework that maps both sketches and text into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches/text and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique training schemes. The deeply learned embedding is shown to yield state-of-art retrieval performance on several SBIR benchmarks. Second, we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sket-ches and text as input query modalities into a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on a salient object detection through a supervised LSTM-based visual attention model lear-ned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm using different loss functions. This permits encoding the object-based features and its alignment with the query irrespective of the availability of the co-occurrence of different objects in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the art performance in every SBIR dataset. Third, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We importantly advance prior arts by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR, (i) the large domain gap between amateur sketch and photo, and (ii) the necessity for moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, that consists of $330,000$ sketches and $204,000$ photos spanning across 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of ones included in existing datasets that can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos into a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance significantly outperforms that of state-of-the-art on existing datasets that can already be achieved using a reduced version of our model. 
We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset.
Gu, Wei Q. "Automated tracer-independent MRI/PET image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29596.pdf.
Giovagnoli, Debora. "Image reconstruction for three-gamma PET imaging." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0219.
In this thesis we present three-gamma imaging, where the acquisition system relies on a beta+ and gamma emitter. The rationale of 3-gamma imaging is that the third gamma detection information may help to provide better localization of the annihilation point, thus enabling higher image quality and fewer dose delivered to the patient. We present the 3-gamma system, theXEMIS2, developed at Subatech, Nantes, that is a LiquidXenon detector suitable for 3-gamma imaging due to its stopping power, its scintillation characteristics and its continuous geometry. The principle of 3-gamma image reconstruction is based on the intersection of a LOR, obtained from the coincidence photons, with a Compton cone, determined by the third gamma. The idea is to find the LOR\cone intersection and use it to locate the most probable annihilation position on the line,as for the time difference in TOF-PET. We present a complete GATE simulation study of two phantoms (similar-NEMA and Digimouse), to assess the improvements of 3-gamma image reconstruction over conventional PET and we study the positron range correction, which is important for our beta+gamma emitter, Sc44
Lee, Ki Sung. "Pragmatic image reconstruction for high resolution PET scanners /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5967.
Muñoz, Pujol Xavier 1976. "Image segmentation integrating colour, texture and boundary information." Doctoral thesis, Universitat de Girona, 2003. http://hdl.handle.net/10803/7719.
Se propone una estrategia basada en el uso complementario de la información de región y de frontera durante el proceso de segmentación, integración que permite paliar algunos de los problemas básicos de la segmentación tradicional. La información de frontera permite inicialmente identificar el número de regiones presentes en la imagen y colocar en el interior de cada una de ellas una semilla, con el objetivo de modelar estadísticamente las características de las regiones y definir de esta forma la información de región. Esta información, conjuntamente con la información de frontera, es utilizada en la definición de una función de energía que expresa las propiedades requeridas a la segmentación deseada: uniformidad en el interior de las regiones y contraste con las regiones vecinas en los límites. Un conjunto de regiones activas inician entonces su crecimiento, compitiendo por los píxeles de la imagen, con el objetivo de optimizar la función de energía o, en otras palabras, encontrar la segmentación que mejor se adecua a los requerimientos exprsados en dicha función. Finalmente, todo esta proceso ha sido considerado en una estructura piramidal, lo que nos permite refinar progresivamente el resultado de la segmentación y mejorar su coste computacional.
La estrategia ha sido extendida al problema de segmentación de texturas, lo que implica algunas consideraciones básicas como el modelaje de las regiones a partir de un conjunto de características de textura y la extracción de la información de frontera cuando la textura es presente en la imagen.
Finalmente, se ha llevado a cabo la extensión a la segmentación de imágenes teniendo en cuenta las propiedades de color y textura. En este sentido, el uso conjunto de técnicas no-paramétricas de estimación de la función de densidad para la descripción del color, y de características textuales basadas en la matriz de co-ocurrencia, ha sido propuesto para modelar adecuadamente y de forma completa las regiones de la imagen.
La propuesta ha sido evaluada de forma objetiva y comparada con distintas técnicas de integración utilizando imágenes sintéticas. Además, se han incluido experimentos con imágenes reales con resultados muy positivos.
Image segmentation is an important research area in computer vision and many segmentation methods have been proposed. However, elemental segmentation techniques based on boundary or region approaches often fail to produce accurate segmentation results. Hence, in the last few years, there has been a tendency towards the integration of both techniques in order to improve the results by taking into account the complementary nature of such information. This thesis proposes a solution to the image segmentation integrating region and boundary information. Moreover, the method is extended to texture and colour texture segmentation.
An exhaustive analysis of image segmentation techniques which integrate region and boundary information is carried out. Main strategies to perform the integration are identified and a classification of these approaches is proposed. Thus, the most relevant proposals are assorted and grouped in their corresponding approach. Moreover, characteristics of these strategies as well as the general lack of attention that is given to the texture is noted. The discussion of these aspects has been the origin of all the work evolved in this thesis, giving rise to two basic conclusions: first, the possibility of fusing several approaches to the integration of both information sources, and second, the necessity of a specific treatment for textured images.
Next, an unsupervised segmentation strategy which integrates region and boundary information and incorporates three different approaches identified in the previous review is proposed. Specifically, the proposed image segmentation method combines the guidance of seed placement, the control of decision criterion and the boundary refinement approaches. The method is composed by two basic stages: initialisation and segmentation. Thus, in the first stage, the main contours of the image are used to identify the different regions present in the image and to adequately place a seed for each one in order to statistically model the region. Then, the segmentation stage is performed based on the active region model which allows us to take region and boundary information into account in order to segment the whole image. Specifically, regions start to shrink and expand guided by the optimisation of an energy function that ensures homogeneity properties inside regions and the presence of real edges at boundaries. Furthermore, with the aim of imitating the Human Vision System when a person is slowly approaching to a distant object, a pyramidal structure is considered. Hence, the method has been designed on a pyramidal representation which allows us to refine the region boundaries from a coarse to a fine resolution, and ensuring noise robustness as well as computation efficiency.
The proposed segmentation strategy is then adapted to solve the problem of texture and colour texture segmentation. First, the proposed strategy is extended to texture segmentation which involves some considerations as the region modelling and the extraction of texture boundary information. Next, a method to integrate colour and textural properties is proposed, which is based on the use of texture descriptors and the estimation of colour behaviour by using non-parametric techniques of density estimation. Hence, the proposed strategy of segmentation is considered for the segmentation taking both colour and textural properties into account.
Finally, the proposal of image segmentation strategy is objectively evaluated and then compared with some other relevant algorithms corresponding to the different strategies of region and boundary integration. Moreover, an evaluation of the segmentation results obtained on colour texture segmentation is performed. Furthermore, results on a wide set of real images are shown and discussed.
Caresia, Aróztegui Ana Paula. "PET/TC en el cáncer de ovario: Estadificación inicial, valoración de la resecabilidad primaria y la respuesta a la quimioterapia neoadyuvante." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/403771.
Ovarian cancer has a poor prognosis, because the majority of patients have advanced disease at the time of diagnosis. Although 18F-FDG PET/CT is widely used in oncology, the guidelines for ovarian cancer (National Comprehensive Cancer Network, European Society of Medical Oncology and Sociedad Española de Ginecología y Obstetricia) clearly indicating 18F-FDG PET/CT in recurrence. The Society of Gynecologic Oncology and the National Comprehensive Cancer Network recently included PET/CT as an option for the initial staging of locally advanced disease. We compared PET/CT and CT in the initial staging of ovarian cancer. PET/CT detected distant metastases in more patients than CT (40.74% vs. 11.11%). The most frequent locations of distant metastases detected by PET/CT were the supradiaphragmatic lymph nodes and pleura. PET/CT changed FIGO staging compared with CT in 59.25% of cases, especially due to unsuspected peritoneal metastasis or distant metastases. However, PET/CT changed treatment management in only 25.9% patients. We also compared abdominal PET/CT and abdominal CT in the assessment of resectability, with respect to laparoscopic valuation (gold standard). Abdominal PET/CT correlated better than abdominal CT with diagnostic laparoscopy findings in terms of resectability (K=0.684 for PET/CT compared to K=0.419 for CT alone). Abdominal PET/CT findings were concordant with surgical stage in 85.18% of patients and abdominal CT was concordant in 70.4% of patients. Discrepancies between PET/CT and laparoscopic findings were explained by extra-abdominal disease detected by PET/CT or miliary peritoneal metastases detected by laparoscopy and seen less clearly by PET/CT. In a group of patients with locally advanced ovarian cancer (FIGO IIIC or IV). We compared PET/CT and Gynecologic Cancer InterGroup method in evaluation response of neoadjuvant chemotherapy in patients with primary unresectable locally advanced ovarian cancer. The change in the SUVmax of the primary tumor (∆SUVprimary) and the overall change in SUVmax in the study (∆SUVGlobal) predicted platinum sensitivity but not interval resectability or histopathological response. Decreases in SUVmax (∆SUVGlobal ≥69.78% (accuracy=76.62%) or ∆SUVprimary ≥61.87% (accuracy=73.07%)) identified platinum responders or non-responders better than the Gynecologic Cancer InterGroup method (accuracy=50%).
Williamitis, Joseph M. "Using fMRI BOLD Imaging to Motion-Correct Associated, Simultaneously Imaged PET Data." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1620585748146734.
Riba, Fiérrez Pau. "Distilling Structure from Imagery: Graph-based Models for the Interpretation of Document Images." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/670774.
From its early stages, the Pattern Recognition and Computer Vision community has recognised the importance of leveraging structural information when interpreting images. Graphs are usually selected as the adequate framework to represent this kind of information because of their flexibility and representational power, able to encode both the components (objects or entities) and their pairwise relationships. Even though graphs have been successfully applied to a huge variety of tasks as a result of their symbolic and relational nature, they have always suffered from some limitations compared with statistical approaches, since some trivial mathematical operations have no equivalent in the graph domain. For instance, at the core of many pattern recognition applications lies the need to compare two objects; this operation, which is trivial for feature vectors, is not properly defined for graphs. Throughout this dissertation, the main application domain is Document Image Analysis and Recognition, a subfield of Computer Vision that aims at understanding images of documents. In this context, structure, and in particular graph representations, provides a dimension complementary to the raw image contents. The first challenge is to build a meaningful graph representation able to encode the relevant characteristics of a given image; such a representation should find a trade-off between simplicity and the flexibility needed to represent the deformations appearing in each application domain. We applied our proposal to word spotting, where strokes are divided into graphemes, the smallest units of a handwritten alphabet. We also investigated different approaches to speed up graph comparison so that word spotting, or more generally graph retrieval, can handle large collections of documents. On the one hand, a graph indexing framework combined with a node-level voting scheme quickly prunes unlikely results. On the other hand, hierarchical graph representations allow a coarse-to-fine matching scheme that performs most of the comparisons on a reduced version of the original graph. Moreover, the hierarchical representation proved to be more robust than the original graph, dealing with noise and deformations in an elegant fashion; we therefore exploit this information in a hierarchical graph embedding that allows the use of classical statistical techniques. Recently, geometric deep learning, which has emerged as a generalization of deep learning to non-Euclidean domains such as graphs and manifolds, has renewed attention to these representation schemes. Taking advantage of these developments, but keeping traditional methodologies as a guideline, we propose a graph metric learning framework that obtains state-of-the-art results on different tasks. Finally, the contributions of this thesis have been validated in a real industrial use case: a collaboration aimed at the automatic extraction of information from the company's anonymized invoices resulted in a table detection framework for administrative documents containing sensitive data. In this scenario, graph neural networks proved able to detect repetitive patterns which, after an aggregation process, constitute a table.
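The coarse-to-fine comparison scheme mentioned above can be sketched in a few lines; the coarsening by node contraction over a maximal matching and the use of networkx's graph edit distance are illustrative stand-ins for the thesis's hierarchical representation and matching, not its actual implementation.

    import networkx as nx

    def coarsen(g: nx.Graph) -> nx.Graph:
        """Crude coarsening: contract each pair of nodes joined by a maximal matching."""
        coarse = g.copy()
        for u, v in nx.maximal_matching(g):
            coarse = nx.contracted_nodes(coarse, u, v, self_loops=False)
        return coarse

    def retrieve(query: nx.Graph, candidates, keep: int = 2):
        """Coarse pass prunes candidates cheaply; fine pass ranks the survivors."""
        coarse_q = coarsen(query)
        # Coarse pass: compare reduced graphs only
        scored = sorted(candidates,
                        key=lambda g: nx.graph_edit_distance(coarse_q, coarsen(g)))
        # Fine pass: exact comparison only on the most promising candidates
        survivors = scored[:keep]
        return sorted(survivors, key=lambda g: nx.graph_edit_distance(query, g))

    # Toy usage with small random graphs standing in for word graphs
    graphs = [nx.gnm_random_graph(5, 6, seed=i) for i in range(5)]
    best = retrieve(nx.gnm_random_graph(5, 6, seed=42), graphs, keep=2)

The point of the design is that most comparisons are paid at the cheap coarse level, while the expensive exact matching is reserved for a handful of candidates.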
Jiao, Jieqing. "Spatio-temporal registration of dynamic PET data." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:b011e3a4-aac9-4398-b78f-234fe9b4ae5d.
Åkesson, Lars. "Partial Volume Correction in PET/CT." Thesis, Stockholm University, Medical Radiation Physics (together with KI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8322.
In this thesis, a two-dimensional, pixel-wise deconvolution method for partial volume correction (PVC) in combined Positron Emission Tomography and Computed Tomography (PET/CT) imaging has been developed. The method is based on Van Cittert's deconvolution algorithm and includes a noise reduction step based on adaptive smoothing and median filters, as well as a technique to account for the position-dependent PET point spread function (PSF) and to reduce ringing artifacts. The quantitative and qualitative performance of the proposed PVC algorithm was evaluated in phantom experiments with varying object size, background and noise level. PVC results in increased activity recovery as well as enhanced image contrast; however, the quantitative performance of the algorithm is impaired by the presence of background activity and image noise. When the correction was applied to clinical PET images, the result was an increase in standardized uptake values of up to 98% for small tumors in the lung. These results suggest that the PVC described in this work significantly improves activity recovery without producing excessive ringing artifacts or noise amplification. The main limitations of the algorithm are its restriction to two dimensions and the lack of regularization constraints based on anatomical information from the co-registered CT images.
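To make the correction scheme concrete, the core Van Cittert iteration can be sketched as follows; a stationary Gaussian PSF, a fixed relaxation factor and per-iteration median filtering are simplifying assumptions, whereas the thesis models a position-dependent PSF and uses adaptive smoothing for noise control.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def van_cittert_pvc(observed, psf_sigma_px, iterations=10, relaxation=1.0):
        """Iteratively sharpen a PET slice: f_{k+1} = f_k + alpha * (g - PSF * f_k)."""
        estimate = observed.copy()
        for _ in range(iterations):
            reblurred = gaussian_filter(estimate, psf_sigma_px)   # re-apply the assumed PSF
            estimate = estimate + relaxation * (observed - reblurred)
            estimate = np.clip(estimate, 0.0, None)               # keep activity non-negative
            estimate = median_filter(estimate, size=3)            # crude noise/ringing control
        return estimate

    # Hypothetical usage: a small hot region blurred by a 2-pixel-sigma PSF
    truth = np.zeros((64, 64)); truth[28:36, 28:36] = 10.0
    observed = gaussian_filter(truth, 2.0)
    corrected = van_cittert_pvc(observed, psf_sigma_px=2.0, iterations=15)
    print(truth.max(), observed.max(), corrected.max())  # recovery of peak activity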