Journal articles on the topic 'Images PET'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Images PET.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Muraglia, Lorenzo, Francesco Mattana, Laura Lavinia Travaini, Gennaro Musi, Emilio Bertani, Giuseppe Renne, Eleonora Pisa, et al. "First Live-Experience Session with PET/CT Specimen Imager: A Pilot Analysis in Prostate Cancer and Neuroendocrine Tumor." Biomedicines 11, no. 2 (February 20, 2023): 645. http://dx.doi.org/10.3390/biomedicines11020645.

Abstract:
Objective: to evaluate the feasibility of the intra-operative application of a specimen PET/CT imager in a clinical setting. Materials and methods: this is a pilot analysis performed in three patients who received an intra-operative administration of 68Ga-PSMA-11 (n = 2) or 68Ga-DOTA-TOC (n = 1). Patients were administered the PET radiopharmaceuticals to perform radio-guided surgery with a beta-probe detector during radical prostatectomy for prostate cancer (PCa) and salvage lymphadenectomy for recurrent neuroendocrine tumor (NET) of the ileum, respectively. All procedures were performed within two ongoing clinical trials at our Institute (NCT05596851 and NCT05448157). Pathologic assessment with immunohistochemistry (PSMA staining and SSA immunoreactivity) was considered the standard of truth. Specimen images were compared with baseline PET/CT images and histopathological analysis. Results: Patients received 1 MBq/kg of 68Ga-PSMA-11 (PCa) or 1.2 MBq/kg of 68Ga-DOTA-TOC (NET) prior to surgery. Specimens were collected, positioned in the dedicated specimen container, and scanned to obtain high-resolution PET/CT images. In all cases, a perfect match was observed between the findings detected by the specimen imager and histopathology. Overall, the PET spatial resolution was markedly higher for the specimen images than for the baseline whole-body PET/CT images. Furthermore, the use of the PET/CT specimen imager did not significantly interfere with any procedures, and the overall length of the surgery was not affected by its use. Finally, the radiation exposure of the operating theater staff was lower than 40 µSv per procedure (range 26–40 μSv). Conclusions: the image acquisition of specimens obtained from patients who received intra-surgery injections of 68Ga-PSMA-11 and 68Ga-DOTA-TOC was feasible and reliable in a live-experience session and was easily adapted to daily surgical practice. The high sensitivity, together with the evaluation of intra-lesion tumor heterogeneity, was the most relevant result, since the data derived from specimen PET/CT imaging matched the histopathological analysis perfectly.
2

Gershon, Nahum D. "Visualizing 3D PET Images." IEEE Computer Graphics and Applications 11, no. 5 (September 1991): 11–13. http://dx.doi.org/10.1109/mcg.1991.10040.

3

Jiang, Changhui, Xu Zhang, Na Zhang, Qiyang Zhang, Chao Zhou, Jianmin Yuan, Qiang He, et al. "Synthesizing PET/MR (T1-weighted) images from non-attenuation-corrected PET images." Physics in Medicine & Biology 66, no. 13 (June 24, 2021): 135006. http://dx.doi.org/10.1088/1361-6560/ac08b2.

4

Pietrzyk, U., C. Knoess, S. Vollmar, K. Wienhard, L. Kracht, A. Bockisch, S. Maderwald, H. Kühl, M. Fitzek, and T. Beyer. "Multi-modality imaging of uveal melanomas using combined PET/CT, high-resolution PET and MR imaging." Nuklearmedizin 47, no. 02 (2008): 73–79. http://dx.doi.org/10.3413/nukmed-0125.

Abstract:
We investigated the efficacy of combined FDG-PET/CT imaging for the diagnosis of small-size uveal melanomas and the feasibility of combining separate, high-resolution (HR) FDG-PET with MRI for improved localization and detection. Patients, methods: 3 patients with small-size uveal melanomas (0.2–1.5 ml) were imaged on a combined whole-body PET/CT, an HR brain PET, and a 1.5 T MRI. Static FDG-PET/CT imaging of the head and torso was performed with CT contrast enhancement. HR PET imaging was performed in dynamic mode 0–180 min post-injection of FDG. MR imaging was performed using a high-resolution small loop coil placed over the eye in question, with T2 3D-TSE and T1 3D-SE sequences and 18 ml of Gd contrast. Patients had their eyes shaded during the scans. Lesion visibility on high-resolution FDG-PET images was graded for confidence: 1: none, 2: suggestive, 3: clear. Mean tumour activity was calculated for summed image frames that resulted in confidence grades 2 and 3. Whole-body FDG-PET/CT images were reviewed for lesions. PET-MRI and PET/CT-MRI images of the head were co-registered for potentially improved lesion delineation. Results: Whole-body FDG-PET/CT images of 3/3 patients were positive for uveal melanomas and negative for disseminated disease. HR FDG-PET was already positive in the early time frames. One patient exhibited rising tumour activity with increasing uptake time on FDG-PET. MRI images of the eye were co-registered successfully to FDG-PET/CT using a manual alignment approach. Conclusions: Small-size uveal melanomas can be detected with whole-body FDG-PET/CT. This feasibility study suggests exploring HR FDG-PET to provide additional diagnostic information on patients with uveal melanomas. First results support extended uptake times and high-sensitivity PET for improved tumour visibility. MRI/PET co-registration is feasible and provides correlated functional and anatomical information that may support alternative therapy regimens.
5

Suganuma, Yuta, Atsushi Teramoto, Kuniaki Saito, Hiroshi Fujita, Yuki Suzuki, Noriyuki Tomiyama, and Shoji Kido. "Hybrid Multiple-Organ Segmentation Method Using Multiple U-Nets in PET/CT Images." Applied Sciences 13, no. 19 (September 27, 2023): 10765. http://dx.doi.org/10.3390/app131910765.

Abstract:
PET/CT can acquire low-dose computed tomography (LDCT) images with morphological information and PET images with functional information. Because the whole body is targeted for imaging, PET/CT examinations are important in cancer diagnosis. However, the large number of images obtained by PET/CT places a heavy burden on radiologists during diagnosis. Thus, the development of computer-aided diagnosis (CAD) technologies that assist in diagnosis has been requested. Because FDG accumulation in PET images differs for each organ, recognizing organ regions is essential for developing lesion detection and analysis algorithms for PET/CT images. Therefore, we developed a method for automatically extracting organ regions from PET/CT images using U-Net or DenseUNet, which are deep-learning-based segmentation networks. The proposed method is a hybrid approach combining morphological and functional information obtained from LDCT and PET images. Moreover, pre-training using ImageNet and RadImageNet was performed and compared. The best extraction accuracy was obtained by pre-training with ImageNet, with Dice indices of 94.1, 93.9, 91.3, and 75.1% for the liver, kidney, spleen, and pancreas, respectively. This method obtained better extraction accuracy for low-quality PET/CT images than existing studies on PET/CT images and, thanks to the hybrid approach and pre-training, was comparable to existing studies on diagnostic contrast-enhanced CT images.
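The Dice indices reported above are overlap scores between a predicted organ mask and a reference mask. A minimal NumPy sketch of how such a score can be computed; the masks and shapes below are hypothetical, not the study's data:

    import numpy as np

    def dice_index(pred_mask, ref_mask):
        """Dice similarity coefficient between two binary masks."""
        pred = np.asarray(pred_mask, dtype=bool)
        ref = np.asarray(ref_mask, dtype=bool)
        denom = pred.sum() + ref.sum()
        return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

    # Hypothetical example: a predicted liver mask vs. a reference delineation
    pred = np.zeros((64, 64, 64), dtype=bool); pred[10:40, 10:40, 10:40] = True
    ref = np.zeros((64, 64, 64), dtype=bool); ref[12:42, 10:40, 10:40] = True
    print(f"Dice = {dice_index(pred, ref):.3f}")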
6

Seiffert, Alexander P., Adolfo Gómez-Grande, Alberto Villarejo-Galende, Marta González-Sánchez, Héctor Bueno, Enrique J. Gómez, and Patricia Sánchez-González. "High Correlation of Static First-Minute-Frame (FMF) PET Imaging after 18F-Labeled Amyloid Tracer Injection with [18F]FDG PET Imaging." Sensors 21, no. 15 (July 30, 2021): 5182. http://dx.doi.org/10.3390/s21155182.

Abstract:
Dynamic early-phase PET images acquired with radiotracers binding to fibrillar amyloid-beta (Aβ) have been shown to correlate with [18F]fluorodeoxyglucose (FDG) PET images and provide perfusion-like information. Here, the perfusion information of static PET scans acquired during the first minute after radiotracer injection (FMF, first-minute-frame) is compared to [18F]FDG PET images. FMFs of 60 patients acquired with [18F]florbetapir (FBP), [18F]flutemetamol (FMM), and [18F]florbetaben (FBB) are compared to [18F]FDG PET images. Regional standardized uptake value ratios (SUVR) are directly compared, and intrapatient Pearson's correlation coefficients are calculated to evaluate the correlation of FMFs with their corresponding [18F]FDG PET images. Additionally, regional interpatient correlations are calculated. The intensity profiles of mean SUVRs among the study cohort (r = 0.98, p < 0.001) and intrapatient analyses show strong correlations between FMFs and [18F]FDG PET images (r = 0.93 ± 0.05). Regional VOI-based analyses also result in high correlation coefficients. The FMF shows information similar to the cerebral metabolic patterns obtained by [18F]FDG PET imaging. Therefore, it could be an alternative to the dynamic imaging of early-phase amyloid PET and be used as an additional neurodegeneration biomarker in amyloid PET studies in routine clinical practice, while being acquired at the same time as the amyloid PET images.
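The regional SUVR comparison and intrapatient Pearson correlation described above can be sketched as follows; the label atlas, reference region, and synthetic images are assumptions for illustration only, not the study's data or pipeline:

    import numpy as np
    from scipy.stats import pearsonr

    def regional_suvr(pet, labels, region_ids, ref_id):
        """Mean uptake per VOI divided by the mean uptake of a reference VOI."""
        ref_mean = pet[labels == ref_id].mean()
        return np.array([pet[labels == r].mean() / ref_mean for r in region_ids])

    # Hypothetical data: an FMF image and an [18F]FDG image on the same label atlas
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=(32, 32, 32))      # 0 = assumed reference region
    fmf = rng.random((32, 32, 32)) + 1.0
    fdg = fmf * 0.9 + rng.normal(0, 0.05, fmf.shape)     # correlated by construction

    regions = range(1, 10)
    r, p = pearsonr(regional_suvr(fmf, labels, regions, 0),
                    regional_suvr(fdg, labels, regions, 0))
    print(f"intrapatient Pearson r = {r:.2f} (p = {p:.3g})")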
7

Lee, Giljae, Hwunjae Lee, and Gyehwan Jin. "Analysis of Fitting Degree of MRI and PET Images in Simultaneous MRPET Images by Machine Learning Neural Networks." ScholarGen Publishers 3, no. 1 (December 28, 2020): 43–61. http://dx.doi.org/10.31916/sjmi2020-01-05.

Abstract:
Simultaneous MR-PET imaging is a fusion of MRI using various parameters and PET images using various nuclides. In this paper, we performed analysis on the fitting degree between MRI and simultaneous MR-PET images and between PET and simultaneous MR-PET images. For the fitness analysis by neural network learning, feature parameters of experimental images were extracted by discrete wavelet transform (DWT), and the extracted parameters were used as input data to the neural network. In comparing the feature values extracted by DWT for each image, the horizontal and vertical low frequencies showed similar patterns, but the patterns were different in the horizontal and vertical high frequency and diagonal high frequency regions. In particular, the signal value was large in the T1 and T2 weighted images of MRI. Neural network learning results for fitting degree analysis were as follows. 1. T1-weighted MRI and simultaneous MR-PET image fitting degree: Regression (R) values were found to be Training 0.984, Validation 0.844, and Testing 0.886. 2. Dementia-PET image and Simultaneous MR-PET Image fitting degree: R values were found to be Training 0.970, Validation 0.803, and Testing 0.828. 3. T2-weighted MRI and concurrent MR-PET image fitting degree: R values were found to be Training 0.999, Validation 0.908, and Testing 0.766. 4. Brain tumor-PET image and Simultaneous MR-PET image fitting degree: R values were found to be Training 0.999, Validation 0.983, and Testing 0.876. An R value closer to 1 indicates more similarity. Therefore, each image fused in the simultaneous MR-PET images verified in this study was found to be similar. Ongoing study of images acquired with pulse sequences other than the weighted images in the MRI is needed. These studies may establish a useful protocol for the acquisition of simultaneous MR-PET images.
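The DWT feature extraction used above as neural-network input can be illustrated with PyWavelets; the wavelet choice, summary statistics, and synthetic slices below are assumptions, not the authors' exact parameters:

    import numpy as np
    import pywt  # PyWavelets

    def dwt_features(image, wavelet="haar"):
        """One-level 2D DWT; summary statistics of each subband form a feature vector."""
        cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
        feats = []
        for band in (cA, cH, cV, cD):   # low-pass, horizontal, vertical, diagonal detail
            feats.extend([band.mean(), band.std(), np.abs(band).sum()])
        return np.asarray(feats)

    # Hypothetical slices standing in for T1-weighted MRI and simultaneous MR-PET images
    rng = np.random.default_rng(1)
    t1_slice = rng.random((128, 128))
    mrpet_slice = rng.random((128, 128))
    x = np.stack([dwt_features(t1_slice), dwt_features(mrpet_slice)])
    print(x.shape)  # feature vectors that could feed a regression neural network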
8

Couto, Pedro, Telmo Bento, Humberto Bustince, and Pedro Melo-Pinto. "Positron Emission Tomography Image Segmentation Based on Atanassov’s Intuitionistic Fuzzy Sets." Applied Sciences 12, no. 10 (May 11, 2022): 4865. http://dx.doi.org/10.3390/app12104865.

Abstract:
In this paper, we present an approach to fully automate tumor delineation in positron emission tomography (PET) images. PET images play a major role in medicine for in vivo imaging in oncology (PET images are used to evaluate oncology patients, detecting emitted photons from a radiotracer localized in abnormal cells). PET image tumor delineation plays a vital role both in pre- and post-treatment stages. The low spatial resolution and high noise characteristics of PET images increase the challenge in PET image segmentation. Despite the difficulties and known limitations, several image segmentation approaches have been proposed. This paper introduces a new unsupervised approach to perform tumor delineation in PET images using Atanassov’s intuitionistic fuzzy sets (A-IFSs) and restricted dissimilarity functions. Moreover, the implementation of this methodology is presented and tested against other existing methodologies. The proposed algorithm increases the accuracy of tumor delineation in PET images, and the experimental results show that the proposed method outperformed all methods tested.
9

Li, Hui, Chao Gao, Yingying Sun, Aojie Li, Wang Lei, Yuming Yang, Ting Guo, et al. "Radiomics Analysis to Enhance Precise Identification of Epidermal Growth Factor Receptor Mutation Based on Positron Emission Tomography Images of Lung Cancer Patients." Journal of Biomedical Nanotechnology 17, no. 4 (April 1, 2021): 691–702. http://dx.doi.org/10.1166/jbn.2021.3056.

Abstract:
Precise identification of epidermal growth factor receptor (EGFR) mutations in lung cancer patients is of great clinical importance. In this study, 1575 radiomics features were extracted from PET images of 75 lung cancer patients based on contrast agents such as 18F-MPG and 18F-FDG. The Mann-Whitney U test was used for single-factor analysis, Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for feature screening, and radiomics classification models were then established using support vector machines and ten-fold cross-validation and used to identify EGFR mutation in primary and metastatic lung cancers. The accuracies based on 18F-MPG PET images were 90% for primary lung cancers and 89.66% for metastatic lung cancers; the accuracies based on 18F-FDG PET images were 76% for primary lung cancers and 82.75% for metastatic lung cancers. The areas under the curve (AUC) based on 18F-MPG PET images were 0.94877 for primary lung cancers and 0.91775 for metastatic lung cancers; the AUCs based on 18F-FDG PET images were 0.87374 for primary lung cancers and 0.82251 for metastatic lung cancers. In conclusion, both 18F-MPG PET images and 18F-FDG PET images combined with the established classification models can identify EGFR mutation, but 18F-MPG PET images do so more precisely than 18F-FDG PET images and have clinical translational prospects.
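The screening-and-classification pipeline described above (Mann-Whitney U filtering, LASSO selection, SVM with ten-fold cross-validation) can be sketched with scikit-learn; the synthetic feature matrix and all thresholds below are illustrative assumptions, not the study's data:

    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(75, 1575))          # radiomics features (synthetic stand-in)
    y = rng.integers(0, 2, size=75)          # EGFR mutation status (0/1, synthetic)

    # 1) Univariate screening: keep features that differ between the two groups
    pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
    X_screen = X[:, pvals < 0.05]

    # 2) LASSO-based selection of a sparse feature subset
    lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X_screen), y)
    X_sel = X_screen[:, lasso.coef_ != 0]

    # 3) SVM classifier evaluated with ten-fold cross-validation
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    if X_sel.shape[1] > 0:
        acc = cross_val_score(clf, X_sel, y, cv=10, scoring="accuracy")
        print(f"10-fold accuracy: {acc.mean():.2f}")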
10

Petryakova, A. V., L. A. Chipiga, M. S. Tlostanova, A. A. Ivanova, D. A. Vazhenina, A. A. Stanzhevsky, D. V. Ryzhkova, et al. "Method of Experts’ Quality Evaluation of the PET Images of the Patients." MEDICAL RADIOLOGY AND RADIATION SAFETY 68, no. 1 (February 2023): 78–85. http://dx.doi.org/10.33266/1024-6177-2023-68-1-78-85.

Abstract:
Purpose: To develop a method for experts' quality evaluation of PET images as an additional quality control method for accurate, comparable, and reproducible PET diagnostic results, and to conduct image quality evaluation in different PET departments using this method. Material and methods: 60 PET images (without CT) of patients who underwent whole-body PET/CT with 18F-FDG were collected from 12 PET/CT scanners in 9 PET departments. The experts' quality evaluation was conducted by questioning the experts. Each expert evaluated the image quality on a five-point scale and filled out a special form that included three image quality criteria: image clarity, artefacts, and general image quality. There were 28 experts from 8 different PET departments, with work experience in radiology ranging from 1 to 32 years. The results of the experts' quality evaluation of the PET images were examined for correlations with the parameters of the acquisition and reconstruction protocols and the examination methods. The results were also examined for dependence on subjective factors such as the experts' work experience and work conditions. The minimum required number of experts was defined. The results were analyzed using statistical methods. Results: The PET images obtained by 8 PET/CT scanners had a mean quality score above 4 points (good quality). The PET/CT scanners with the lowest quality scores had obsolete or unusual settings and reconstruction parameters. Correlations were established between the experts' quality evaluation of the PET images and the acquisition parameters (acquisition time per bed, the product of injected activity and acquisition time per bed) and examination methods (injected activity and uptake time). The results of the experts' quality evaluation of the PET images depended on the experts' work experience and work conditions. Conclusion: A method for experts' quality evaluation of patients' PET images, based on questioning experts working in PET, was developed and demonstrated in the current study. The results showed that this method has the potential to compare PET images obtained with different acquisition and reconstruction protocols, and it can be applied during the optimization of examination methods and for the identification of obsolete and unusual PET/CT settings. Experts' evaluation of PET images should include the opinion of at least six experts with different work experience in PET from several PET departments.
11

ODERO, D. O., J. R. HARTLEY, and D. S. SHIMM. "POSITRON EMISSION TOMOGRAPHY AND RADIATION THERAPY COMPUTERIZED TREATMENT PLANNING SYSTEMS." Journal of Mechanics in Medicine and Biology 08, no. 02 (June 2008): 235–50. http://dx.doi.org/10.1142/s0219519408002619.

Abstract:
Imaging devices aid clinicians in disease staging and treatment planning procedures for cancer patients. In this technical report, the basic physics of positron emission tomography (PET) are briefly discussed. The integration of PET images into radiation therapy computerized treatment planning systems (TPS) for PET and computed tomography (CT) image fusion via portable storage media and Web-based PET image manipulation software is described. The PET images were obtained from a mobile dual PET/CT scanner (Discovery ST, GE Medical Systems) located two miles away from our radiation therapy center, and were fused with CT images from an independent stand-alone CT scanner (Siemens Somatom Emotion). Quality assurance for treatment planning PET images as well as radiation safety issues related to PET are also briefly discussed.
12

Friston, Karl J., Christopher D. Frith, Peter F. Liddle, and Richard S. J. Frackowiak. "Plastic Transformation of PET Images." Journal of Computer Assisted Tomography 15, no. 4 (July 1991): 634–39. http://dx.doi.org/10.1097/00004728-199107000-00020.

13

Vizza, Patrizia, Pierangelo Veltri, and Giuseppe L. Cascini. "Statistical analysis of PET images." ACM SIGHIT Record 2, no. 1 (March 2012): 15. http://dx.doi.org/10.1145/2180796.2180807.

14

Tsotsos, John K. "Computation, PET images, and attention." Behavioral and Brain Sciences 18, no. 2 (June 1995): 372. http://dx.doi.org/10.1017/s0140525x00038978.

Abstract:
Posner & Raichle (1994) is a nice addition to the Scientific American Library and the average reader will both enjoy the book and learn a great deal. As an active researcher, however, I find the book disappointing in many respects. My two major disappointments are in the illusion of computation that is created throughout the volume and in the inadequate perspective of the presentation on visual attention.
15

Koyama, Masamichi, and Mitsuru Koizumi. "FDG-PET Images of Acrometastases." Clinical Nuclear Medicine 39, no. 3 (March 2014): 298–300. http://dx.doi.org/10.1097/rlu.0000000000000350.

16

Üstündağ, D. "Recovering Images from PET Camera." Acta Physica Polonica A 132, no. 3-II (September 2017): 963–66. http://dx.doi.org/10.12693/aphyspola.132.963.

17

Xiaolong Ouyang, W. H. Wong, V. E. Johnson, Xiaoping Hu, and Chin-Tu Chen. "Incorporation of correlated structural images in PET image reconstruction." IEEE Transactions on Medical Imaging 13, no. 4 (1994): 627–40. http://dx.doi.org/10.1109/42.363105.

18

Lai, Yung-Chi, Kuo-Chen Wu, Chao-Jen Chang, Yi-Jin Chen, Kuan-Pin Wang, Long-Bin Jeng, and Chia-Hung Kao. "Predicting Overall Survival with Deep Learning from 18F-FDG PET-CT Images in Patients with Hepatocellular Carcinoma before Liver Transplantation." Diagnostics 13, no. 5 (March 4, 2023): 981. http://dx.doi.org/10.3390/diagnostics13050981.

Abstract:
Positron emission tomography and computed tomography with 18F-fluorodeoxyglucose (18F-FDG PET-CT) have been used to predict outcomes after liver transplantation in patients with hepatocellular carcinoma (HCC). However, few approaches for prediction based on 18F-FDG PET-CT images that leverage automatic liver segmentation and deep learning have been proposed. This study evaluated the performance of deep learning from 18F-FDG PET-CT images to predict overall survival in HCC patients before liver transplantation (LT). We retrospectively included 304 patients with HCC who underwent 18F-FDG PET/CT before LT between January 2010 and December 2016. The hepatic areas of 273 of the patients were segmented by software, while the other 31 were delineated manually. We analyzed the predictive value of the deep learning model from both FDG PET/CT images and CT images alone. The developed prognostic model achieved an AUC of 0.807 using the combined FDG PET-CT images versus 0.743 using the CT images alone. The model based on FDG PET-CT images also achieved somewhat better sensitivity than the model based on CT images alone (0.571 vs. 0.432). Automatic liver segmentation from 18F-FDG PET-CT images is feasible and can be utilized to train deep-learning models. The proposed predictive tool can effectively determine prognosis (i.e., overall survival) and thereby select optimal candidates for LT among patients with HCC.
19

Wongsa, Paramest, Witaya Sungkarat, and Supattana Auethavekiat. "Developing a PET normal brain template using diffusion tensor imaging images: A proof of concept." Journal of Associated Medical Sciences 56, no. 1 (January 3, 2023): 159–65. http://dx.doi.org/10.12982/jams.2023.019.

Abstract:
Background: Registration of positron emission tomography (PET) brain images to standard normal PET brain templates can be performed to diagnose dementia using vendor software, in which the brain template is based on T1-weighted (T1W) images. However, imperfect overlap between the PET images and the PET-T1W-based brain template can be observed. Objectives: This pilot study aimed to develop a new PET brain template and compare the accuracy of image registration between a conventional PET-T1W-based brain template and our proposed PET-DTI-based brain template. Materials and methods: The new PET-DTI-based brain template was developed from twenty-four normal volunteers (aged 42-79 years) who underwent 11C-Pittsburgh compound B PET scans and both T1W and diffusion tensor imaging (DTI) magnetic resonance brain scans. Correction of eddy-current distortions and removal of related artifacts in the DTI images were performed using the open-source FMRIB Software Library (FSL) to generate whole-brain probabilistic tractography maps (MRI-Probtract). The MRI-Probtract map was then deformably registered and normalized to the PET images, which were used for brain boundary guidance. The accuracy of image registration was assessed by applying the newly developed PET-DTI brain template to PET images of four mild cognitive impairment patients who underwent the same brain-scanning protocols. The accuracy of image registration using the conventional PET-T1 and PET-DTI templates was evaluated qualitatively by three nuclear medicine physicians. The Wilcoxon signed-rank test was used to compare the registration scores of the two methods. Additionally, the Dice similarity coefficient was obtained to quantitatively evaluate the accuracy of image registration. Results: The registration scores of the PET images registered with the PET-DTI template were significantly higher than with the PET-T1 template (p < 0.05). This result is consistent with the Dice similarity coefficient, which was higher for the PET-DTI template. Conclusion: The results of this pilot study showed that the new PET-DTI brain template provides higher registration quality, suggesting the feasibility of using the PET-DTI template in clinical PET studies of the brain.
20

Wongsa, Paramest, Witaya Sungkarat, and Supattana Auethavekiat. "Developing a PET normal brain template using diffusion tensor imaging images: A proof of concept." Journal of Associated Medical Sciences 56, no. 1 (January 4, 2023): 159–66. http://dx.doi.org/10.12982/jams.2023.031.

Abstract:
Background: Registration of positron emission tomography (PET) brain images to standard normal PET brain templates can be performed to diagnose dementia using vendor software, in which the brain template is based on T1-weighted (T1W) images. However, imperfect overlap between the PET images and the PET-T1W-based brain template can be observed. Objectives: This pilot study aimed to develop a new PET brain template and compare the accuracy of image registration between a conventional PET-T1W-based brain template and our proposed PET-DTI-based brain template. Materials and methods: The new PET-DTI-based brain template was developed from twenty-four normal volunteers (aged 42-79 years) who underwent 11C-Pittsburgh compound B PET scans and both T1W and diffusion tensor imaging (DTI) magnetic resonance brain scans. Correction of eddy-current distortions and removal of related artifacts in the DTI images were performed using the open-source FMRIB Software Library (FSL) to generate whole-brain probabilistic tractography maps (MRI-Probtract). The MRI-Probtract map was then deformably registered and normalized to the PET images, which were used for brain boundary guidance. The accuracy of image registration was assessed by applying the newly developed PET-DTI brain template to PET images of four mild cognitive impairment patients who underwent the same brain-scanning protocols. The accuracy of image registration using the conventional PET-T1 and PET-DTI templates was evaluated qualitatively by three nuclear medicine physicians. The Wilcoxon signed-rank test was used to compare the registration scores of the two methods. Additionally, the Dice similarity coefficient was obtained to quantitatively evaluate the accuracy of image registration. Results: The registration scores of the PET images registered with the PET-DTI template were significantly higher than with the PET-T1 template (p < 0.05). This result is consistent with the Dice similarity coefficient, which was higher for the PET-DTI template. Conclusion: The results of this pilot study showed that the new PET-DTI brain template provides higher registration quality, suggesting the feasibility of using the PET-DTI template in clinical PET studies of the brain.
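The paired comparison of reader scores reported in this and the preceding record can be sketched with SciPy's Wilcoxon signed-rank test; the scores below are hypothetical, not the study's data:

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical per-case registration quality scores (1-5) for the two templates
    scores_t1 = np.array([3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4, 3])
    scores_dti = np.array([4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4])

    stat, p = wilcoxon(scores_dti, scores_t1)   # paired, non-parametric comparison
    print(f"Wilcoxon signed-rank statistic = {stat}, p = {p:.3f}")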
21

Haneishi, Hideaki, Masayuki Kanai, Yoshitaka Tamai, Atsushi Sakohira, and Kazuyoshi Suga. "Registration and Summation of Respiratory-Gated or Breath-Hold PET Images Based on Deformation Estimation of Lung from CT Image." Computational and Mathematical Methods in Medicine 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/9713280.

Abstract:
Lung motion due to respiration causes image degradation in medical imaging, especially in nuclear medicine, which requires long acquisition times. We have developed a method for image correction between respiratory-gated (RG) PET images in different respiration phases or breath-hold (BH) PET images in an inconsistent respiration phase. In the method, the RG or BH-PET images in different respiration phases are deformed under two criteria: similarity of the image intensity distribution and smoothness of the estimated motion vector field (MVF). However, these criteria alone may cause unnatural motion estimation of the lung. In this paper, assuming the use of a PET-CT scanner, we add another criterion: similarity to the motion direction estimated from inhalation and exhalation CT images. The proposed method was first applied to the numerical phantom XCAT with tumors and then applied to BH-PET image data from seven patients. The resultant tumor contrasts and the estimated motion vector fields were compared with those obtained by our previous method. Through these experiments we confirmed that the proposed method can provide improved and more stable image quality for both RG and BH-PET images.
22

Rossi, Farli, and Ashrani Aizzuddin Abd Rahni. "Joint Segmentation Methods of Tumor Delineation in PET – CT Images: A Review." International Journal of Engineering & Technology 7, no. 3.32 (August 26, 2018): 137. http://dx.doi.org/10.14419/ijet.v7i3.32.18414.

Abstract:
Segmentation is one of the crucial steps in medical diagnosis applications. An accurate image segmentation method plays an important role in the proper detection of disease, staging, diagnosis, radiotherapy treatment planning, and monitoring. With advances in image segmentation techniques, joint segmentation of PET-CT images has received increasing attention in both the clinic and image processing. PET-CT images have become a standard method for tumor delineation and cancer assessment. Due to the low spatial resolution of PET and the low contrast of CT images, automated segmentation of tumors in PET-CT images is a well-known challenging task. This paper describes and reviews four innovative methods used in the joint segmentation of functional and anatomical PET-CT images for tumor delineation. As background, state-of-the-art image segmentation methods are briefly reviewed and the fundamentals of PET and CT images are briefly explained. Further, the specific characteristics and limitations of the four joint segmentation methods are critically discussed.
23

Bagci, Ulas, Jayaram K. Udupa, Neil Mendhiratta, Brent Foster, Ziyue Xu, Jianhua Yao, Xinjian Chen, and Daniel J. Mollura. "Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images." Medical Image Analysis 17, no. 8 (December 2013): 929–45. http://dx.doi.org/10.1016/j.media.2013.05.004.

24

Ghezzo, Samuele, Ilaria Neri, Paola Mapelli, Annarita Savi, Ana Maria Samanes Gajate, Giorgio Brembilla, Carolina Bezzi, et al. "[68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET/MRI vs. Histopathological Images in Prostate Cancer: A New Workflow for Spatial Co-Registration." Bioengineering 10, no. 8 (August 11, 2023): 953. http://dx.doi.org/10.3390/bioengineering10080953.

Abstract:
This study proposed a new workflow for co-registering prostate PET images from a dual-tracer PET/MRI study with histopathological images of resected prostate specimens. The method aims to establish an accurate correspondence between PET/MRI findings and histology, facilitating a deeper understanding of PET tracer distribution and enabling advanced analyses like radiomics. To achieve this, images derived from three patients who underwent both [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET/MRI before radical prostatectomy were selected. After surgery, fiducial markers visible on both histology and MR images were inserted into the fresh resected specimens. An ex vivo MRI of the prostate served as an intermediate step for co-registration between the histological specimens and the in vivo MRI examinations. The co-registration workflow involved five steps, ensuring alignment between histopathological images and PET/MRI data. The target registration error (TRE) was calculated to assess the precision of the co-registration. Furthermore, the DICE score was computed between the dominant intraprostatic tumor lesions delineated by the pathologist and by the nuclear medicine physician. The TRE for the co-registration of histopathology and in vivo images was 1.59 mm, while the DICE scores for the sites of increased intraprostatic uptake on [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET images were 0.54 and 0.75, respectively. This work shows an accurate co-registration method for histopathological and in vivo PET/MRI prostate examinations that allows the quantitative assessment of dual-tracer PET/MRI diagnostic accuracy at a millimetric scale. This approach may unveil radiotracer uptake mechanisms and identify new PET/MRI biomarkers, thus establishing the basis for precision medicine and future analyses, such as radiomics.
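The target registration error (TRE) mentioned above reduces to the mean distance between corresponding fiducial markers after mapping; a minimal sketch with hypothetical coordinates, not the study's measurements:

    import numpy as np

    def target_registration_error(pts_moving, pts_fixed):
        """Mean Euclidean distance (mm) between corresponding landmarks
        after the moving points have been mapped into the fixed space."""
        return float(np.linalg.norm(np.asarray(pts_moving) - np.asarray(pts_fixed), axis=1).mean())

    # Hypothetical fiducial coordinates (mm): histology markers mapped to in vivo MRI space
    mapped_histology = np.array([[10.2, 31.0, 5.1], [22.8, 14.5, 7.9], [17.3, 25.6, 3.2]])
    in_vivo = np.array([[11.0, 30.1, 4.5], [23.5, 15.8, 8.6], [18.9, 24.9, 2.8]])
    print(f"TRE = {target_registration_error(mapped_histology, in_vivo):.2f} mm")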
25

Wang, Rui, Jifeng Zhang, Dongxue Wang, Funing Yang, and Ping Li. "Clinical value of 18F-fluorodeoxyglucose positron emission tomography/computed tomography combined with computed tomography angiography in large-vessel vasculitis." Radiology of Infectious Diseases 10, no. 4 (December 2023): 148–59. http://dx.doi.org/10.4103/rid.rid-d-23-00009.

Abstract:
OBJECTIVES: The objectives of this study were to investigate the clinical application of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG-PET/CT) combined with CT angiography (CTA) fusion images in the diagnosis and assessment of large-vessel vasculitis (LVV). MATERIALS AND METHODS: Forty-six patients with LVV who underwent both 18F-FDG-PET/CT and CTA were studied at the Second Hospital of Harbin Medical University from September 2019 to June 2022, and the patients' clinical disease activity was judged by the Physician Global Assessment. Clinical data, acute-phase reactants (APRs), and imaging data were collected; the APRs were obtained within 1 week of 18F-FDG-PET/CT. 18F-FDG-PET/CT was primarily used to evaluate LVV activity, while CTA was primarily used to observe morphological changes in the arteries, including arterial wall thickening, narrowing, and corresponding complications. The PET/CT and PET/CTA images were evaluated by two nuclear medicine physicians who were blinded to the patients' laboratory tests and clinical signs. The concordance of the two physicians on the LVV visual grading scale was assessed by calculating Cohen's kappa index (k). The paired t-test was used to analyze the differences between the PET/CTA and PET/CT images. RESULTS: The sensitivity and specificity of the semi-quantitative analysis for assessing LVV activity were 94.1% and 93.1%, respectively, with a cutoff of 1.15 for the mean SUVmax/SUVmean-liver ratio. In 19 patients, the delayed-phase images were clearer, with higher contrast between the arterial wall and the lumen. PET/CTA examinations also detected more lesion sites than PET/CT examinations in 28 patients (P < 0.001), especially in patients with long-term treatment, and the interpretation of PET/CTA images took less time than that of PET/CT images (P < 0.001), ultimately achieving a shorter, more comprehensive, and more accurate interpretation. CONCLUSION: Although 18F-FDG-PET/CT can assess the activity of LVV, it is poor at observing morphological changes in the arteries. The use of 18F-FDG-PET/CTA imaging in LVV can accurately assess disease activity while providing a comprehensive, accurate, and efficient determination of disease severity, allowing patients to receive comprehensive diagnostic information from a PET/CTA examination.
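The semi-quantitative index used above (mean SUVmax of the arterial segments divided by mean liver SUV) and its sensitivity/specificity at a cutoff can be sketched as follows; all values and the reference standard are hypothetical:

    import numpy as np

    def lvv_activity_index(arterial_suvmax, liver_suvmean):
        """Mean arterial SUVmax divided by mean liver SUV (semi-quantitative index)."""
        return float(np.mean(arterial_suvmax) / liver_suvmean)

    def sens_spec(scores, truth, cutoff):
        scores, truth = np.asarray(scores), np.asarray(truth, dtype=bool)
        pred = scores >= cutoff
        sens = (pred & truth).sum() / truth.sum()
        spec = (~pred & ~truth).sum() / (~truth).sum()
        return sens, spec

    # Hypothetical per-patient indices and clinically judged activity (1 = active LVV)
    indices = [1.32, 1.05, 1.40, 0.98, 1.22, 1.10, 1.51, 1.02]
    active = [1, 0, 1, 0, 1, 0, 1, 0]
    print(sens_spec(indices, active, cutoff=1.15))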
26

Wisenberg, G., J. D. Thiessen, W. Pavlovsky, J. Butler, B. Wilk, and F. S. Prato. "Same day comparison of PET/CT and PET/MR in patients with cardiac sarcoidosis." Journal of Nuclear Cardiology 27, no. 6 (January 2, 2019): 2118–29. http://dx.doi.org/10.1007/s12350-018-01578-8.

Abstract:
Abstract Background Inflammatory cardiac disorders, in particular, sarcoidosis, play an important role in left ventricular dysfunction, conduction abnormalities, and arrhythmias. In this study, we compared the imaging characteristics and diagnostic information obtained when patients were imaged sequentially with PET/CT and then with hybrid PET/MRI on the same day following a single 18F-FDG injection. Methods Ten patients with known or suspected sarcoidosis underwent imaging in sequence of (a) 99mTc-MIBI, (b) 18F-FDG with PET/CT, and (c) 18F-FDG with 3T PET/MRI. Images were compared quantitatively by determination of SUVmax and SUV on a voxel by voxel basis, and qualitatively by two experienced observers. Results When both platforms were compared quantitatively, similar data for the evaluation of enhanced 18F-FDG uptake were obtained. Qualitatively, there were (1) several instances of normal perfusion with delayed enhancement and/or focal 18F-FDG uptake, (2) comparable enhanced 18F-FDG uptake on PET/CT vs. PET/MRI, and (3) diversity in disease patterns with delayed enhancement only, increased 18F-FDG uptake only, or both. Conclusion In this limited patient study, PET/CT and PET/MR provided similar diagnostic data for 18F-FDG uptake, and the concurrent acquisition of MR images provided further insight into the disease process.
27

Lee, Min-Hee, Chang-Soo Yun, Kyuseok Kim, and Youngjin Lee. "Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model’s Classification Performance for Alzheimer’s Disease." Metabolites 12, no. 3 (March 7, 2022): 231. http://dx.doi.org/10.3390/metabo12030231.

Abstract:
Alzheimer’s disease (AD) is the most common progressive neurodegenerative disease. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) is widely used to predict AD using a deep learning model. However, the effects of noise and blurring on 18F-FDG PET images were not considered. The performance of a classification model trained using raw, deblurred (by the fast total variation deblurring method), or denoised (by the median modified Wiener filter) 18F-FDG PET images without or with cropping around the limbic system area using a 3D deep convolutional neural network was investigated. The classification model trained using denoised whole-brain 18F-FDG PET images achieved classification performance (0.75/0.65/0.79/0.39 for sensitivity/specificity/F1-score/Matthews correlation coefficient (MCC), respectively) higher than that with raw and deblurred 18F-FDG PET images. The classification model trained using cropped raw 18F-FDG PET images achieved higher performance (0.78/0.63/0.81/0.40 for sensitivity/specificity/F1-score/MCC) than the whole-brain 18F-FDG PET images (0.72/0.32/0.71/0.10 for sensitivity/specificity/F1-score/MCC, respectively). The 18F-FDG PET image deblurring and cropping (0.89/0.67/0.88/0.57 for sensitivity/specificity/F1-score/MCC) procedures were the most helpful for improving performance. For this model, the right middle frontal, middle temporal, insula, and hippocampus areas were the most predictive of AD using the class activation map. Our findings demonstrate that 18F-FDG PET image preprocessing and cropping improves the explainability and potential clinical applicability of deep learning models.
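The sensitivity/specificity/F1/MCC figures quoted above are standard binary classification metrics; a minimal scikit-learn sketch with synthetic labels, not the study's predictions:

    import numpy as np
    from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 100)                               # 1 = AD, 0 = control (synthetic)
    y_pred = np.where(rng.random(100) < 0.8, y_true, 1 - y_true)   # noisy model predictions

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
          f"F1={f1_score(y_true, y_pred):.2f} MCC={matthews_corrcoef(y_true, y_pred):.2f}")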
28

Hu, Zhanli, Yongchang Li, Sijuan Zou, Hengzhi Xue, Ziru Sang, Xin Liu, Yongfeng Yang, Xiaohua Zhu, Dong Liang, and Hairong Zheng. "Obtaining PET/CT images from non-attenuation corrected PET images in a single PET system using Wasserstein generative adversarial networks." Physics in Medicine & Biology 65, no. 21 (November 3, 2020): 215010. http://dx.doi.org/10.1088/1361-6560/aba5e9.

29

Pang, Wenbo, Siqi Li, Huiyan Jiang, and Yu-dong Yao. "MTR-PET: Multi-temporal resolution PET images for lymphoma segmentation." Biomedical Signal Processing and Control 87 (January 2024): 105529. http://dx.doi.org/10.1016/j.bspc.2023.105529.

30

Wang, Ning, Lingjie Wang, Yixing Yu, Guangzheng Li, Changhao Cao, Rui Xu, Bin Jiang, et al. "An Assessment of the Pathological Classification and Postoperative Outcome of Focal Cortical Dysplasia by Simultaneous Hybrid PET/MRI." Brain Sciences 13, no. 4 (April 4, 2023): 611. http://dx.doi.org/10.3390/brainsci13040611.

Abstract:
Objectives: The purpose of this research was to investigate whether MRI and simultaneous hybrid PET/MRI images were consistent in the histological classification of patients with focal cortical dysplasia, and to evaluate the postoperative outcomes using the MRI and simultaneous hybrid PET/MRI images of focal cortical dysplasia. Methods: A total of 69 cases were evaluated preoperatively for drug-resistant seizures, and surgical resection of the epileptogenic foci was then performed. The postoperative result was histopathologically confirmed as focal cortical dysplasia, and patients underwent PET and MRI imaging within one month of the seizure. In this study, head MRI was performed using a 3.0 T magnetic resonance scanner (Philips) to obtain 3D T1WI images. A Siemens Biograph 16 scanner was used for routine scanning of the head to obtain PET images. BrainLAB's iPlan software was used to fuse the 3D T1 images with the PET images to obtain PET/MRI images. Results: Focal cortical dysplasia was divided into three types according to the ILAE classification: three patients were classified as type I, twenty-five as type II, and forty-one as type III. Patients with an age of onset under 18 and an age at operation over 18 had a longer disease duration (p = 0.036, p = 0.021). MRI had a high lesion detection sensitivity for type III focal cortical dysplasia (p = 0.003). Simultaneous hybrid PET/MRI showed high sensitivity in detecting type II and III focal cortical dysplasia lesions (p = 0.037). The lesions in simultaneous hybrid PET/MRI-positive focal cortical dysplasia patients were mostly located in the temporal lobe and in multiple lobes (p = 0.005, 0.040). Conclusion: Simultaneous hybrid PET/MRI has a high accuracy in detecting and classifying focal cortical dysplasia. The results of this study indicate that patients with focal cortical dysplasia who are positive on simultaneous hybrid PET/MRI have better postoperative prognoses.
31

Ferrando, Ornella, Franca Foppiano, Tindaro Scolaro, Chiara Gaeta, and Andrea Ciarmiello. "PET/CT images quantification for diagnostics and radiotherapy applications." Journal of Diagnostic Imaging in Therapy 2, no. 1 (February 16, 2015): 18–29. http://dx.doi.org/10.17229/jdit.2015-0216-013.

32

Wei, Qi, and Qi Liu. "Denoise PET Images Based on a Combining Method of EMD and ICA." Advanced Materials Research 981 (July 2014): 340–43. http://dx.doi.org/10.4028/www.scientific.net/amr.981.340.

Abstract:
The incidental component superimposed on the measured target signal is considered noise in positron emission tomography (PET) images. A novel method to denoise PET images based on Empirical Mode Decomposition (EMD) and Independent Component Analysis (ICA) combined with the Sparse Code Shrinkage (SCS) technique is proposed in this paper. EMD is executed to decompose a PET image into a number of Intrinsic Mode Functions (IMFs), which, after selection, are used to reconstruct a new PET image through an inverse EMD procedure. By applying ICA to the new PET image, an orthogonal dataset can be obtained and signal-noise separation can be realized. A clearer PET image can then be reconstructed by SCS. The simulation results indicate that the proposed method is effective in denoising PET images.
33

Dawood, M., N. Lang, F. Büther, M. Schäfers, O. Schober, and K. P. Schäfers. "Motion correction in PET/CT." Nuklearmedizin 44, S 01 (2005): S46—S50. http://dx.doi.org/10.1055/s-0038-1625215.

Abstract:
Motion in PET/CT leads to artifacts in the reconstructed PET images due to the different acquisition times of positron emission tomography and computed tomography. The effect of motion on cardiac PET/CT images is evaluated in this study and a novel approach for motion correction based on optical flow methods is outlined. The Lukas-Kanade optical flow algorithm is used to calculate the motion vector field on both simulated phantom data as well as measured human PET data. The motion of the myocardium is corrected by non-linear registration techniques and results are compared to uncorrected images.
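A much-simplified dense Lucas-Kanade estimate of a motion vector field between two frames can be written directly in NumPy; the window size and the synthetic phantom below are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def lucas_kanade_flow(frame1, frame2, win=7):
        """Dense Lucas-Kanade: solve a 2x2 least-squares system per pixel window."""
        Iy, Ix = np.gradient(frame1.astype(float))      # spatial gradients of frame 1
        It = frame2.astype(float) - frame1.astype(float) # temporal difference
        half = win // 2
        u = np.zeros(frame1.shape, dtype=float)
        v = np.zeros(frame1.shape, dtype=float)
        for i in range(half, frame1.shape[0] - half):
            for j in range(half, frame1.shape[1] - half):
                ix = Ix[i - half:i + half + 1, j - half:j + half + 1].ravel()
                iy = Iy[i - half:i + half + 1, j - half:j + half + 1].ravel()
                it = It[i - half:i + half + 1, j - half:j + half + 1].ravel()
                A = np.stack([ix, iy], axis=1)
                ata = A.T @ A
                if np.linalg.det(ata) > 1e-6:            # skip ill-conditioned windows
                    u[i, j], v[i, j] = -np.linalg.solve(ata, A.T @ it)
        return u, v

    # Synthetic "frames": a smooth uptake blob shifted by one pixel between them
    yy, xx = np.mgrid[0:48, 0:48]
    f1 = 100.0 * np.exp(-((yy - 24) ** 2 + (xx - 24) ** 2) / 60.0)
    f2 = 100.0 * np.exp(-((yy - 25) ** 2 + (xx - 24) ** 2) / 60.0)
    u, v = lucas_kanade_flow(f1, f2)
    print(round(float(v[18:22, 22:27].mean()), 2))   # approximately +1 pixel where gradients are strong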
34

Kang, Seung-Kwan, Si-Young Yie, and Jae-Sung Lee. "Noise2Noise Improved by Trainable Wavelet Coefficients for PET Denoising." Electronics 10, no. 13 (June 24, 2021): 1529. http://dx.doi.org/10.3390/electronics10131529.

Abstract:
The significant statistical noise and limited spatial resolution of positron emission tomography (PET) data in sinogram space results in the degradation of the quality and accuracy of reconstructed images. Although high-dose radiotracers and long acquisition times improve the PET image quality, the patients’ radiation exposure increases and the patient is more likely to move during the PET scan. Recently, various data-driven techniques based on supervised deep neural network learning have made remarkable progress in reducing noise in images. However, these conventional techniques require clean target images that are of limited availability for PET denoising. Therefore, in this study, we utilized the Noise2Noise framework, which requires only noisy image pairs for network training, to reduce the noise in the PET images. A trainable wavelet transform was proposed to improve the performance of the network. The proposed network was fed wavelet-decomposed images consisting of low- and high-pass components. The inverse wavelet transforms of the network output produced denoised images. The proposed Noise2Noise filter with wavelet transforms outperforms the original Noise2Noise method in the suppression of artefacts and preservation of abnormal uptakes. The quantitative analysis of the simulated PET uptake confirms the improved performance of the proposed method compared with the original Noise2Noise technique. In the clinical data, 10 s images filtered with Noise2Noise are virtually equivalent to 300 s images filtered with a 6 mm Gaussian filter. The incorporation of wavelet transforms in Noise2Noise network training results in the improvement of the image contrast. In conclusion, the performance of Noise2Noise filtering for PET images was improved by incorporating the trainable wavelet transform in the self-supervised deep learning framework.
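The Noise2Noise idea described above trains only on pairs of independently noisy images of the same object, without any clean target. A minimal PyTorch sketch with a small convolutional network standing in for the authors' wavelet-augmented model; the architecture, noise level, and phantom are assumptions:

    import torch
    import torch.nn as nn

    # A small denoising CNN (a stand-in, not the authors' trainable-wavelet network)
    net = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Noise2Noise: input and target are independent noisy realizations of the same object
    clean = torch.zeros(8, 1, 64, 64)
    clean[:, :, 24:40, 24:40] = 1.0                       # synthetic "uptake" phantom
    for step in range(100):
        noisy_a = clean + 0.3 * torch.randn_like(clean)   # e.g. one short-frame realization
        noisy_b = clean + 0.3 * torch.randn_like(clean)   # an independent realization
        opt.zero_grad()
        loss = loss_fn(net(noisy_a), noisy_b)             # no clean target is ever used
        loss.backward()
        opt.step()

    with torch.no_grad():
        denoised = net(clean + 0.3 * torch.randn_like(clean))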
35

Filipovic, Marina, Eric Barat, Thomas Dautremer, Claude Comtat, and Simon Stute. "PET Reconstruction of the Posterior Image Probability, Including Multimodal Images." IEEE Transactions on Medical Imaging 38, no. 7 (July 2019): 1643–54. http://dx.doi.org/10.1109/tmi.2018.2886050.

36

Farquhar, T. H., G. Chinn, C. K. Hoh, S. C. Huang, and E. J. Hoffman. "A nonlinear, image domain filtering method for cardiac PET images." IEEE Transactions on Nuclear Science 45, no. 4 (1998): 2073–79. http://dx.doi.org/10.1109/23.708300.

37

Kapur, Narinder. "Looking for images of memory." Behavioral and Brain Sciences 18, no. 2 (June 1995): 364–65. http://dx.doi.org/10.1017/s0140525x00038887.

Abstract:
This is an excellent book but it lacks a detailed presentation and formulation of images of memory. Positron emission tomography (PET) findings sometimes raise more enigmatic questions than they answer, with differences between studies and differences with established lesion evidence. Perhaps the book could have been more critical in its analysis of these enigmas, covering more of the basic issues and assumptions underlying PET research.
38

Xiang, Z. "PET/CT fusion in radiotherapy treatment planning for head and neck cancer." Journal of Clinical Oncology 27, no. 15_suppl (May 20, 2009): e17046-e17046. http://dx.doi.org/10.1200/jco.2009.27.15_suppl.e17046.

Abstract:
e17046 Background: Positron emission tomography/computed tomography (PET/CT) creates fusion images that combine tissue function (PET) and anatomy (CT), and it is playing an increasingly important role in the radiotherapy of cancer. The aim of this report is to investigate the value of PET/CT fusion in radiotherapy treatment planning for head and neck cancer. Methods: 17 patients with head and neck cancer underwent 18F-FDG PET/CT imaging. The primary lesions were all proved by pathology. PET/CT fusion images, PET images, and CT images of the same patient were analyzed frame by frame. TNM stage was analyzed based on PET/CT and CT. PET/CT-GTV and CT-GTV volumes were analyzed. Results: 59 malignant lesions were detected by PET/CT in the 17 patients. Among the 59 lesions, 31 were detected and displayed definitely by both PET and the plain CT scan; 23 were detected definitely by PET but not by the plain CT scan; while 5 were negative on PET but definite on the plain CT scan. The sensitivity of PET/CT was higher than that of PET or the plain CT scan alone. Changes in TNM stage occurred in 7 patients (41%) based on PET/CT. The median PET/CT-GTV and CT-GTV volumes were 84.3 cm3 (range, 46–364 cm3) and 116.2 cm3 (range, 58–472 cm3), respectively, a significant difference (p = 0.0005). Conclusions: PET/CT can increase the accuracy of staging and of defining target volumes for radiation therapy fields for head and neck cancer. No significant financial relationships to disclose.
39

Le, Quoc Cuong, Hidetaka Arimura, Kenta Ninomiya, Takumi Kodama, and Tetsuhiro Moriyama. "Can Persistent Homology Features Capture More Intrinsic Information about Tumors from 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Images of Head and Neck Cancer Patients?" Metabolites 12, no. 10 (October 14, 2022): 972. http://dx.doi.org/10.3390/metabo12100972.

Abstract:
This study hypothesized that persistent homology (PH) features could capture more intrinsic information about the metabolism and morphology of tumors from 18F-fluorodeoxyglucose positron emission tomography (PET)/computed tomography (CT) images of patients with head and neck (HN) cancer than other conventional features. PET/CT images and clinical variables of 207 patients were selected from the publicly available dataset of the Cancer Imaging Archive. PH images were generated from persistent diagrams obtained from PET/CT images. The PH features were derived from the PH PET/CT images. The signatures were constructed in a training cohort from features from CT, PET, PH-CT, and PH-PET images; clinical variables; and the combination of features and clinical variables. Signatures were evaluated using statistically significant differences (p-value, log-rank test) between survival curves for low- and high-risk groups and the C-index. In an independent test cohort, the signature consisting of PH-PET features and clinical variables exhibited the lowest log-rank p-value of 3.30 × 10−5 and C-index of 0.80, compared with log-rank p-values from 3.52 × 10−2 to 1.15 × 10−4 and C-indices from 0.34 to 0.79 for other signatures. This result suggests that PH features can capture the intrinsic information of tumors and predict prognosis in patients with HN cancer.
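The C-index and log-rank statistics used above to evaluate the signatures can be computed, for example, with the lifelines package; the synthetic risk scores and survival times below are illustrative, and lifelines is an assumed tooling choice rather than the authors':

    import numpy as np
    from lifelines.utils import concordance_index
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    n = 100
    risk_score = rng.random(n)                               # hypothetical signature output
    time = rng.exponential(scale=5.0 / (0.5 + risk_score))   # higher risk -> shorter survival
    event = rng.random(n) < 0.8                              # True = event observed, False = censored

    # Concordance index (higher risk should predict shorter survival, hence the minus sign)
    print("C-index:", round(concordance_index(time, -risk_score, event), 2))

    # Log-rank test between low- and high-risk groups split at the median score
    high = risk_score >= np.median(risk_score)
    res = logrank_test(time[high], time[~high],
                       event_observed_A=event[high], event_observed_B=event[~high])
    print("log-rank p-value:", res.p_value)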
40

Seitz, R. J., C. Bohm, T. Greitz, P. E. Roland, L. Eriksson, G. Blomqvist, G. Rosenqvist, and B. Nordell. "Accuracy and Precision of the Computerized Brain Atlas Programme for Localization and Quantification in Positron Emission Tomography." Journal of Cerebral Blood Flow & Metabolism 10, no. 4 (July 1990): 443–57. http://dx.doi.org/10.1038/jcbfm.1990.87.

Abstract:
The computerized brain atlas programme (CBA) provides a powerful tool for the anatomical analysis of functional images obtained with positron emission tomography (PET). With a repertoire of simple transformations, the data base of the CBA is first adapted to the anatomy of the subject's brain represented as a set of magnetic resonance (MR) or computed tomography (CT) images. After this, it is possible to spatially standardize (reformat) any set of tomographic images related to the subject, PET images, as well as CT and MR images, by applying the inverse atlas transformations. From these reformatted images, statistical images, such as average images and associated error images corresponding to different groups of subjects, may be produced. In all these images, anatomical structures can be localized using the atlas data base and the functional values can be evaluated quantitatively. The purpose of this study was to determine the spatial and quantitative accuracy and precision of the calculated regional mean values. Therefore, the CBA was applied to regional CBF (rCBF) measurements with [11C]fluoromethane and PET on 26 healthy male volunteers during rest and during three different physiological stimulation tasks. First, the spatial accuracy and precision of the reformation process were determined by measuring the spread of defined anatomical structures in the reformatted MR images of the subjects. Second, the mean global CBF and the mean rCBF in the average PET images were compared with the global CBF and rCBF in the original PET images. Our results demonstrate that the reformation process accurately transformed the individual brains of the subjects into the standard brain anatomy of the CBA. The precision of the reformation process had an SD of ∼1 mm for the lateral dislocation of midline structures and ∼2–3 mm for the dislocation of the inner and outer brain surfaces. The quantitative rCBF values of the original PET images were accurately represented in the reformatted PET images. Moreover, this study shows that the application of the CBA improves the analysis of functional PET images: (a) The average PET images had a low background noise [0.4 ml/100 g/min ± 0.7 (SD)] compared to the mean rCBF changes specifically induced by physiological stimulation. (b) The reformatted PET images had a voxel volume of 10.9 mm3. Owing to this high sampling resolution, it was possible to differentiate the mean rCBF changes in adjacent activated fields such as the left motor hand area from the sensory hand area and the left premotor cortex. (c) By calculating the relation of the mean rCBF change to the SEM of the mean rCBF change on a pixel-by-pixel basis, areas with significant rCBF changes could be determined. By use of the CBA, it was found that there was a high intersubject consistency in location of stimulation-induced rCBF changes. Furthermore, the rCBF changes in specifically stimulated areas were of similar magnitude among the subjects. It was shown that the stimulation-induced mean rCBF increases may be accompanied by mean rCBF decreases in other areas.
41

Huang, Xinrui, Yun Zhou, Shangliang Bao, and Sung-Cheng Huang. "Clustering-Based Linear Least Square Fitting Method for Generation of Parametric Images in Dynamic FDG PET Studies." International Journal of Biomedical Imaging 2007 (2007): 1–8. http://dx.doi.org/10.1155/2007/65641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Parametric images generated from dynamic positron emission tomography (PET) studies are useful for presenting functional/biological information in 3-dimensional space, but usually suffer from their high sensitivity to image noise. To improve the quality of these images, we proposed in this study a modified linear least square (LLS) fitting method named cLLS that incorporates a clustering-based spatial constraint for generation of parametric images from dynamic PET data of high noise levels. In this method, the combination of K-means and hierarchical cluster analysis was used to classify dynamic PET data. Compared with conventional LLS, cLLS can achieve high statistical reliability in the generated parametric images without incurring a high computational burden. The effectiveness of the method was demonstrated both with computer simulation and with a human brain dynamic FDG PET study. The cLLS method is expected to be useful for generation of parametric images from dynamic FDG PET studies.
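To make the idea concrete, here is a minimal sketch of a clustering-constrained linear least-squares fit. It uses plain k-means (not the paper's combined k-means/hierarchical scheme) and a Patlak-style linear model as a stand-in for the cLLS operational equations; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_patlak_slopes(tacs, plasma, times, n_clusters=8, t_star_idx=10):
    """Clustering-constrained linear least-squares fit for parametric imaging.

    tacs   : (n_voxels, n_frames) tissue time-activity curves
    plasma : (n_frames,) plasma input function (assumed non-zero after t_star_idx)
    times  : (n_frames,) frame mid-times
    Each voxel TAC is replaced by the mean TAC of its k-means cluster before a
    Patlak-style ordinary least-squares fit, reducing the noise sensitivity of
    the voxel-wise slope (Ki) estimates.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(tacs)
    cluster_means = np.vstack([tacs[labels == k].mean(axis=0) for k in range(n_clusters)])
    smoothed = cluster_means[labels]                   # denoised TAC per voxel

    # Patlak transformation: y = C_t(t) / C_p(t), x = integral of C_p up to t / C_p(t)
    cp_int = np.concatenate(
        ([0.0], np.cumsum(0.5 * (plasma[1:] + plasma[:-1]) * np.diff(times)))
    )
    x = (cp_int / plasma)[t_star_idx:]
    y = smoothed[:, t_star_idx:] / plasma[t_star_idx:]

    design = np.column_stack([x, np.ones_like(x)])     # slope + intercept
    coefs, *_ = np.linalg.lstsq(design, y.T, rcond=None)
    return coefs[0]                                    # Ki estimate for every voxel
```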
42

Lindgren Belal, Sarah, May Sadik, Reza Kaboteh, Nezar Hasani, Olof Enqvist, Linus Svärm, Fredrik Kahl, et al. "Association of PET index quantifying skeletal uptake in NaF PET/CT images with overall survival in prostate cancer patients." Journal of Clinical Oncology 35, no. 6_suppl (February 20, 2017): 178. http://dx.doi.org/10.1200/jco.2017.35.6_suppl.178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
178 Background: Bone Scan Index (BSI) derived from 2D whole-body bone scans is considered an imaging biomarker of bone metastases burden carrying prognostic information. Sodium fluoride (NaF) PET/CT is more sensitive than bone scan in detecting bone changes due to metastases. We aimed to develop a semi-quantitative PET index similar to the BSI for NaF PET/CT imaging and to study its relationship to BSI and overall survival in patients with prostate cancer. Methods: NaF PET/CT and bone scans were analyzed in 48 patients (aged 53-92 years) with prostate cancer. Thoracic and lumbar spines, sacrum, pelvis, ribs, scapulae, clavicles, and sternum were automatically segmented from the CT images, representing approximately 1/3 of the total skeletal volume. Hotspots in the PET images, within the segmented parts in the CT images, were visually classified and hotspots interpreted as metastases were included in the analysis. The PET index was defined as the quotient obtained as the hotspot volume from the PET images divided by the segmented bone tissue volume from the CT images. BSI was automatically calculated using EXINIboneBSI. Results: The correlation between the PET index and BSI was r² = 0.54. The median BSI was 0.39 (IQR 0.08-2.05). The patients with a BSI ≥ 0.39 had a significantly shorter median survival time than patients with a BSI < 0.39 (2.3 years vs. not reached after 5 years). BSI was significantly associated with overall survival (HR 1.13, 95% CI 1.13 to 1.41; p < 0.001), and the C-index was 0.68. The median PET index was 0.53 (IQR 0.02-2.62). The patients with a PET index ≥ 0.53 had a significantly shorter median survival time than patients with a PET index < 0.53 (2.5 years vs. not reached after 5 years). The PET index was significantly associated with overall survival (HR 1.18, 95% CI 1.01 to 1.30; p < 0.001) and C-index was 0.68. Conclusions: PET index based on NaF PET/CT images was correlated to BSI and significantly associated with overall survival in patients with prostate cancer. Further studies are needed to evaluate the clinical value of this novel 3D PET index as a possible future imaging biomarker.
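The PET index itself is a simple volume ratio; a minimal sketch, assuming the hotspot and bone masks have already been produced and resampled onto a common grid (the mask names are illustrative):

```python
import numpy as np

def pet_index(hotspot_mask: np.ndarray, bone_mask: np.ndarray) -> float:
    """Fraction of the segmented skeletal volume occupied by metastatic hotspots.

    hotspot_mask : boolean array marking PET hotspots classified as metastases
    bone_mask    : boolean array marking the CT-segmented skeletal regions
    Both masks are assumed to live on the same voxel grid, so the common voxel
    volume cancels out of the ratio.
    """
    hotspot_voxels = np.count_nonzero(hotspot_mask & bone_mask)
    bone_voxels = np.count_nonzero(bone_mask)
    return hotspot_voxels / bone_voxels
```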
43

Song, Tzu-An, Fan Yang, and Joyita Dutta. "Noise2Void: unsupervised denoising of PET images." Physics in Medicine & Biology 66, no. 21 (November 1, 2021): 214002. http://dx.doi.org/10.1088/1361-6560/ac30a0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Vega-González, Ivan F., Ernesto Roldán-Valadez, and Guillermo Valdiviezo-Cárdenas. "Fused PET/CT Images in Hepatocarcinoma." Annals of Hepatology 5, no. 3 (July 2006): 164–65. http://dx.doi.org/10.1016/s1665-2681(19)32001-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mykkänen, Jouni M., Martti Juhola, and Ulla Ruotsalainen. "Extracting VOIs from brain PET images." International Journal of Medical Informatics 58-59 (September 2000): 51–57. http://dx.doi.org/10.1016/s1386-5056(00)00075-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Meyer, J. H., R. N. Gunn, R. Myers, and P. M. Grasby. "Spatial Normalization of PET Ligand Images." NeuroImage 7, no. 4 (May 1998): A27. http://dx.doi.org/10.1016/s1053-8119(18)31896-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hsu, Chih-Yu, Chun-You Liu, and Chung-Ming Chen. "Automatic segmentation of liver PET images." Computerized Medical Imaging and Graphics 32, no. 7 (October 2008): 601–10. http://dx.doi.org/10.1016/j.compmedimag.2008.07.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gershon, N. D. "Visualization Blackboard-visualizing 3D PET images." IEEE Computer Graphics and Applications 11, no. 5 (September 1991): 11–13. http://dx.doi.org/10.1109/38.90562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Jaakkola, Maria K., Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S. Helin, Tuuli A. Nissinen, et al. "Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering." International Journal of Biomedical Imaging 2023 (December 5, 2023): 1–13. http://dx.doi.org/10.1155/2023/3819587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Clustering time activity curves of PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied due to the available total-body data being limited to animal studies. Now, the new PET scanners providing the opportunity to acquire total-body PET scans also from humans are becoming more common, which opens plenty of new clinically interesting opportunities. Therefore, organ-level segmentation of PET images has important applications, yet it lacks sufficient research. In this proof of concept study, we evaluate if the previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used, but the segmentation is done purely based on the dynamic PET images. The tested methods are commonly used building blocks of the more sophisticated methods rather than final methods as such, and our goal is to evaluate if these basic tools are suited for the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets from human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and Gaussian mixture model (GMM), for further analyses. We combined k-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the forthcoming large human PET images and a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with k-means has weaker performance than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently, it was by far the slowest one among the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that there is a lack of an accurate and computationally light general-purpose segmentation method that can analyse dynamic total-body PET images.
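As a concrete illustration of the most promising pipeline named above (PCA followed by k-means on voxel time-activity curves), here is a minimal scikit-learn sketch; the TAC normalization, component count, and cluster count are illustrative choices, not those reported in the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def segment_dynamic_pet(tacs: np.ndarray, n_components: int = 10, n_clusters: int = 12) -> np.ndarray:
    """Cluster voxel time-activity curves of a dynamic total-body PET scan.

    tacs : (n_voxels, n_frames) array; each row is one voxel's TAC.
    Returns one integer cluster label per voxel. PCA reduces the temporal
    dimension before k-means, keeping memory use and run time manageable for
    total-body data.
    """
    # Normalize each TAC so that clustering reflects curve shape rather than amplitude.
    norms = np.linalg.norm(tacs, axis=1, keepdims=True)
    shapes = np.divide(tacs, norms, out=np.zeros_like(tacs, dtype=float), where=norms > 0)

    reduced = PCA(n_components=n_components).fit_transform(shapes)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reduced)
```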
50

Ye, Shiping, Chaoxiang Chen, Zhican Bai, Jinming Wang, Xiaoxaio Yao, and Olga Nedzvedz. "Intelligent Labeling of Tumor Lesions Based on Positron Emission Tomography/Computed Tomography." Sensors 22, no. 14 (July 10, 2022): 5171. http://dx.doi.org/10.3390/s22145171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography/computed tomography (PET/CT) plays a vital role in diagnosing tumors. However, PET/CT imaging relies primarily on manual interpretation and labeling by medical professionals. An enormous workload will affect the training samples’ construction for deep learning. The labeling of tumor lesions in PET/CT images involves the intersection of computer graphics and medicine, such as registration, a fusion of medical images, and labeling of lesions. This paper extends the linear interpolation, enhances it in a specific area of the PET image, and uses the outer frame scaling of the PET/CT image and the least-squares residual affine method. The PET and CT images are subjected to wavelet transformation and then synthesized in proportion to form a PET/CT fusion image. According to the absorption of 18F-FDG (fluorodeoxyglucose) SUV in the PET image, the professionals randomly select a point in the focus area in the fusion image, and the system will automatically select the seed point of the focus area to delineate the tumor focus with the regional growth method. Finally, the focus delineated on the PET and CT fusion images is automatically mapped to CT images in the form of polygons, and rectangular segmentation and labeling are formed. This study took the actual PET/CT of patients with lymphatic cancer as an example. The semiautomatic labeling of the system and the manual labeling of imaging specialists were compared and verified. The recognition rate was 93.35%, and the misjudgment rate was 6.52%.
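The proportional wavelet-domain fusion step can be sketched in a few lines with PyWavelets; the wavelet family, single-level decomposition, and blending weight are illustrative assumptions, and the registration and region-growing stages described above are omitted:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_pet_ct(pet_slice: np.ndarray, ct_slice: np.ndarray, pet_weight: float = 0.6) -> np.ndarray:
    """Proportional wavelet-domain fusion of co-registered PET and CT slices.

    Both inputs must already be registered and resampled to the same grid
    (ideally even-sized for a clean single-level transform). Each image is
    decomposed with a 2-D discrete wavelet transform, the coefficient bands are
    blended in a fixed proportion, and the fused slice is reconstructed.
    """
    def blend(a, b):
        return pet_weight * a + (1.0 - pet_weight) * b

    pet_ca, (pet_ch, pet_cv, pet_cd) = pywt.dwt2(pet_slice.astype(float), "haar")
    ct_ca, (ct_ch, ct_cv, ct_cd) = pywt.dwt2(ct_slice.astype(float), "haar")

    fused = (blend(pet_ca, ct_ca),
             (blend(pet_ch, ct_ch), blend(pet_cv, ct_cv), blend(pet_cd, ct_cd)))
    return pywt.idwt2(fused, "haar")
```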
