A selection of scholarly literature on the topic "Non-rigid alignment errors"

Browse the lists of current journal articles, books, dissertations, theses, and other scholarly sources on the topic "Non-rigid alignment errors".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Non-rigid alignment errors"

1

Römer, W., T. Hothorn, A. Pfahlberg, J. Hornegger, W. Bautz, T. Kuwert, and A. Nömayr. "Anatomical accuracy of lesion localization." Nuklearmedizin 44, no. 04 (2005): 149–55. http://dx.doi.org/10.1055/s-0038-1625102.

Full text of the source
Abstract:
Summary: The aim of this study was to evaluate the anatomical accuracy and reproducibility of retrospective interactive rigid image registration (RIR) between routinely archived X-ray computed tomography (CT) and positron emission tomography performed with 18F-deoxyglucose (FDG-PET) in oncological patients. Methods: Two observers registered PET and CT data obtained in 37 patients using a commercially available image fusion tool. RIR was performed separately for the thorax and the abdomen using physiological FDG uptake in several organs as a reference. One observer performed the procedure twice (O1a and O1b), another person once (O2). For 94 malignant lesions, clearly visible in CT and PET, the signed and absolute distances between their representation on PET and CT were measured in the X-, Y-, and Z-direction with reference to a coordinate system centered in the CT representation of each lesion (X-, Y-, Z-distances). Results: The mean differences of the signed and absolute distances between O1a, O1b, and O2 did not exceed 3 mm in any dimension. The absolute X-, Y-, and Z-distances ranged between 0.57 ± 0.58 cm for O1a (X-direction) and 1.12 ± 1.28 cm for O2 (Z-direction). When averaging the absolute distances measured by O1a, O1b, and O2, the percentage of lesions misregistered by less than 1.5 cm was 91% for the X-, 88% for the Y-, and 77% for the Z-direction. The larger fusion error determined for the remaining lesions was caused by non-rigid body transformations due to differences in breathing, arm position, or bowel movements between the two examinations. Mixed-effects analysis of the signed and absolute X-, Y-, and Z-distances disclosed a significantly greater misalignment in the thorax than in the abdomen, as well as axially than transaxially. Conclusion: The anatomical inaccuracy of RIR can be expected to be <1.5 cm for the majority of neoplastic foci. Errors of alignment are larger in the thorax and in the Z-direction, due to non-rigid body transformations caused, e.g., by breathing.
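The per-axis error analysis described above reduces to simple vector arithmetic on paired lesion coordinates. Below is a minimal NumPy sketch of that kind of analysis, assuming lesion centroids are already available in a common coordinate system; the coordinate values are purely illustrative, and only the 1.5 cm threshold is taken from the abstract.

```python
import numpy as np

# Hypothetical paired lesion centroids (in cm); one row per lesion, columns X, Y, Z.
# The CT positions serve as the origin of each lesion-centred coordinate system.
ct_xyz = np.zeros((3, 3))                          # CT reference positions
pet_xyz = np.array([[0.4, -0.2, 1.1],              # PET positions after rigid registration
                    [0.1,  0.3, -0.6],
                    [-0.9, 0.5, 2.0]])

signed = pet_xyz - ct_xyz                          # signed X-, Y-, Z-distances
absolute = np.abs(signed)                          # absolute X-, Y-, Z-distances

print("mean signed distance per axis (cm):", signed.mean(axis=0))
print("mean absolute distance per axis (cm):", absolute.mean(axis=0))

# Fraction of lesions misregistered by less than 1.5 cm in each direction,
# the same kind of figure as the 91/88/77 % reported for X, Y and Z.
threshold_cm = 1.5
within = (absolute < threshold_cm).mean(axis=0) * 100
print("lesions within 1.5 cm per axis (%):", within)
```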
2

Venet, Ludovic, Sarthak Pati, Michael D. Feldman, MacLean P. Nasrallah, Paul Yushkevich, and Spyridon Bakas. "Accurate and Robust Alignment of Differently Stained Histologic Images Based on Greedy Diffeomorphic Registration." Applied Sciences 11, no. 4 (February 21, 2021): 1892. http://dx.doi.org/10.3390/app11041892.

Full text of the source
Abstract:
Histopathologic assessment routinely provides rich microscopic information about tissue structure and disease process. However, the sections used are very thin, and essentially capture only 2D representations of a certain tissue sample. Accurate and robust alignment of sequentially cut 2D slices should contribute to more comprehensive assessment accounting for surrounding 3D information. Towards this end, we here propose a two-step diffeomorphic registration approach that aligns differently stained histology slides to each other, starting with an initial affine step followed by estimating a deformation field. It was quantitatively evaluated on ample (n = 481) and diverse data from the automatic non-rigid histological image registration challenge, where it was awarded the second rank. The obtained results demonstrate the ability of the proposed approach to robustly (average robustness = 0.9898) and accurately (average relative target registration error = 0.2%) align differently stained histology slices of various anatomical sites while maintaining reasonable computational efficiency (<1 min per registration). The method was developed by adapting a general-purpose registration algorithm designed for 3D radiographic scans and achieved consistently accurate results for aligning high-resolution 2D histologic images. Accurate alignment of histologic images can contribute to a better understanding of the spatial arrangement and growth patterns of cells, vessels, matrix, nerves, and immune cell interactions.
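The two headline metrics quoted above, relative target registration error and robustness, are landmark based. The sketch below shows one common way such metrics are computed from paired landmarks, assuming the ANHIR-style convention of normalising the landmark error by the image diagonal; the exact challenge definitions may differ slightly, and all values are illustrative.

```python
import numpy as np

def rtre(warped_landmarks, target_landmarks, image_shape):
    """Relative target registration error: Euclidean landmark error
    normalised by the image diagonal (an ANHIR-style convention)."""
    diag = np.linalg.norm(np.asarray(image_shape, dtype=float))
    return np.linalg.norm(warped_landmarks - target_landmarks, axis=1) / diag

def robustness(initial_landmarks, warped_landmarks, target_landmarks):
    """Fraction of landmarks whose error decreased after registration."""
    before = np.linalg.norm(initial_landmarks - target_landmarks, axis=1)
    after = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    return float(np.mean(after < before))

# Illustrative landmark sets (pixels): target positions on the fixed slide,
# the same landmarks before registration, and after the affine + deformable steps.
target = np.array([[100.0, 120.0], [410.0, 385.0], [780.0, 640.0]])
initial = target + np.array([[14.0, -9.0], [11.0, 7.0], [-13.0, 10.0]])
warped = target + np.array([[1.0, -0.5], [0.8, 0.4], [-1.2, 0.9]])

print("mean rTRE:", rtre(warped, target, (1000, 800)).mean())
print("robustness:", robustness(initial, warped, target))
```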
3

Gatenbee, Chandler Dean, Ann-Marie Baker, Sandhya Prabhakaran, Mark Robertson-Tessi, Trevor Graham, and Alexander R. Anderson. "Abstract 2078: VALIS: Virtual Alignment of pathoLogy Image Series for multi-gigapixel whole slide images." Cancer Research 83, no. 7_Supplement (April 4, 2023): 2078. http://dx.doi.org/10.1158/1538-7445.am2023-2078.

Full text of the source
Abstract:
Abstract Interest in spatial omics is on the rise, but generation of the highly multiplexed images used in many spatial analyses remains challenging due to cost, expertise, methodical constraints, and/or access to technology. An alternative to performing highly multiplexed staining is to register collections of whole slide images (WSI), creating a collection of aligned images that can undergo spatial analyses. However, registration of WSI is a two-part problem, the first part being the alignment itself and the second being the application of the transformations to huge multi-gigapixel images. To address both challenges, we have developed the Virtual Alignment of pathoLogy Image Series (VALIS) software, which enables one to rapidly and easily generate highly multiplexed images by aligning (registering) any number of multi-gigapixel whole slide images (WSI) stained using immunohistochemistry (IHC) and/or immunofluorescence (IF). Benchmarking using the publicly available 2019 ANHIR and 2022 ACROBAT Grand Challenge datasets indicates that VALIS provides state-of-the-art accuracy, being one of the most accurate publicly available methods in the ANHIR challenge and the most accurate open-source method in the ACROBAT challenge. VALIS is able to read, register, and save multi-gigapixel images as ome.tiff, thereby addressing the second challenge. In addition to the benchmarking datasets, the generalizability of VALIS has been tested with 273 IHC samples and 340 IF samples, each of which contained between 2 and 69 images per sample. In total, VALIS has therefore been tested with 5,138 images. The registered WSI tend to have low error and are completed within a matter of minutes. VALIS is written in Python, requires only a few lines of code for execution, and is readily available and fully documented. VALIS therefore provides a free, open-source, flexible, scalable, robust, and easy to use pipeline for rigid and non-rigid registration of multi-gigapixel WSI, facilitating spatial analyses of prospective and existing datasets and breathing new life into the countless collections of brightfield and immunofluorescence images. Citation Format: Chandler Dean Gatenbee, Ann-Marie Baker, Sandhya Prabhakaran, Mark Robertson-Tessi, Trevor Graham, Alexander R. Anderson. VALIS: Virtual Alignment of pathoLogy Image Series for multi-gigapixel whole slide images [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2023; Part 1 (Regular and Invited Abstracts); 2023 Apr 14-19; Orlando, FL. Philadelphia (PA): AACR; Cancer Res 2023;83(7_Suppl):Abstract nr 2078.
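VALIS is distributed as an open-source Python package, and the abstract stresses that a registration run takes only a few lines of code. The sketch below follows the usage pattern shown in the package's published examples; the class and method names (registration.Valis, register, warp_and_save_slides, kill_jvm) are quoted from that documentation from memory and may differ between versions, and the directory paths are placeholders.

```python
# Sketch of a VALIS-style workflow; method names follow the package's
# published examples and may vary between versions.
from valis import registration

slide_src_dir = "path/to/unregistered_slides"    # placeholder paths
results_dst_dir = "path/to/valis_results"
registered_dir = "path/to/registered_ome_tiffs"

# Rigid + non-rigid registration of every WSI found in the source folder.
registrar = registration.Valis(slide_src_dir, results_dst_dir)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Apply the transforms to the full-resolution slides and save them as ome.tiff.
registrar.warp_and_save_slides(registered_dir)

# VALIS reads slides through Bio-Formats, so shut down the JVM when done.
registration.kill_jvm()
```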
4

Ipsen, Svenja, Ralf Bruder, Esben Schjødt Worm, Rune Hansen, Per Rugaard Poulsen, Morten Høyer, and Achim Schweikard. "Simultaneous acquisition of 4D ultrasound and wireless electromagnetic tracking for in-vivo accuracy validation." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 75–78. http://dx.doi.org/10.1515/cdbme-2017-0016.

Full text of the source
Abstract:
Abstract: Ultrasound is being increasingly investigated for real-time target localization in image-guided interventions. Yet, in-vivo validation remains challenging due to the difficulty of obtaining a reliable ground truth. For this purpose, real-time volumetric (4D) ultrasound imaging was performed simultaneously with electromagnetic localization of three wireless transponders implanted in the liver of a radiotherapy patient. 4D ultrasound and electromagnetic tracking were acquired at frame rates of 12 Hz and 8 Hz, respectively, during free breathing over 8 min following treatment. The electromagnetic antenna was placed directly above and the ultrasound probe on the right side of the patient to visualize the liver transponders. It was possible to record 25.7 s of overlapping ultrasound and electromagnetic position data of one transponder. Good spatial alignment with 0.6 mm 3D root-mean-square error between both traces was achieved using a rigid landmark transform. However, data acquisition was impaired since the electromagnetic tracking strongly influenced the ultrasound equipment and vice versa. High-intensity noise streaks appeared in the ultrasound scan lines irrespective of the chosen frequency (1.7-3.3 MHz, 2/4 MHz harmonic). To allow for target visualization and tracking in the ultrasound volumes despite the artefacts, an online filter was designed in which corrupted pixels in the newest ultrasound frame were replaced with non-corrupted pixels from preceding frames. Aside from these artefacts, the recorded electromagnetic tracking data was fragmented, and only the transponder closest to the antenna could be detected over a limited period of six consecutive breathing cycles. This problem was most likely caused by interference from the metal holder of the ultrasound probe and was solved in a subsequent experiment using a 3D-printed non-metal probe fixation. Real-time wireless electromagnetic tracking was compared with 4D ultrasound imaging in-vivo for the first time. For stable tracking, large metal components need to be avoided during data acquisition and ultrasound filtering is required.
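The 0.6 mm figure above is the 3D root-mean-square residual left after rigidly aligning the two position traces with a landmark transform. A minimal NumPy sketch of such a least-squares rigid (Kabsch) alignment and the resulting RMS error is given below; the trace data are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid (rotation + translation) alignment of paired
    3D points (Kabsch algorithm); returns the aligned source points."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return (source - src_c) @ R.T + tgt_c

# Illustrative paired traces (mm): e.g. transponder positions from electromagnetic
# tracking and the matching positions extracted from the 4D ultrasound volumes.
rng = np.random.default_rng(0)
em_trace = rng.normal(scale=5.0, size=(200, 3))
rotation = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
us_trace = em_trace @ rotation.T + 2.0                      # rotated and shifted copy
us_trace += rng.normal(scale=0.3, size=us_trace.shape)      # measurement noise

aligned = rigid_align(em_trace, us_trace)
rmse = np.sqrt(np.mean(np.sum((aligned - us_trace) ** 2, axis=1)))
print(f"3D RMS error after rigid landmark alignment: {rmse:.2f} mm")
```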
5

Rios-Munoz, G., C. Perez-Hernandez, F. Fernandez-Aviles, and A. Arenal. "Atrial chamber colocalization for multiple 3D imaging techniques in atrial fibrillation." European Heart Journal 42, Supplement_1 (October 1, 2021). http://dx.doi.org/10.1093/eurheartj/ehab724.0486.

Full text of the source
Abstract:
Abstract Introduction: There exist many imaging techniques and systems to reproduce atrial chambers in 3D. These technologies include electroanatomical (EA) mapping systems, noninvasive electrocardiographic imaging (ECGI), magnetic resonance imaging (MRI), or computed tomography (CT) scans. In the case of atrial fibrillation (AF), the most employed non-pharmacological treatment is catheter ablation to electrically isolate the pulmonary veins from the rest of the left atrium. Driver mechanisms such as focal or rotational activity have been proposed as possible initiating and maintaining mechanisms of AF. However, correspondence and validation of these sites when several systems are employed in the same patient remains a challenge, as they are mostly manually aligned based on visual inspection. Purpose: To develop an automatic 3D alignment algorithm for cardiac 3D meshes to colocalize points between atrial maps generated with multiple EA mapping systems, ECGI, MRI, or CT scans. Methods: A total of 25 left atrial meshes from persistent AF patients were exported from an EA mapping system. The total number of vertices for all the meshes was 2545444 points (101817.8±13593.3 points per map). A reference mesh was employed with minor modifications [1]. All meshes were manually segmented into 12 different left atrial regions, see Table for the region names. The method implements a non-rigid variant of the iterative closest point algorithm to transform the atrial mesh onto the reference one, see Figure. The geographical distance between the mean position of the 12 different segmented reference areas and the 12 transformed points was employed as the performance metric. Results: The global error for all the fiducial points in all left atrial meshes was 11.57±2.55 mm. The average local errors for the 12 atrial areas are summarized in the Table. The best three aligned areas were the RSPV, atrial septum, and lateral wall. The areas with the lowest alignment accuracy were the LAA, LSPV, and atrial roof. Conclusions: The algorithm provides a promising solution to evaluate and validate site-related results from different systems, e.g., rotational activity presence between EA mapping and ECGI systems. The method works automatically for any given chamber anatomy or any number of points. No prior segmentation is needed since the transformation and co-localization are applied to the raw chamber mesh. Further analysis with a larger mesh database is needed. Funding Acknowledgement: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Instituto de Salud Carlos III and Ministerio de Ciencia, Innovación y Universidades
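The method above builds on the iterative closest point (ICP) idea: repeatedly match each mesh vertex to its nearest neighbour on the reference mesh and update the transform. The sketch below implements only the rigid skeleton of that loop with NumPy and SciPy, since the paper's non-rigid variant is not specified in the abstract; the point clouds and the single "region" used for the error metric are synthetic stand-ins for segmented atrial meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(moving, reference, n_iter=20):
    """Small rigid ICP: match each moving vertex to its nearest reference
    vertex, then update rotation/translation by least squares (Kabsch).
    The published method uses a non-rigid ICP variant; this is only the
    rigid skeleton of the closest-point/update loop."""
    tree = cKDTree(reference)
    current = moving.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)                 # closest-point correspondences
        matched = reference[idx]
        mc, rc = current.mean(axis=0), matched.mean(axis=0)
        H = (current - mc).T @ (matched - rc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        current = (current - mc) @ R.T + rc
    return current

# Illustrative "meshes": random vertex clouds standing in for atrial surfaces (mm).
rng = np.random.default_rng(1)
reference_mesh = rng.normal(size=(2000, 3)) * [30.0, 25.0, 20.0]
rotation = np.array([[0.96, -0.28, 0.0], [0.28, 0.96, 0.0], [0.0, 0.0, 1.0]])
moving_mesh = reference_mesh @ rotation.T + [5.0, -3.0, 2.0]

aligned = icp_rigid(moving_mesh, reference_mesh)

# Region-wise error as in the abstract: distance between the mean positions
# of matching segmented areas (here a single fake "region" of vertices).
region = slice(0, 150)
err = np.linalg.norm(aligned[region].mean(axis=0) - reference_mesh[region].mean(axis=0))
print(f"fiducial-region error: {err:.2f} mm")
```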
6

Chen, Xiao, Yang Zhang, Guoquan Cao, Jiahuan Zhou, Ya Lin, Boyang Chen, Ke Nie, Gangze Fu, Min-Ying Su, and Meihao Wang. "Dynamic change of COVID-19 lung infection evaluated using co-registration of serial chest CT images." Frontiers in Public Health 10 (August 12, 2022). http://dx.doi.org/10.3389/fpubh.2022.915615.

Full text of the source
Abstract:
Purpose: To evaluate the volumetric change of COVID-19 lesions in the lung of patients receiving serial CT imaging for monitoring the evolution of the disease and the response to treatment. Materials and methods: A total of 48 patients, 28 males and 20 females, who were confirmed to have COVID-19 infection and received chest CT examination, were identified. The age range was 21–93 years old, with a mean of 54 ± 18 years. Of them, 33 patients received the first follow-up (F/U) scan, 29 patients received the second F/U scan, and 11 patients received the third F/U scan. The lesion region of interest (ROI) was manually outlined. A two-step registration method, first using the Affine alignment, followed by the non-rigid Demons algorithm, was developed to match the lung areas on the baseline and F/U images. The baseline lesion ROI was mapped to the F/U images using the obtained geometric transformation matrix, and the radiologist outlined the lesion ROI on F/U CT again. Results: The median (interquartile range) lesion volume (cm3) was 30.9 (83.1) at baseline CT exam, 18.3 (43.9) at first F/U, 7.6 (18.9) at second F/U, and 0.6 (19.1) at third F/U, which showed a significant trend of decrease with time. The two-step registration could significantly decrease the mean squared error (MSE) between baseline and F/U images with p < 0.001. The method could match the lung areas and the large vessels inside the lung. When using the mapped baseline ROIs as references, the second-look ROI drawing showed a significantly increased volume, p < 0.05, presumably due to the consideration of all the infected areas at baseline. Conclusion: The results suggest that the registration method can be applied to assist in the evaluation of longitudinal changes of COVID-19 lesions on chest CT.
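A comparable two-step pipeline (affine initialisation followed by the Demons algorithm) can be assembled with SimpleITK, as sketched below. The paper does not state its implementation, so the library choice, parameter values, and file names here are assumptions for illustration only; the MSE check at the end mirrors the comparison reported in the abstract.

```python
import numpy as np
import SimpleITK as sitk

fixed = sitk.ReadImage("baseline_ct.nii.gz", sitk.sitkFloat32)    # placeholder files
moving = sitk.ReadImage("followup_ct.nii.gz", sitk.sitkFloat32)

# Step 1: affine alignment of the lung regions.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
affine = reg.Execute(fixed, moving)
moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, -1000.0)

# Step 2: non-rigid refinement with the Demons algorithm.
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)          # smoothing of the displacement field
displacement = demons.Execute(fixed, moving_affine)
warped = sitk.Resample(moving_affine, fixed,
                       sitk.DisplacementFieldTransform(displacement),
                       sitk.sitkLinear, -1000.0)

# MSE between baseline and follow-up, after the affine step and after Demons.
f, m0, m1 = (sitk.GetArrayFromImage(im) for im in (fixed, moving_affine, warped))
print("MSE after affine:", np.mean((f - m0) ** 2))
print("MSE after affine + Demons:", np.mean((f - m1) ** 2))
```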
7

Chan, Yan Chi Ivy, Minglun Li, Katia Parodi, Claus Belka, Guillaume Landry, and Christopher Kurz. "Feasibility of CycleGAN enhanced low dose CBCT imaging for prostate radiotherapy dose calculation." Physics in Medicine & Biology, April 13, 2023. http://dx.doi.org/10.1088/1361-6560/acccce.

Full text of the source
Abstract:
Abstract Objective: Daily CBCT imaging during the course of fractionated radiotherapy treatment can enable online adaptive radiotherapy but also expose patients to a non-negligible amount of radiation dose. This work investigates the feasibility of low dose CBCT imaging capable of enabling accurate prostate radiotherapy dose calculation with only 25% of the projections by overcoming under-sampling artifacts and correcting CT numbers by employing cycle-consistent generative adversarial networks (cycleGAN). Approach: Uncorrected CBCTs of 41 prostate cancer patients, acquired with ∼350 projections (CBCTorg), were retrospectively under-sampled to 25% dose images (CBCTLD) with only ∼90 projections and reconstructed using FDK. We adapted a cycleGAN including shape loss to translate CBCTLD into planning CT (pCT) equivalent images (CBCTLD_GAN). An alternative cycleGAN with a generator residual connection was implemented to improve anatomical fidelity (CBCTLD_ResGAN). Unpaired 4-fold cross-validation (33 patients) was performed to allow using the median of 4 models as output. Deformable image registration was used to generate virtual CTs (vCT) for Hounsfield units (HU) accuracy evaluation on 8 additional test patients. Volumetric modulated arc therapy (VMAT) plans were optimized on vCT, and recalculated on CBCTLD_GAN and CBCTLD_ResGAN to determine dose calculation accuracy. CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg were registered to pCT and residual shifts were analyzed. Bladder and rectum were manually contoured on CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg and compared in terms of dice similarity coefficient (DSC), average and 95th percentile Hausdorff distance (HDavg, HD95). Main results: The mean absolute error decreased from 126 HU for CBCTLD to 55 HU for CBCTLD_GAN and 44 HU for CBCTLD_ResGAN. For PTV, the median differences of D98%, D50% and D2% comparing CBCTLD_GAN to vCT were 0.3%, 0.3% and 0.3%, and comparing CBCTLD_ResGAN to vCT were 0.4%, 0.3% and 0.4%. Dose accuracy was high with both 2% dose difference pass rates of 99% (10% dose threshold). Compared to CBCTorg-to-pCT registration, the majority of mean absolute differences of rigid transformation parameters were less than 0.20 mm/0.20°. For bladder and rectum, the DSC were 0.88 and 0.77 for CBCTLD_GAN and 0.92 and 0.87 for CBCTLD_ResGAN compared to CBCTorg, and HDavg were 1.34 mm and 1.93 mm for CBCTLD_GAN, and 0.90 mm and 1.05 mm for CBCTLD_ResGAN. The computational time was ∼2 s per patient. Significance: This study investigated the feasibility of adapting two cycleGAN models to simultaneously remove under-sampling artifacts and correct image intensities of 25% dose CBCT images. High accuracy on dose calculation, HU and patient alignment were achieved. CBCTLD_ResGAN achieved better anatomical fidelity.
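Two of the evaluation metrics reported above, the mean absolute HU error and the Dice similarity coefficient, are straightforward to reproduce once the image volumes and organ masks are available as arrays. The NumPy sketch below uses synthetic volumes and masks as stand-ins for the CBCT variants, virtual CT, and contours; it illustrates the metric definitions only, not the study's pipeline.

```python
import numpy as np

def mean_absolute_error_hu(image, reference, body_mask=None):
    """Mean absolute HU difference, optionally restricted to a body mask."""
    diff = np.abs(image.astype(float) - reference.astype(float))
    return diff[body_mask].mean() if body_mask is not None else diff.mean()

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Synthetic stand-ins for an enhanced CBCT, the virtual CT, and bladder contours.
rng = np.random.default_rng(2)
vct = rng.normal(0.0, 300.0, size=(32, 64, 64))
cbct_enhanced = vct + rng.normal(0.0, 44.0, size=vct.shape)   # additive noise as residual HU error
bladder_ref = np.zeros(vct.shape, dtype=bool)
bladder_ref[10:22, 20:44, 20:44] = True
bladder_test = np.roll(bladder_ref, shift=1, axis=2)          # slightly shifted contour

print("MAE (HU):", mean_absolute_error_hu(cbct_enhanced, vct))
print("DSC:", dice(bladder_test, bladder_ref))
```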