To view other types of publications on this topic, follow the link: Large baseline image registration.

Journal articles on the topic "Large baseline image registration"

Consult the top 50 journal articles for your research on the topic "Large baseline image registration".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and assemble your bibliography correctly.

1

Fidler, A., B. Likar, F. Pernus, and U. Skaleric. "Impact of JPEG lossy image compression on quantitative digital subtraction radiography." Dentomaxillofacial Radiology 31, no. 2 (March 2002): 106–12. http://dx.doi.org/10.1038/sj/dmfr/4600670.

Abstract:
OBJECTIVES The aim of the study was to evaluate the impact of JPEG lossy image compression on the estimation of alveolar bone gain by quantitative digital subtraction radiography (DSR). METHODS Nine dry domestic pig mandible posterior segments were radiographed three times ('Baseline', 'No change', and 'Gain') with standardized projection geometry. Bone gain was simulated by adding artificial bone chips (1, 4, and 15 mg). Images were either compressed before or after registration. No change areas in compressed and subtracted 'No change-Baseline' images and bone gain volumes in compressed and subtracted 'Gain-Baseline' images were calculated and compared to the corresponding measurements performed on original subtracted images. RESULTS Measurements of no change areas ('No change-Baseline') were only slightly affected by compressions down to JPEG 50 (J50) applied either before or after registration. Simulated gain of alveolar bone ('Gain-Baseline') was underestimated when compression before registration was performed. The underestimation was bigger when small bone chips of 1 mg were measured and when higher compression rates were used. Bone chips of 4 and 15 mg were only slightly underestimated when using J90, J70, and J50 compressions before registration. CONCLUSIONS Lossy JPEG compression does not affect the measurements of no change areas by DSR. Images undergoing subtraction should be registered before compression and if so, J90 compression with a compression ratio of 1:7 can be used to detect and measure 4 mg and larger bone gain.
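
The compress-before versus compress-after-registration comparison described in this abstract can be illustrated with a short OpenCV sketch. It is only schematic: the file names are placeholders, register() stands in for the study's standardized projection geometry, and quality 50 corresponds to the J50 setting mentioned above.

    import cv2

    def jpeg_roundtrip(img, quality):
        # Re-encode an image with lossy JPEG at the given quality (e.g. 90, 70, 50).
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)

    def register(moving, fixed):
        # Placeholder: the study relied on standardized projection geometry; a real
        # pipeline would estimate and apply a geometric transform here.
        return moving

    baseline = cv2.imread("baseline.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    followup = cv2.imread("gain.png", cv2.IMREAD_GRAYSCALE)

    # Compression applied BEFORE registration (the ordering that underestimated bone gain).
    sub_before = cv2.absdiff(register(jpeg_roundtrip(followup, 50), jpeg_roundtrip(baseline, 50)),
                             jpeg_roundtrip(baseline, 50))

    # Compression applied AFTER registration (the recommended ordering).
    registered = register(followup, baseline)
    sub_after = cv2.absdiff(jpeg_roundtrip(registered, 50), jpeg_roundtrip(baseline, 50))
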
2

Sun, Quan, Lei Liu, Zhaodong Niu, Yabo Li, Jingyi Zhang, and Zhuang Wang. "A Practical Star Image Registration Algorithm Using Radial Module and Rotation Angle Features." Remote Sensing 15, no. 21 (October 27, 2023): 5146. http://dx.doi.org/10.3390/rs15215146.

Abstract:
Star image registration is the most important step in the application of astronomical image differencing, stacking, and mosaicking, which requires high robustness, accuracy, and real-time capability on the part of the algorithm. At present, there are no high-performance registration algorithms available in this field. In the present paper, we propose a star image registration algorithm that relies only on radial module features (RMF) and rotation angle features (RAF) while providing excellent robustness, high accuracy, and good real-time performance. The test results on a large amount of simulated and real data show that the comprehensive performance of the proposed algorithm is significantly better than the four classical baseline algorithms as judged by the presence of rotation, insufficient overlapping area, false stars, position deviation, magnitude deviation, and complex sky background, making it a more ideal star image registration algorithm than current alternatives.
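
A minimal NumPy sketch of one plausible form of the radial module and rotation angle features named above, computed for detected star centroids relative to a reference star. The exact construction in the paper may differ; the brightest-star reference is an assumption made here purely for illustration.

    import numpy as np

    def radial_angle_features(centroids, brightness):
        # centroids: (N, 2) array of (x, y) star positions; brightness: (N,) fluxes.
        ref = centroids[np.argmax(brightness)]                      # assumed reference star
        offsets = centroids - ref
        radial_module = np.hypot(offsets[:, 0], offsets[:, 1])     # distance to reference
        rotation_angle = np.arctan2(offsets[:, 1], offsets[:, 0])  # angle to reference
        return radial_module, rotation_angle

    # Two frames can then be matched by comparing radial modules (invariant to frame
    # rotation) and differences of rotation angles (which recover the rotation itself).
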
3

Wang, Shuxin, Shilei Cao, Dong Wei, Cong Xie, Kai Ma, Liansheng Wang, Deyu Meng, and Yefeng Zheng. "Alternative Baselines for Low-Shot 3D Medical Image Segmentation---An Atlas Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 634–42. http://dx.doi.org/10.1609/aaai.v35i1.16143.

Abstract:
Low-shot (one/few-shot) segmentation has attracted increasing attention as it works well with limited annotation. State-of-the-art low-shot segmentation methods on natural images usually focus on implicit representation learning for each novel class, such as learning prototypes, deriving guidance features via masked average pooling, and segmenting using cosine similarity in feature space. We argue that low-shot segmentation on medical images should step further to explicitly learn dense correspondences between images to utilize the anatomical similarity. The core ideas are inspired by the classical practice of multi-atlas segmentation, where the indispensable parts of atlas-based segmentation, i.e., registration, label propagation, and label fusion are unified into a single framework in our work. Specifically, we propose two alternative baselines, i.e., the Siamese-Baseline and Individual-Difference-Aware Baseline, where the former is targeted at anatomically stable structures (such as brain tissues), and the latter possesses a strong generalization ability to organs suffering large morphological variations (such as abdominal organs). In summary, this work sets up a benchmark for low-shot 3D medical image segmentation and sheds light on further understanding of atlas-based few-shot segmentation.
4

Strittmatter, Anika, Anna Caroli, and Frank G. Zöllner. "A Multistage Rigid-Affine-Deformable Network for Three-Dimensional Multimodal Medical Image Registration." Applied Sciences 13, no. 24 (December 16, 2023): 13298. http://dx.doi.org/10.3390/app132413298.

Abstract:
Multimodal image registration is an important component of medical image processing, allowing the integration of complementary information from various imaging modalities to improve clinical applications like diagnosis and treatment planning. We proposed a novel multistage neural network for three-dimensional multimodal medical image registration, which addresses the challenge of larger rigid deformations commonly present in medical images due to variations in patient positioning in different scanners and rigid anatomical structures. This multistage network combines rigid, affine and deformable transformations in three stages. The network was trained unsupervised with Mutual Information and Gradient L2 loss. We compared the results of our proposed multistage network with a rigid-affine-deformable registration with the classical registration method NiftyReg as a baseline and a multistage network, which combines affine and deformable transformation, as a benchmark. To evaluate the performance of the proposed multistage network, we used four three-dimensional multimodal in vivo datasets: three renal MR datasets consisting of T1-weighted and T2-weighted MR scans and one liver dataset containing CT and T1-weighted MR scans. Experimental results showed that combining rigid, affine and deformable transformations in a multistage network leads to registration results with a high structural similarity, overlap of the corresponding structures (Dice: 76.7 ± 12.5, 61.1 ± 14.0, 64.8 ± 16.2, 68.1 ± 24.6 for the four datasets) and a low level of image folding (|J| ≤ 0: less than or equal to 1.1%), resulting in a medical plausible registration result.
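
The Mutual Information term used above as an unsupervised training loss can be written as a generic joint-histogram estimate. This is a standard textbook formulation for illustration, not the authors' network code, and the bin count is an arbitrary choice.

    import numpy as np

    def mutual_information(fixed, moving, bins=32):
        # Joint-histogram estimate of MI between two (already warped) images.
        joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nonzero = pxy > 0                                # avoid log(0)
        return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

    # As a loss, one minimises the negative MI of the warped moving image against the
    # fixed image, alongside the gradient (L2) regulariser mentioned above.
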
5

Yao, Guobiao, Jin Zhang, Jianya Gong, and Fengxiang Jin. "Automatic Production of Deep Learning Benchmark Dataset for Affine-Invariant Feature Matching." ISPRS International Journal of Geo-Information 12, no. 2 (January 19, 2023): 33. http://dx.doi.org/10.3390/ijgi12020033.

Abstract:
To promote the development of deep learning for feature matching, image registration, and three-dimensional reconstruction, we propose a method of constructing a deep learning benchmark dataset for affine-invariant feature matching. Existing images often have large viewpoint differences and areas with weak texture, which may cause difficulties for image matching, with respect to few matches, uneven distribution, and single matching texture. To solve this problem, we designed an algorithm for the automatic production of a benchmark dataset for affine-invariant feature matching. It combined two complementary algorithms, ASIFT (Affine-SIFT) and LoFTR (Local Feature Transformer), to significantly increase the types of matching patches and the number of matching features and generate quasi-dense matches. Optimized matches with uniform spatial distribution were obtained by the hybrid constraints of the neighborhood distance threshold and maximum information entropy. We applied this algorithm to the automatic construction of a dataset containing 20,000 images: 10,000 ground-based close-range images, 6000 satellite images, and 4000 aerial images. Each image had a resolution of 1024 × 1024 pixels and was composed of 128 pairs of corresponding patches, each with 64 × 64 pixels. Finally, we trained and tested the affine-invariant deep learning model, AffNet, separately on our dataset and the Brown dataset. The experimental results showed that the AffNet trained on our dataset had advantages, with respect to the number of matching points, match correct rate, and matching spatial distribution on stereo images with large viewpoint differences and weak texture. The results verified the effectiveness of the proposed algorithm and the superiority of our dataset. In the future, our dataset will continue to expand, and it is intended to become the most widely used benchmark dataset internationally for the deep learning of wide-baseline image matching.
6

Schmit, Timothy J., Paul Griffith, Mathew M. Gunshor, Jaime M. Daniels, Steven J. Goodman, and William J. Lebair. "A Closer Look at the ABI on the GOES-R Series." Bulletin of the American Meteorological Society 98, no. 4 (April 1, 2017): 681–98. http://dx.doi.org/10.1175/bams-d-15-00230.1.

Abstract:
Abstract The Advanced Baseline Imager (ABI) on board the Geostationary Operational Environmental Satellite-R (GOES-R) is America’s next-generation geostationary advanced imager. GOES-R launched on 19 November 2016. The ABI is a state-of-the-art 16-band radiometer, with spectral bands covering the visible, near-infrared, and infrared portions of the electromagnetic spectrum. Many attributes of the ABI—such as spectral, spatial, and temporal resolution; radiometrics; and image navigation/registration—are much improved from the current series of GOES imagers. This paper highlights and discusses the expected improvements of each of these attributes. From ABI data many higher-level-derived products can be generated and used in a large number of environmental applications. The ABI’s design allows rapid-scan and contiguous U.S. imaging automatically interleaved with full-disk scanning. In this paper the expected instrument attributes are covered, as they relate to signal-to-noise ratio, image navigation and registration, the various ABI scan modes, and other parameters. There will be several methods for users to acquire GOES-R imagery and products depending on their needs. These include direct reception of the imagery via the satellite downlink and an online-accessible archive. The information from the ABI on the GOES-R series will be used for many applications related to severe weather, tropical cyclones and hurricanes, aviation, natural hazards, the atmosphere, the ocean, and the cryosphere. The ABI on the GOES-R series is America’s next-generation geostationary advanced imager and will dramatically improve the monitoring of many phenomena at finer time and space scales.
7

Johnson, J. Patrick, Doniel Drazin, Wesley A. King, and Terrence T. Kim. "Image-guided navigation and video-assisted thoracoscopic spine surgery: the second generation." Neurosurgical Focus 36, no. 3 (March 2014): E8. http://dx.doi.org/10.3171/2014.1.focus13532.

Abstract:
Object Video-assisted thoracoscopic surgery (VATS) has evolved for treatment of a variety of spinal disorders. Early incorporation with image-guided surgery (IGS) was challenged due to reproducibility and adaptability, limiting the procedure's acceptance. In the present study, the authors report their experience with second-generation IGS and VATS technologies for anterior thoracic minimally invasive spinal (MIS) procedures. Methods The surgical procedure is described in detail including operating room set-up, patient positioning (a lateral decubitus position), placement of the spinal reference frame and portal, radiographic localization, registration, surgical instruments, and the image-guided thoracoscopic discectomy. Results Combined IGS and VATS procedures were successfully performed and assisted in anatomical localization in 14 patients. The mean patient age was 59 years (range 32–73 years). Disc herniation pathology represented the most common indication for surgery (n = 8 patients); intrathoracic spinal tumors were present in 4 patients and the remaining patients had infection and ossification of the posterior longitudinal ligament. All patients required chest tube drainage postoperatively, and all but 1 patient had drainage discontinued the following day. The only complication was a seroma that was presumed to be due to steroid therapy for postoperative weakness. At the final follow-up, 11 of the patients were improved neurologically, 2 patients had baseline neurological status, and the 1 patient with postoperative weakness was able to ambulate, albeit with an assistive device. The evolution of thoracoscopic surgical procedures occurring over 20 years is presented, including their limitations. The combination of VATS and IGS technologies is discussed including their safety and the importance of 3D imaging. In cases of large open thoracotomy procedures, surgeries require difficult, extensive, and invasive access through the chest cavity; using a MIS procedure can potentially eliminate many of the complications and morbidities associated with large open procedures. The authors report their experience with thoracic spinal surgeries that involved MIS procedures and the new technologies. Conclusions The most significant advance in IGS procedures has resulted from intraoperative CT scanning and automatic registration with the IGS workstation. Image guidance can be used in conjunction with VATS techniques for thoracic discectomy, spinal tumors, infection, and ossification of the posterior longitudinal ligament. The authors' initial experience has revealed this technique to be useful and potentially applicable to other MIS procedures.
8

Wu, Zhenning, Xiaolei Lv, Ye Yun, and Wei Duan. "A Parallel Sequential SBAS Processing Framework Based on Hadoop Distributed Computing." Remote Sensing 16, no. 3 (January 25, 2024): 466. http://dx.doi.org/10.3390/rs16030466.

Abstract:
With the rapid development of microwave remote sensing and SAR satellite systems, the use of InSAR techniques has been greatly encouraged due to the abundance of SAR data with unprecedented temporal and spatial coverage. Small Baseline Subset (SBAS) is a promising time-series InSAR method for applications involving deformation monitoring of the Earth’s crust, and the sequential SBAS method is an extension of SBAS that allows long-term and large-scale surface displacements to be obtained with continuously auto-updating measurement results. As the Chinese LuTan-1 SAR system has begun acquiring massive SAR image data, the need for an efficient and lightweight InSAR processing platform has become urgent in various research fields. However, traditional sequential algorithms are incapable of meeting the huge challenges of low efficiency and frequent human interaction in large-scale InSAR data processing. Therefore, this study proposes a distributed parallel sequential SBAS (P2SBAS) processing chain based on Hadoop by effectively parallelizing and improving the current sequential SBAS method. P2SBAS mainly consists of two components: (1) a distributed SAR data storage platform based on HDFS, which supports efficient inter-node data transfer and continuous online data acquisition, and (2) several parallel InSAR processing algorithms based on the MapReduce model, including image registration, filtering, phase unwrapping, sequential SBAS processing, and so on. By leveraging the capabilities associated with the distributed nature of the Hadoop platform, these algorithms are able to efficiently utilize the segmentation strategy and perform careful boundary processing. These parallelized InSAR algorithm modules can achieve their goals on different nodes in the Hadoop distributed environment, thereby maximizing computing resources and improving the overall performance while comprehensively considering performance and precision. In addition, P2SBAS provides better computing and storage capabilities for small- and medium-sized teams compared to popular InSAR processing approaches based on cloud computing or supercomputing platforms, and it can be easily deployed on clusters thanks to the integration of various existing computing components. Finally, to demonstrate and evaluate the efficiency and accuracy of P2SBAS, we conducted comparative experiments on a set of 32 TerraSAR images of Beijing, China. The results demonstrate that P2SBAS can fully utilize various computing nodes to improve InSAR processing and can be applied well in large-scale LuTan-1 InSAR applications in the future.
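
The map/reduce decomposition described above can be sketched, very loosely, with Python's multiprocessing standing in for Hadoop/HDFS. process_pair() below is a hypothetical placeholder for the per-interferogram steps (registration, filtering, unwrapping), and the final stacking merely stands in for the sequential SBAS inversion.

    from multiprocessing import Pool
    import numpy as np

    def process_pair(pair):
        master_path, slave_path = pair
        # ... coregister the SAR pair, form and filter the interferogram, unwrap ...
        return np.zeros((128, 128))          # placeholder unwrapped phase

    def run_parallel_sbas(pairs, workers=8):
        with Pool(workers) as pool:
            unwrapped = pool.map(process_pair, pairs)    # "map" phase across cores/nodes
        return np.stack(unwrapped)                       # "reduce": gather for inversion
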
9

Li, Jingyi, Mengqi Han, Yongsen Chen, Bin Wu, Yifan Wu, Weijie Jia, JianMo Liu, et al. "Prediction of recurrent ischaemic stroke using radiomics data and machine learning methods in patients with acute ischaemic stroke: protocol for a multicentre, large sample, prospective observational cohort study in China." BMJ Open 13, no. 10 (October 2023): e076406. http://dx.doi.org/10.1136/bmjopen-2023-076406.

Abstract:
Introduction: Stroke is a leading cause of mortality and disability worldwide. Recurrent strokes result in prolonged hospitalisation and worsened functional outcomes compared with the initial stroke. Thus, it is critical to identify patients who are at high risk of stroke recurrence. This study is positioned to develop and validate a prediction model using radiomics data and machine learning methods to identify the risk of stroke recurrence in patients with acute ischaemic stroke (AIS). Methods and analysis: A total of 1957 patients with AIS will be needed. Enrolment at participating hospitals will continue until the required sample size is reached, and we will recruit as many participants as possible. Multiple indicators including basic clinical data, image data, laboratory data, CYP2C19 genotype and follow-up data will be assessed at various time points during the registry, including baseline, 24 hours, 7 days, 1 month, 3 months, 6 months, 9 months and 12 months. The primary outcome was stroke recurrence. The secondary outcomes were death events, prognosis scores and adverse events. Imaging images were processed using deep learning algorithms to construct a programme capable of automatically labelling the lesion area and extracting radiomics features. The machine learning algorithms will be applied to integrate cross-scale, multidimensional data for exploratory analysis. Then, an ischaemic stroke recurrence prediction model of the best performance for patients with AIS will be established. Calibration, receiver operating characteristic and decision curve analyses will be evaluated. Ethics and dissemination: This study has received ethical approval from the Medical Ethics Committee of the Second Affiliated Hospital of Nanchang University (medical research review No.34/2021), and informed consent will be obtained voluntarily. The research findings will be disseminated through publication in journals and presented at conferences. Trial registration number: ChiCTR2200055209.
10

De Backer, Wilfried, Jan De Backer, Ilse Verlinden, Glenn Leemans, Cedric Van Holsbeke, Benjamin Mignot, Martin Jenkins, et al. "Functional respiratory imaging assessment of glycopyrrolate and formoterol fumarate metered dose inhalers formulated using co-suspension delivery technology in patients with COPD." Therapeutic Advances in Respiratory Disease 14 (January 2020): 175346662091699. http://dx.doi.org/10.1177/1753466620916990.

Abstract:
Background: Functional respiratory imaging (FRI) is a quantitative postprocessing imaging technique used to assess changes in the respiratory system. Using FRI, we characterized the effects of the long-acting muscarinic antagonist (LAMA), glycopyrrolate metered dose inhaler (GP MDI), and the long-acting β2-agonist (LABA), formoterol fumarate metered dose inhaler (FF MDI), on airway volume and resistance in patients with moderate-to-severe chronic obstructive pulmonary disease. Methods: Patients in this phase IIIb, randomized, double-blind crossover study received twice-daily GP MDI (18 μg) and FF MDI (9.6 μg). Primary endpoints were specific (i.e. corrected for lobar volume) image-based airway volume (siVaw) and specific image-based airway resistance (siRaw), measured using FRI. Secondary and other endpoints included additional FRI, spirometry, and body plethysmography parameters. Postdose efficacy assessments were performed within 60–150 min of dosing on day 15. Results: A total of 23 patients were randomized and 19 completed both treatment periods. GP MDI and FF MDI both achieved significant improvements from baseline to day 15 in siVaw [11% ( p = 0.0187) and 23% ( p < 0.0001) increases, respectively] and siRaw [25% ( p = 0.0219) and 44% ( p < 0.0001) reductions, respectively]. Although, on average, improvements were larger for FF MDI than GP MDI, some individuals displayed greater responses with each of the two treatments. These within-patient differences increased with airway generation number. Spirometry and body plethysmography endpoints showed significant improvements from baseline in inspiratory capacity for both treatments, and numeric improvements for other endpoints. Conclusion: Both GP MDI and FF MDI significantly improved siRaw and siVaw at day 15 versus baseline. FRI endpoints demonstrated increased sensitivity relative to spirometry and body plethysmography in detecting differences between treatments in a small number of patients. Intra-patient differences in treatment response between the LAMA and the LABA provide further support for the benefit of dual bronchodilator therapies. ClinicalTrials.gov registration number: NCT02937584 The reviews of this paper are available via the supplemental material section.
11

Taguchi, Kayoko, Noriko Numata, Rieko Takanashi, Ryo Takemura, Tokiko Yoshida, Kana Kutsuzawa, Kensuke Yoshimura, Natsuko Nozaki-Taguchi, Seiji Ohtori, and Eiji Shimizu. "Clinical Effectiveness and Cost-effectiveness of Videoconference-Based Integrated Cognitive Behavioral Therapy for Chronic Pain: Randomized Controlled Trial." Journal of Medical Internet Research 23, no. 11 (November 22, 2021): e30690. http://dx.doi.org/10.2196/30690.

Abstract:
Background Cognitive behavioral therapy is known to improve the management of chronic pain. However, the components of this therapy are still being investigated and debated. Objective This study aimed to examine the effectiveness of an integrated cognitive behavioral therapy program with new components (attention-shift, memory work, video feedback, and image training) delivered via videoconferencing. Methods This study was unblinded and participants were recruited and assessed face-to-face in the outpatient department. We conducted a randomized controlled trial for chronic pain to compare 16 weekly videoconference-based cognitive behavioral therapy (vCBT) sessions provided by a therapist with treatment as usual (TAU). Thirty patients (age range, 22-75 years) with chronic pain were randomly assigned to either vCBT (n=15) or TAU (n=15). Patients were evaluated at week 1 (baseline), week 8 (midintervention), and week 16 (postintervention). The primary outcome was the change in pain intensity, which was recorded using the numerical rating scale at 16 weeks from the baseline. Secondary outcomes were pain severity and pain interference, which were assessed using the Brief Pain Inventory. Additionally, we evaluated disability, pain catastrophizing cognition, depression, anxiety, quality of life, and cost utility. Results In the eligibility assessment, 30 patients were eventually randomized and enrolled; finally, 15 patients in the vCBT and 14 patients in the TAU group were analyzed. Although no significant difference was found between the 2 groups in terms of changes in pain intensity by the numerical rating scale scores at week 16 from baseline (P=.36), there was a significant improvement in the comprehensive evaluation of pain by total score of Brief Pain Inventory (–1.43, 95% CI –2.49 to –0.37, df=24; P=.01). Further, significant improvement was seen in pain interference by using the Brief Pain Inventory (–9.42, 95% CI –14.47 to –4.36, df=25; P=.001) and in disability by using the Pain Disability Assessment Scale (–1.95, 95% CI –3.33 to –0.56, df=24; P=.008) compared with TAU. As for the Medical Economic Evaluation, the incremental cost-effectiveness ratio for 1 year was estimated at 2.9 million yen (about US $25,000) per quality-adjusted life year gained. Conclusions The findings of our study suggest that integrated cognitive behavioral therapy delivered by videoconferencing in regular medical care may reduce pain interference but not pain intensity. Further, this treatment method may be cost-effective, although this needs to be further verified using a larger sample size. Trial Registration University Hospital Medical Information Network UMIN000031124; https://tinyurl.com/2pr3xszb
12

Bukhari, Ali, Vivek Kesari, Reza Sirous, Noa G. Holtzman, Seung Tae Lee, Natalie Gahres, Elizabeth Hutnick, et al. "Increased Cortical Glycolysis Following CD19 CART Therapy: A Radiographic Surrogate for an Altered Blood-Brain Barrier." Blood 134, Supplement_1 (November 13, 2019): 4454. http://dx.doi.org/10.1182/blood-2019-125794.

Abstract:
Background: Cytokine release syndrome (CRS) and immune effector cell-associated neurotoxicity syndrome (ICANS) are known complications of chimeric antigen receptor T-cell (CAR-T) therapy. These clinical syndromes develop as a result of CAR-T activation, proliferation, and tumor lysis with resultant cytokine secretion. In prior reports of CD19 CAR-T therapy patients, those who developed ICANS showed evidence of endothelial activation and disruption of the blood-brain barrier as a result of cytokine release while only approximately one-third demonstrated changes on Brain MRI (Gust et al. Cancer Discov 2017). As such, further predictive markers and studies are needed to identify patients at risk for ICANS to allow for expedited management and improved outcomes. Herein we report a single-center analysis exploring glycolytic activity on PET/CT and the association with clinical outcomes for patients with relapsed/refractory diffuse large B-cell lymphoma (R/R DLBCL) after CAR-T therapy. Methods: An organ-based evaluation of uninvolved sites was conducted in R/R DLBCL patients (n=32) who underwent CD19 CAR-T therapy with evaluable PET/CT imaging at baseline immediately prior to CAR-T therapy and at 30 days post-infusion (D+30). All patients in this analysis were treated with axicabtagene ciloleucel as standard of care therapy after 2 or more lines of therapy. Tumor metabolic volume (TMV) and mean standard uptake value (SUVmean) of various organs were quantified using ROVER [Region of interest (ROI) visualization, evolution, and image registration] software (ABX advanced biochemical compounds GmbH, Radeberg, Germany). Statistical analysis was completed using STATA 14 (StataCorp. 2015. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP). All tests were performed after testing the normality distribution assumption. Temporal changes were assessed using paired t-tests, and between-group analyses were completed with two-sample t-tests. Results: SUVmean increased significantly after CAR-T therapy in the following organs (D+30 v baseline pre-CAR-T PET/CT): cerebral cortex (8.23 v 7.09, p=0.036), cerebellum (6.26 v 5.56, p=0.024), basal ganglia (9.22 v 7.61, p=0.005), parotid gland (1.61 v 1.42, p=0.004), liver (2.47 v 2.17, p=0.002), spleen (2.08 v 1.84, p=0.043), and pancreas (1.76 v 1.48, p<0.001). No differences in SUVmean were seen in the lung, testes, retroperitoneal or subcutaneous fat, or paraspinal and psoas muscles. A significant increase in cortical activity was seen in patients with CRS grades ≥2 when compared to those with CRS grades 0-1 (Δ2.65 v Δ0.33, p=0.03). No changes in glycolytic activity were observed between patients stratified by CRS in the cerebellum (Δ1.31 v Δ0.36, p=0.12) or liver (Δ0.21 v Δ0.34, p=0.55). In contrast, changes in glycolytic activity were not significantly associated with development of ICANS or with treatment responses. Conclusion: For patients with R/R DLBCL undergoing CD19 CAR-T therapy, significantly increased CNS glycolytic activity is seen on PET/CT at D+30 post-infusion when compared to baseline. Interestingly, these changes do not correlate with development of ICANS or lymphoma response; however, changes in cortical activity were associated with CRS grade ≥2. Overall, our findings illustrate a functional and radiographic link between cytokine release and subsequent disruption of the blood-brain barrier as quantified by increased cortical glycolysis 30 days post-CAR-T therapy. 
While findings are limited by small sample size, further validation in a larger data set is warranted. Disclosures Hutnick: Kite/Gilead: Other: Yescarta Speakers Bureau, Speakers Bureau. Badros:Celgene Corporation: Consultancy; Amgen: Consultancy.
13

Chen, Chong, and Ozan Öktem. "Indirect Image Registration with Large Diffeomorphic Deformations." SIAM Journal on Imaging Sciences 11, no. 1 (January 2018): 575–617. http://dx.doi.org/10.1137/17m1134627.

14

Fischer, Bernd, and Jan Modersitzki. "Large scale problems arising from image registration." GAMM-Mitteilungen 27, no. 2 (December 2004): 104–20. http://dx.doi.org/10.1002/gamm.201490006.

15

Foskey, Mark, Brad Davis, Lav Goyal, Sha Chang, Ed Chaney, Nathalie Strehl, Sandrine Tomei, Julian Rosenman, and Sarang Joshi. "Large deformation three-dimensional image registration in image-guided radiation therapy." Physics in Medicine and Biology 50, no. 24 (December 6, 2005): 5869–92. http://dx.doi.org/10.1088/0031-9155/50/24/008.

16

Huang, Xishi, Jing Ren, Anwar Abdalbari, and Mark Green. "Deformable image registration for tissues with large displacements." Journal of Medical Imaging 4, no. 1 (January 25, 2017): 014001. http://dx.doi.org/10.1117/1.jmi.4.1.014001.

17

Li, Xiangguo. "High-Accuracy Subpixel Image Registration With Large Displacements." IEEE Transactions on Geoscience and Remote Sensing 55, no. 11 (November 2017): 6265–76. http://dx.doi.org/10.1109/tgrs.2017.2724303.

18

Baldwin, D. G., and W. J. Emery. "A systematized approach to AVHRR image navigation." Annals of Glaciology 17 (1993): 414–20. http://dx.doi.org/10.3189/s0260305500013185.

Abstract:
Accurate co-location and geo-registration of AVHRR (Advanced Very High Resolution Radiometer) is necessary for most applications involving these data. To obtain the necessary accuracy, most of the current systems available for AVHRR image rectification require corrections to errors arising from several different sources. This paper presents a software package which is able to correct registration errors from all sources using effective roll, pitch, and yaw values determined from ground control point offsets. These effective values are used to establish a baseline registration against which other models are compared. Spacecraft and sampling geometry are revisited in order to systemize and better define the procedure of AVHRR geo-registration. This systematic approach has resulted in a modular software package which has a registration accuracy of 1 satellite pixel, and is easily modified and improved.
19

Baldwin, D. G., and W. J. Emery. "A systematized approach to AVHRR image navigation." Annals of Glaciology 17 (1993): 414–20. http://dx.doi.org/10.1017/s0260305500013185.

Abstract:
Accurate co-location and geo-registration of AVHRR (Advanced Very High Resolution Radiometer) is necessary for most applications involving these data. To obtain the necessary accuracy, most of the current systems available for AVHRR image rectification require corrections to errors arising from several different sources. This paper presents a software package which is able to correct registration errors from all sources using effective roll, pitch, and yaw values determined from ground control point offsets. These effective values are used to establish a baseline registration against which other models are compared. Spacecraft and sampling geometry are revisited in order to systemize and better define the procedure of AVHRR geo-registration. This systematic approach has resulted in a modular software package which has a registration accuracy of 1 satellite pixel, and is easily modified and improved.
20

Haines, Rachel H., Kim S. Thomas, Alan A. Montgomery, Jane C. Ravenscroft, Perways Akram, Joanne R. Chalmers, Diane Whitham, et al. "Home interventions and light therapy for the treatment of vitiligo (HI-Light Vitiligo Trial): study protocol for a randomised controlled trial." BMJ Open 8, no. 4 (April 2018): e018649. http://dx.doi.org/10.1136/bmjopen-2017-018649.

Abstract:
Introduction Vitiligo is a condition resulting in white patches on the skin. People with vitiligo can suffer from low self-esteem, psychological disturbance and diminished quality of life. Vitiligo is often poorly managed, partly due to lack of high-quality evidence to inform clinical care. We describe here a large, independent, randomised controlled trial (RCT) assessing the comparative effectiveness of potent topical corticosteroid, home-based hand-held narrowband ultraviolet B-light (NB-UVB) or combination of the two, for the management of vitiligo. Methods and analysis The HI-Light Vitiligo Trial is a multicentre, three-arm, parallel group, pragmatic, placebo-controlled RCT. 516 adults and children with actively spreading, but limited, vitiligo are randomised (1:1:1) to one of three groups: mometasone furoate 0.1% ointment plus dummy NB-UVB light, vehicle ointment plus NB-UVB light or mometasone furoate 0.1% ointment plus NB-UVB light. Treatment of up to three patches of vitiligo is continued for up to 9 months with clinic visits at baseline, 3, 6 and 9 months and four post-treatment questionnaires. The HI-Light Vitiligo Trial assesses outcomes included in the vitiligo core outcome set and places emphasis on participants’ views of treatment success. The primary outcome is proportion of participants achieving treatment success (patient-rated Vitiligo Noticeability Scale) for a target patch of vitiligo at 9 months with further independent blinded assessment using digital images of the target lesion before and after treatment. Secondary outcomes include time to onset of treatment response, treatment success by body region, percentage repigmentation, quality of life, time-burden of treatment, maintenance of response, safety and within-trial cost-effectiveness. Ethics and dissemination Approvals were granted by East Midlands—Derby Research Ethics Committee (14/EM/1173) and the MHRA (EudraCT 2014-003473-42). The trial was registered 8 January 2015 ISRCTN (17160087). Results will be published in full as open access in the NIHR Journal library and elsewhere. Trial registration number ISRCTN17160087.
21

Cocianu, Cătălina-Lucia, Cristian Răzvan Uscatu, and Alexandru Daniel Stan. "Evolutionary Image Registration: A Review." Sensors 23, no. 2 (January 14, 2023): 967. http://dx.doi.org/10.3390/s23020967.

Abstract:
Image registration is one of the most important image processing tools enabling recognition, classification, detection and other analysis tasks. Registration methods are used to solve a large variety of real-world problems, including remote sensing, computer vision, geophysics, medical image analysis, surveillance, and so on. In the last few years, nature-inspired algorithms and metaheuristics have been successfully used to address the image registration problem, becoming a solid alternative for direct optimization methods. The aim of this paper is to investigate and summarize a series of state-of-the-art works reporting evolutionary-based registration methods. The papers were selected using the PRISMA 2020 method. The reported algorithms are reviewed and compared in terms of evolutionary components, fitness function, image similarity measures and algorithm accuracy indexes used in the alignment process.
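
As a concrete illustration of the evolutionary registration methods surveyed here, the sketch below optimises a rigid transform (rotation angle plus translation) with SciPy's differential evolution against a normalised cross-correlation fitness. It is a generic toy example, not any specific reviewed algorithm, and the parameter bounds are arbitrary.

    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.ndimage import rotate, shift

    def ncc(a, b):
        # normalised cross-correlation used as the fitness (similarity) measure
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def cost(params, fixed, moving):
        angle, tx, ty = params
        warped = shift(rotate(moving, angle, reshape=False, order=1), (ty, tx), order=1)
        return -ncc(fixed, warped)                    # minimise the negative similarity

    def register_rigid(fixed, moving):
        bounds = [(-30, 30), (-20, 20), (-20, 20)]    # degrees, pixels, pixels (assumed)
        result = differential_evolution(cost, bounds, args=(fixed, moving),
                                        maxiter=50, popsize=20, seed=0)
        return result.x                               # best (angle, tx, ty)
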
22

Zhao, Yinsen, Farong Gao, Jun Yu, Xing Yu, and Zhangyi Yang. "Underwater Image Mosaic Algorithm Based on Improved Image Registration." Applied Sciences 11, no. 13 (June 27, 2021): 5986. http://dx.doi.org/10.3390/app11135986.

Abstract:
In order to obtain panoramic images in a low contrast underwater environment, an underwater panoramic image mosaic algorithm based on image enhancement and improved image registration (IIR) was proposed. Firstly, mixed filtering and sigma filtering are used to enhance the contrast of the original image and de-noise the image. Secondly, scale-invariant feature transform (SIFT) is used to detect image feature points. Then, the proposed IIR algorithm is applied to image registration to improve the matching accuracy and reduce the matching time. Finally, the weighted smoothing method is used for image fusion to avoid image seams. The results show that IIR algorithm can effectively improve the registration accuracy, shorten the registration time, and improve the image fusion effect. In the field of cruise research, instruments equipped with imaging systems, such as television capture and deep-drag camera systems, can produce a large number of image or video recordings. This algorithm provides support for fast and accurate underwater image mosaic and has important practical significance.
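
A generic OpenCV sketch of the detect-match-register-blend pipeline outlined above (SIFT features, RANSAC homography, blended overlap). It omits the paper's underwater enhancement filters and IIR refinements, the file names are placeholders, and a constant 50/50 blend stands in for the weighted smoothing step.

    import cv2
    import numpy as np

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
               if m.distance < 0.75 * n.distance]           # Lowe ratio test

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)    # maps img2 into img1's frame

    canvas = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
    overlap = canvas[:, :img1.shape[1]]
    canvas[:, :img1.shape[1]] = np.where(overlap > 0,
                                         (0.5 * overlap + 0.5 * img1).astype(np.uint8),
                                         img1)              # simple blend of the overlap
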
23

Zhao, Liya, and Kebin Jia. "Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations." Computational and Mathematical Methods in Medicine 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/836202.

Abstract:
This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies firstly on a preregistration, usually an affine matrix containing rotation, scale, and translation and afterwards on a nonrigid transformation. According to preregistration, the directly calculated affine matrix, which is obtained by limited pixel information, may misregistrate when large biases exist, thus misleading following registration subversively. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper firstly accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, affine matrix is located through feature correspondences by a triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with the state-of-the-art registration framework, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation like Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experiment results also show faster convergence speed.
24

Himthani, Naveen, Malte Brunn, Jae-Youn Kim, Miriam Schulte, Andreas Mang, and George Biros. "CLAIRE—Parallelized Diffeomorphic Image Registration for Large-Scale Biomedical Imaging Applications." Journal of Imaging 8, no. 9 (September 16, 2022): 251. http://dx.doi.org/10.3390/jimaging8090251.

Abstract:
We study the performance of CLAIRE—a diffeomorphic multi-node, multi-GPU image-registration algorithm and software—in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield a superior registration quality—but not always. For example, downsampling a synthetic image from 1024³ to 256³ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low contrast high resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in reasonable time. The highest resolution considered are CLARITY images of size 2816×3016×1162. To the best of our knowledge, this is the first study on image registration quality at such resolutions.
25

Wang, Chang, Qiongqiong Ren, Xin Qin, and Yi Yu. "Adaptive Diffeomorphic Multiresolution Demons and Their Application to Same Modality Medical Image Registration with Large Deformation." International Journal of Biomedical Imaging 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/7314612.

Abstract:
Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. This method was tested by synthetic image and same modality medical image. Large deformation was simulated by rotational distortion and extrusion transform, medical image registration with large deformation was performed, and quantitative analyses were conducted using the registration evaluation indexes, and the influence of different driving forces and parameters on the registration result was analyzed. The registration results of same modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method’s normalized cross-correlation coefficient and structural similarity were the highest and mean square error was the lowest. Medical image registration with large deformation could be performed successfully; evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same modality medical images with large deformation.
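
SimpleITK's stock diffeomorphic demons filter can serve as a hedged stand-in for the adaptive multiresolution variant proposed above; note that the fixed iteration count set below is exactly what the paper's adaptive stopping rule is designed to remove, and the file paths are placeholders.

    import SimpleITK as sitk

    fixed = sitk.Cast(sitk.ReadImage("fixed.nii.gz"), sitk.sitkFloat32)    # placeholder paths
    moving = sitk.Cast(sitk.ReadImage("moving.nii.gz"), sitk.sitkFloat32)

    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(200)      # fixed count; the paper stops adaptively instead
    demons.SetStandardDeviations(1.5)      # Gaussian smoothing of the displacement field

    displacement = demons.Execute(fixed, moving)
    warped = sitk.Resample(moving, fixed, sitk.DisplacementFieldTransform(displacement),
                           sitk.sitkLinear, 0.0, moving.GetPixelID())
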
26

Wang, Chengjia, Guang Yang, and Giorgos Papanastasiou. "Unsupervised Image Registration towards Enhancing Performance and Explainability in Cardiac and Brain Image Analysis." Sensors 22, no. 6 (March 9, 2022): 2125. http://dx.doi.org/10.3390/s22062125.

Abstract:
Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, as for example before imaging biomarkers need to be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration is not extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations, simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per each encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performances against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. On computational time experiments, we show that the FIRE model performs on a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registrations in the clinical setting.
27

Li, Kai, Yongsheng Zhang, Zhenchao Zhang, and Guangling Lai. "A Coarse-to-Fine Registration Strategy for Multi-Sensor Images with Large Resolution Differences." Remote Sensing 11, no. 4 (February 25, 2019): 470. http://dx.doi.org/10.3390/rs11040470.

Abstract:
Automatic image registration for multi-sensors has always been an important task for remote sensing applications. However, registration for images with large resolution differences has not been fully considered. A coarse-to-fine registration strategy for images with large differences in resolution is presented. The strategy consists of three phases. First, the feature-base registration method is applied on the resampled sensed image and the reference image. Edge point features acquired from the edge strength map (ESM) of the images are used to pre-register two images quickly and robustly. Second, normalized mutual information-based registration is applied on the two images for more accurate transformation parameters. Third, the final transform parameters are acquired through direct registration between the original high- and low-resolution images. Ant colony optimization (ACO) for continuous domain is adopted to optimize the similarity metrics throughout the three phases. The proposed method has been tested on image pairs with different resolution ratios from different sensors, including satellite and aerial sensors. Control points (CPs) extracted from the images are used to calculate the registration accuracy of the proposed method and other state-of-the-art methods. The feature-based preregistration validation experiment shows that the proposed method effectively narrows the value range of registration parameters. The registration results indicate that the proposed method performs the best and achieves sub-pixel registration accuracy of images with resolution differences from 1 to 50 times.
28

Huo, Chunlei, Chunhong Pan, Leigang Huo, and Zhixin Zhou. "Multilevel SIFT Matching for Large-Size VHR Image Registration." IEEE Geoscience and Remote Sensing Letters 9, no. 2 (March 2012): 171–75. http://dx.doi.org/10.1109/lgrs.2011.2163491.

29

Beg, Mirza Faisal, and Ali Khan. "Symmetric Data Attachment Terms for Large Deformation Image Registration." IEEE Transactions on Medical Imaging 26, no. 9 (September 2007): 1179–89. http://dx.doi.org/10.1109/tmi.2007.898813.

30

Xie, J., X. Lv, C. Chu, R. Liu, F. Mo, B. Li, and C. Wang. "AN IMPROVED IMAGE REGISTRATION ALGORITHM FOR THERMAL INFRARED AND PANCHROMATIC IMAGE BASED ON GEOMETRIC STRUCTURAL PROPERTIES." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1/W1-2023 (December 5, 2023): 145–52. http://dx.doi.org/10.5194/isprs-annals-x-1-w1-2023-145-2023.

Abstract:
Abstract. In order to improve the image registration accuracy of heterogeneous remote sensing images with large radiation differences, an improved image registration algorithm for thermal infrared and panchromatic images is proposed. This method uses the phase consistency of intensity and direction to construct a geometric structure feature descriptor called the Histogram of Oriented Phase Congruency (HOPC) for image registration. It employs the Curvature Scale Space (CSS) corner detection to concentrate and extract feature points in contour areas, and defines a similarity metric (called HOPCn) based on the Euclidean distance between the descriptors for high-precision heterogeneous images registration. Five sets of panchromatic and thermal infrared images were selected for verification. The results show that the improved algorithm can match the homonymy points around the image contour, and the Correct Matching Rate (CMR) is improved by 2.7% and the Root Mean Square Error (RMSE) is improved by 3.7% compared to the original algorithm. This proves that the proposed method has good robustness under conditions of large radiation differences.
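
Matching descriptors by Euclidean distance, as in the HOPCn similarity metric above, reduces to a nearest-neighbour search. The sketch below is generic (the ratio-test threshold is an assumption) and does not construct the HOPC descriptors themselves, i.e. the phase-congruency and CSS-corner steps are out of scope here.

    import numpy as np

    def match_descriptors(desc_a, desc_b, ratio=0.8):
        # desc_a: (N, D) descriptors from one image, desc_b: (M, D) from the other.
        dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        order = np.argsort(dists, axis=1)
        best, second = order[:, 0], order[:, 1]
        rows = np.arange(len(desc_a))
        keep = dists[rows, best] < ratio * dists[rows, second]   # nearest-neighbour ratio test
        return [(int(i), int(best[i])) for i in rows[keep]]
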
31

Roy-Chowdhury, A. K., R. Chellappa, and T. Keaton. "Wide Baseline Image Registration With Application to 3-D Face Modeling." IEEE Transactions on Multimedia 6, no. 3 (June 2004): 423–34. http://dx.doi.org/10.1109/tmm.2004.827511.

32

Heiselman, Jon Stanley, Natally Horvat, Burcin Agridag-Upcinar, Onur Yildirim, Brett Logan Ecker, Eileen Mary O'Reilly, Jeffrey A. Drebin, et al. "Inter-reader variability of imaging biomarkers for therapeutic response in pancreatic ductal adenocarcinoma." Journal of Clinical Oncology 42, no. 3_suppl (January 20, 2024): 700. http://dx.doi.org/10.1200/jco.2024.42.3_suppl.700.

Abstract:
700 Background: Indeterminate borders of pancreatic ductal adenocarcinoma (PDAC) can impair reliability of lesion annotation in imaging assessment of therapeutic response (TR) after neoadjuvant chemotherapy (NAT). Previous work showed that longitudinal image registration algorithms can track treatment-related changes and produce quantitative imaging biomarkers (QIB) for TR that predict overall (OS) and recurrence-free survival (RFS). We compare inter-reader agreements of these biomarkers against conventional RECIST and tumor volume QIB. Methods: N=30 patients enrolled in a Phase II clinical trial comparing outcomes of a NAT regimen were retrospectively analyzed. In baseline and restaging CT, whole pancreas and tumor segmentations and tumor diameters were annotated by 2 expert attending radiologists and 2 novice radiology fellows. For each reader, percent change in tumor diameter (RECIST score), percent change in tumor volume (ΔV), and difference in ratio of tumor burden to pancreas volume (ΔB) were computed as QIB for TR across the NAT interval via two approaches: image registration of the baseline tumor region of interest, and direct segmentation of the PDAC lesion in baseline and restaging images. Inter-reader agreements of RECIST, ΔV, and ΔB QIB from both approaches were compared via concordance correlation coefficient (CCC). Associations with OS and RFS were compared over 5-year follow-up via Harrell’s C-index. Results: Among expert radiologists, RECIST score CCC was 0.77 [95% CI: 0.59-0.88], whereas image registration QIB CCC values were significantly higher (ΔV: 0.95 [0.90-0.98], p=0.002; ΔB: 0.93 [0.86-0.97], p=0.014). ΔV and ΔB QIB measured by direct tumor segmentation had significantly lower CCC than by image registration (p<0.001). Among novice readers, image registration QIB (ΔV: 0.83 [0.67-0.91], ΔB: 0.70 [0.48-0.84]) produced higher agreement than RECIST (0.26 [-0.17-0.61]; p=0.002 and p=0.035), with CCC more similar to expert readers. While image registration QIB did not differ from RECIST in association with RFS, registration-based QIB predicted OS with significantly higher C-index (ΔV: 0.58 [0.56-0.60], p<0.001; ΔB: 0.64 [0.62-0.66], p<0.001) than RECIST (0.50 [0.45-0.55]) among experts. Conclusions: Longitudinal registration of the pancreas yields more consistent QIB for PDAC TR with stronger association to OS outcomes than RECIST. These registration-based QIB aim to advance PDAC management by improving patient-specific discrimination of TR after NAT. [Table: see text]
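
Lin's concordance correlation coefficient, the inter-reader agreement statistic reported above, takes only a few lines of NumPy; the survival C-index analysis is not reproduced here, and the example values in the comment are made up.

    import numpy as np

    def concordance_correlation(x, y):
        # Lin's CCC between two readers' measurements of the same cases.
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                       # population variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # e.g. concordance_correlation([-30, -12, 5], [-28, -15, 4]) for two readers'
    # percent changes in tumour volume (illustrative values only).
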
33

Sun, Wanfeng, Haibo Gao, and Cheng Li. "A Two-Stage Registration Strategy for Thermal–Visible Images in Substations." Applied Sciences 14, no. 3 (January 30, 2024): 1158. http://dx.doi.org/10.3390/app14031158.

Abstract:
The analysis of infrared video images is becoming one of the methods used to detect thermal hazards in many large-scale engineering sites. The fusion of infrared thermal imaging and visible image data in the target area can help people to identify and locate the fault points of thermal hazards. Among them, a very important step is the registration of thermally visible images. However, the direct registration of images with large-scale differences may lead to large registration errors or even failure. This paper presents a novel two-stage thermal–visible-image registration strategy specifically designed for exceptional scenes, such as a substation. Firstly, the original image pairs that occur after binarization are quickly and roughly registered. Secondly, the adaptive downsampling unit partial-intensity invariant feature descriptor (ADU-PIIFD) algorithm is proposed to correct the small-scale differences in details and achieve finer registration. Experiments are conducted on 30 data sets containing complex power station scenes and compared with several other methods. The results show that the proposed method exhibits an excellent and stable performance in thermal–visible-image registration, and the registration error on the entire data set is within five pixels. Especially for multimodal images with poor image quality and many detailed features, the robustness of the proposed method is far better than that of other methods, which provides a more reliable image registration scheme for the field of fire safety.
34

Xu, Hong Kui, Ming Yan Jiang, and Ming Qiang Yang. "An Image Registration Method Combing Feature Constraint with Multilevel Strategy." Applied Mechanics and Materials 58-60 (June 2011): 286–91. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.286.

Анотація:
A novel method combining a feature constraint with a multilevel strategy is proposed to improve both the accuracy and the speed of non-parametric image registration. For images with large local differences, integrating a feature constraint built from the local structural information of the images into the registration objective function improves accuracy. When the feature constraint is applied within a multilevel strategy, the parameter search is kept from being trapped in local extrema by using the optimization result on coarser levels as the starting point on finer levels; at the same time, traditional optimization methods can find accurate registration parameters on the finer levels without resorting to more time-consuming intelligent optimization algorithms, so registration speed is improved. Experimental results indicate that the method achieves fast and accurate registration of images with large local differences.
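A minimal sketch of the multilevel strategy described here: parameters estimated on a coarser pyramid level seed the optimization on the next finer level. The paper's feature-constraint term is omitted; plain ECC with an affine model stands in for the objective.

```python
import cv2
import numpy as np

def multilevel_register(fixed, moving, levels=3):
    """Coarse-to-fine affine registration: each level is seeded by the level below."""
    # Build image pyramids (index 0 = full resolution).
    pyr_f, pyr_m = [fixed.astype(np.float32)], [moving.astype(np.float32)]
    for _ in range(levels - 1):
        pyr_f.append(cv2.pyrDown(pyr_f[-1]))
        pyr_m.append(cv2.pyrDown(pyr_m[-1]))

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)
    for lvl in reversed(range(levels)):          # coarsest level first
        _, warp = cv2.findTransformECC(pyr_f[lvl], pyr_m[lvl], warp,
                                       cv2.MOTION_AFFINE, criteria, None, 5)
        if lvl > 0:                              # propagate to the next finer level
            warp[:, 2] *= 2.0                    # translations double when resolution doubles
    # Apply with: cv2.warpAffine(moving, warp, (w, h),
    #                            flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return warp
```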
35

Si, Zhichao. "Unordered Image Stitching Based on the Registration Index Table." Journal of Physics: Conference Series 2303, no. 1 (July 1, 2022): 012076. http://dx.doi.org/10.1088/1742-6596/2303/1/012076.

Анотація:
To reduce the heavy computation and long runtime of unordered image stitching, we propose an image stitching algorithm based on an image registration index table. First, we construct a matching index sequence based on the similarity of the matching areas. We then search for image matching pairs according to this sequence, perform image registration and stitch directly, and generate the registration index table. For stitching, we build a binary tree model assisted by the registration index table, searching for matching pairs and performing registration and stitching according to the index table generated at each layer. Compared with current research on unordered image stitching, our method reduces the number of feature extraction and matching operations by at least half and therefore greatly improves stitching efficiency.
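The registration-index-table idea can be approximated as follows: features are extracted once per image and pairwise match counts are cached in a table, so the stitching order can be chosen best-first without repeating feature extraction. This is a simplified reading, not the authors' exact data structure.

```python
import cv2
import numpy as np

def build_match_index(images):
    """Extract features once per image and tabulate pairwise match counts,
    so the stitching order can be chosen best-first without re-running matching."""
    orb = cv2.ORB_create(2000)
    feats = [orb.detectAndCompute(img, None) for img in images]   # (keypoints, descriptors)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    n = len(images)
    index = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if feats[i][1] is None or feats[j][1] is None:
                continue
            matches = matcher.match(feats[i][1], feats[j][1])
            index[i, j] = index[j, i] = len(matches)
    # Stitch the pair with the largest entry first, then its best neighbour, and so on.
    return index
```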
36

Lamprinou, Nefeli, Nikolaos Nikolikos, and Emmanouil Z. Psarakis. "Groupwise Image Alignment via Self Quotient Images." Sensors 20, no. 8 (April 19, 2020): 2325. http://dx.doi.org/10.3390/s20082325.

Анотація:
Compared with pairwise registration, groupwise registration can handle a large population of images simultaneously in an unbiased way. In this work we improve upon state-of-the-art pixel-level, Least-Squares (LS)-based groupwise image registration methods. Specifically, the registration technique is adapted through the use of Self Quotient Images (SQI) so that it can solve the groupwise registration of photometrically distorted, partially occluded, unimodal, and multimodal images. Moreover, the proposed groupwise technique is linear in the cardinality of the image set and can therefore handle large image sets with low complexity. Applying the technique to a series of experiments on the groupwise registration of photometrically and geometrically distorted, partially occluded faces as well as unimodal and multimodal magnetic resonance image sets, and comparing it with the Lucas–Kanade Entropy (LKE) algorithm, yields very promising results in terms of alignment quality, measured by the mean Peak Signal-to-Noise Ratio (mPSNR) and mean Structural Similarity (mSSIM), and in terms of computational cost.
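The Self Quotient Image at the heart of this method is simple to compute: the image is divided by a smoothed copy of itself, which suppresses slowly varying illumination before alignment. A basic isotropic-Gaussian variant is sketched below; the paper may use a different smoothing kernel.

```python
import cv2
import numpy as np

def self_quotient_image(img, sigma=5.0, eps=1e-3):
    """Self Quotient Image: the image divided by a smoothed copy of itself.
    Removes slowly varying illumination so photometrically distorted images
    become comparable before (groupwise) alignment."""
    img = img.astype(np.float32) + eps
    smoothed = cv2.GaussianBlur(img, (0, 0), sigma)
    return img / (smoothed + eps)
```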
37

Wu, Shu Guang, Shu He, and Xia Yang. "The Application of SIFT Method towards Image Registration." Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.

Анотація:
The scale-invariant feature transform (SIFT) is commonly used in object recognition. This work addresses the problems of large memory consumption and low computation speed of the SIFT algorithm in point-feature-based image registration. SIFT features are invariant to image scale and rotation and provide robust matching across a substantial range of affine distortion. Experiments show that, while registration accuracy remains stable, the proposed algorithm solves the problem of high memory requirements and greatly improves efficiency, making it applicable to registering remote sensing images of large areas.
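For reference, the standard SIFT registration baseline this abstract builds on (detect, ratio-test matching, RANSAC homography) looks as follows in OpenCV (≥ 4.4); it illustrates the baseline, not the paper's memory-saving modification.

```python
import cv2
import numpy as np

def sift_register(ref, sensed):
    """Classic SIFT + Lowe ratio test + RANSAC homography; warps `sensed` onto `ref`."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(sensed, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des2, des1, k=2)                 # sensed -> ref
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    # At least 4 good matches are needed for a homography.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(sensed, H, (w, h)), H
```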
38

Zhang, Shuang, Lichun Sui, Rongrong Zhou, Zhangyuan Xun, Chengyan Du, and Xiao Guo. "Mountainous SAR Image Registration Using Image Simulation and an L2E Robust Estimator." Sustainability 14, no. 15 (July 29, 2022): 9315. http://dx.doi.org/10.3390/su14159315.

Анотація:
Synthetic Aperture Radar (SAR) is one of the most widely used methods to extract elevation information and identify large-scale deformations in mountainous areas, yet homologous points in stereo SAR image pairs are difficult to identify because of complex geometric and radiometric distortions. In this paper, a new approach for mountainous-area images is proposed. First, a simulated SAR image and a look-up table are generated from DEM data using a range-Doppler model and an empirical formula. Then, the RPM-L2E point-matching algorithm is used to match the simulated and real images, indirectly obtaining the feature points of the real SAR images. Finally, accurate registration of the mountainous areas in the SAR images is achieved with a polynomial transform. Experimental verification uses mountainous SAR images from both the same sensor and different sensors; compared with two state-of-the-art image registration algorithms, the method yields better results. The approach effectively solves the registration problem for SAR images of mountainous areas and overcomes the poor adaptability and low accuracy of traditional SAR registration methods in such terrain.
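The final step of this pipeline, fitting a polynomial transform to matched control points, reduces to a least-squares solve; a second-order polynomial is assumed in the sketch below.

```python
import numpy as np

def fit_poly2_transform(src_pts, dst_pts):
    """Fit a 2nd-order polynomial mapping (x, y) -> (x', y') from matched control points.
    src_pts and dst_pts are (N, 2) arrays of corresponding coordinates."""
    x, y = src_pts[:, 0], src_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])  # design matrix
    coeff_x, *_ = np.linalg.lstsq(A, dst_pts[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, dst_pts[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_poly2(coeff_x, coeff_y, pts):
    """Apply the fitted polynomial transform to an (N, 2) array of points."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return np.column_stack([A @ coeff_x, A @ coeff_y])
```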
39

Yang, Han, Xiaorun Li, Liaoying Zhao, and Shuhan Chen. "A Novel Coarse-to-Fine Scheme for Remote Sensing Image Registration Based on SIFT and Phase Correlation." Remote Sensing 11, no. 15 (August 6, 2019): 1833. http://dx.doi.org/10.3390/rs11151833.

Анотація:
Automatic image registration has been widely used in remote sensing applications. However, feature-based registration is sometimes inaccurate and unstable for images with large differences in scale, grayscale, and texture. In this manuscript, a coarse-to-fine registration scheme is proposed that combines the advantages of feature-based registration and phase-correlation-based registration. The scheme consists of four steps. First, a feature-based method is adopted for coarse registration, with a geometrical outlier removal step that exploits the geometric similarity of inliers to improve accuracy. Then, the sensed image is resampled according to the coarse registration result under an affine deformation model. After that, the modified sensed image is registered to the reference image by extended phase correlation. Lastly, the final registration result is obtained by fusing the coarse and fine registrations. The high generality of feature-based registration and the high accuracy of extended phase-correlation registration are both preserved in the proposed method. Experimental results on several remote sensing images taken from published image registration papers demonstrate the robustness and accuracy of the proposed method. The evaluation covers root mean square error (RMSE), Laplace mean square error (LMSE), and red–green overlay registration results.
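The fine stage of such a coarse-to-fine scheme amounts to a phase-correlation estimate of the residual shift after the coarse affine warp. A plain-NumPy version (translation only, integer-pixel accuracy, not the paper's extended variant) is:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Return (dy, dx) such that shifting `mov` by it best aligns it to `ref`
    (pure translation, integer-pixel accuracy)."""
    F1 = np.fft.fft2(ref.astype(float))
    F2 = np.fft.fft2(mov.astype(float))
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12                       # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Peaks past the midpoint correspond to negative shifts (circular wrap-around).
    size = np.array(corr.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak  # (dy, dx)

# Usage: dy, dx = phase_correlation_shift(ref, mov)
#        M = np.float32([[1, 0, dx], [0, 1, dy]])
#        aligned = cv2.warpAffine(mov, M, (mov.shape[1], mov.shape[0]))
```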
40

Lu, Junyan, Hongguang Jia, Tie Li, Zhuqiang Li, Jingyu Ma, and Ruifei Zhu. "An Instance Segmentation Based Framework for Large-Sized High-Resolution Remote Sensing Images Registration." Remote Sensing 13, no. 9 (April 23, 2021): 1657. http://dx.doi.org/10.3390/rs13091657.

Анотація:
Feature-based remote sensing image registration methods have achieved a great deal. However, they face limitations in applicability, automation, accuracy, efficiency, and robustness when registering large high-resolution remote sensing images. To address these issues, we propose a novel instance-segmentation-based registration framework specifically for large-sized high-resolution remote sensing images. First, we design an instance segmentation model based on a convolutional neural network (CNN), which efficiently extracts fine-grained instances as deep features for local area matching. Then, a feature-based method combined with the instance segmentation results is adopted to obtain more accurate local feature matching. Finally, multiple constraints derived from the instance segmentation results are introduced for outlier removal. In experiments on high-resolution remote sensing image registration, the proposed method effectively copes with sensed images that have poor positioning accuracy. In addition, it achieves superior accuracy and competitive robustness compared with state-of-the-art feature-based methods, while remaining efficient.
41

Wu, Ang, Yanqiong Shi, Rongsheng Lu, and Zilong Zhang. "Sequence Image Registration for Large Depth of Microscopic Focus Stacking." IEEE Access 8 (2020): 6533–42. http://dx.doi.org/10.1109/access.2019.2963633.

42

Lombaert, Herve, Leo Grady, Xavier Pennec, Nicholas Ayache, and Farida Cheriet. "Spectral Log-Demons: Diffeomorphic Image Registration with Very Large Deformations." International Journal of Computer Vision 107, no. 3 (December 12, 2013): 254–71. http://dx.doi.org/10.1007/s11263-013-0681-5.

43

WEI, Shunjun, Xinxin TANG, and Xiaoling ZHANG. "Image registration algorithm for InSAR large scenes via DFT model." National Remote Sensing Bulletin 23, no. 5 (2019): 859–70. http://dx.doi.org/10.11834/jrs.20197459.

44

Scherf, N., J. Einenkel, L. C. Horn, N. Wentzensen, M. Loeffler, J. P. Kuska, and U. D. Braumann. "Large Histological Serial Sections for Computational Tissue Volume Reconstruction." Methods of Information in Medicine 46, no. 05 (2007): 614–22. http://dx.doi.org/10.1160/me9065.

Анотація:
Summary Objectives: A proof-of-principle study was conducted for microscopic tissue volume reconstruction using a new image processing chain operating on alternately stained large histological serial sections. Methods: Digital histological images were obtained from conventional brightfield transmitted-light microscopy. A powerful nonparametric nonlinear optical-flow-based registration approach was used. In order to apply a simple but computationally feasible sum-of-squared-differences similarity measure even across differing histological stainings, a new consistent tissue segmentation procedure was placed upstream. Results: Two reconstructions from uterine cervix carcinoma specimens were accomplished, one alternately stained with p16INK4a (surrogate tumor marker) and H&E (routine reference), and another with three different alternate stainings, H&E, p16INK4a, and CD3 (a T-lymphocyte marker). In both cases, owing to the segmentation-based, reference-free nonlinear registration procedure, the resulting tissue reconstructions exhibit very smooth image-to-image transitions without impairing warps. Conclusions: The combination of modern nonparametric nonlinear registration and consistent tissue segmentation provides superior tissue reconstruction quality.
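The optical-flow registration step can be illustrated with a generic dense-flow warp in OpenCV (Farnebäck flow plus remapping); this is not the variational scheme used in the paper, and it assumes the consistent tissue segmentation has already turned adjacent sections into comparable 8-bit grayscale images.

```python
import cv2
import numpy as np

def flow_warp(fixed, moving):
    """Estimate dense flow from `fixed` to `moving` and warp `moving` back onto `fixed`.
    Both inputs are assumed to be 8-bit single-channel images of the same size."""
    flow = cv2.calcOpticalFlowFarneback(fixed, moving, None,
                                        pyr_scale=0.5, levels=5, winsize=31,
                                        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)
    h, w = fixed.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(moving, map_x, map_y, cv2.INTER_LINEAR)
```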
45

Zhang, Xuming, Yao Zhou, Peng Qiao, Xiaoning Lv, Jimin Li, Tianyu Du, and Yiming Cai. "Image Registration Algorithm for Remote Sensing Images Based on Pixel Location Information." Remote Sensing 15, no. 2 (January 11, 2023): 436. http://dx.doi.org/10.3390/rs15020436.

Анотація:
Registration between remote sensing images has been a research focus in the field of remote sensing image processing. Most existing image registration algorithms applied to feature point matching are derived from image feature extraction methods, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and Siamese neural networks. Such methods have difficulty achieving accurate image registration where there is a large bias in the image features or no significant feature points. Aiming to solve this problem, this paper proposes an algorithm for multi-source image registration based on geographical location information (GLI). By calculating the geographic location corresponding to each pixel in the image, the ideal projected pixel position in the corresponding image is obtained through spatial coordinate transformation, and the correspondence between the two images is calculated by combining multiple sets of registration points. The simulation experiment shows that, under the selected common simulation parameters, the average relative registration-point error between the two images is 12.64 pixels, and the corresponding ground registration accuracy is better than 6.5 m. In the registration experiment involving remote sensing images from different sources, the average registration pixel error of the algorithm is 20.92 pixels, and the registration error at the image center is 21.24 pixels; in comparison, the image-center registration error given by a convolutional neural network (CNN) is 142.35 pixels after the registration error is manually eliminated. For the registration of homologous and featureless remote sensing images, the SIFT algorithm can only offer one set of registration points for the correct region and the neural network cannot achieve accurate registration, whereas the registration accuracy of the presented algorithm is 7.2 pixels, corresponding to a ground registration accuracy of 4.32 m and achieving more accurate registration between featureless images.
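The core operation, mapping a pixel of the sensed image to geographic coordinates and then to the corresponding pixel of the reference image, reduces to applying one affine geotransform and inverting the other. The sketch below assumes GDAL-style six-parameter geotransforms and ignores map-projection changes.

```python
import numpy as np

def pixel_to_geo(gt, col, row):
    """GDAL-style geotransform gt = (x0, px_w, rot1, y0, rot2, px_h): pixel -> map coords."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

def geo_to_pixel(gt, x, y):
    """Invert the geotransform: map coords -> (col, row) in that image."""
    A = np.array([[gt[1], gt[2]], [gt[4], gt[5]]], dtype=float)
    col, row = np.linalg.solve(A, np.array([x - gt[0], y - gt[3]], dtype=float))
    return col, row

# Project a pixel of the sensed image into the reference image's pixel grid:
#   x, y = pixel_to_geo(gt_sensed, col, row)
#   col_ref, row_ref = geo_to_pixel(gt_ref, x, y)
```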
46

Luo, Ziwei, Jing Hu, Xin Wang, Shu Hu, Bin Kong, Youbing Yin, Qi Song, Xi Wu, and Siwei Lyu. "Stochastic Planner-Actor-Critic for Unsupervised Deformable Image Registration." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1917–25. http://dx.doi.org/10.1609/aaai.v36i2.20086.

Анотація:
Large deformations of organs, caused by diverse shapes and nonlinear shape changes, pose a significant challenge for medical image registration. Traditional registration methods iteratively optimize an objective function under a specific deformation model with meticulous parameter tuning, but they have limited capability in registering images with large deformations. Deep-learning-based methods can learn the complex mapping from input images to their deformation field, but they are regression-based and prone to getting stuck in local minima, particularly when large deformations are involved. To this end, we present Stochastic Planner-Actor-Critic (SPAC), a novel reinforcement-learning-based framework that performs step-wise registration: the moving image is warped successively at each time step until it finally aligns with the fixed image. Because handling high-dimensional continuous action and state spaces is challenging in the conventional reinforcement learning (RL) framework, we introduce a new concept, the 'Plan', into the standard Actor-Critic model; it is low-dimensional and helps the actor generate a tractable high-dimensional action. The entire framework is trained unsupervised and operates end-to-end. We evaluate the method on several 2D and 3D medical image datasets, some of which contain large deformations. The empirical results show consistent, significant gains over state-of-the-art methods.
47

Adelman, Zeev, and Leo Joskowicz. "Deformable registration and region-of-interest image reconstruction in sparse repeat CT scanning." Journal of X-Ray Science and Technology 28, no. 6 (December 5, 2020): 1069–89. http://dx.doi.org/10.3233/xst-200706.

Анотація:
BACKGROUND: Repeat CT scanning is ubiquitous in many clinical situations, e.g., to follow disease progression, to evaluate treatment efficacy, and to monitor interventional CT procedures. However, it incurs cumulative radiation to the patient, which can be significantly reduced by using a region of interest (ROI) and the existing baseline scan. OBJECTIVE: To obtain a high-quality reconstruction of an ROI with a significantly reduced X-ray radiation dose while accounting for deformations. METHODS: We present a new method for deformable registration and image reconstruction inside an ROI in repeat CT scans with a highly reduced X-ray radiation dose based on sparse scanning. The method uses the existing baseline scan data, a user-defined ROI, and a new sparse repeat scan to compute a high-quality repeat-scan ROI image at a significantly reduced radiation dose. It first performs rigid registration between the densely scanned baseline and the sparsely scanned repeat CT scan, followed by deformable registration with a low-order parametric model, both in 3D Radon space and without reconstructing the repeat-scan image. It then reconstructs the repeat-scan ROI without computing the entire repeat-scan image. RESULTS: Experimental results on clinical lung and liver CT scans yield a mean 14× computation speedup and a 7.6-12.5× radiation dose reduction, with a minor image quality loss of 0.0157 in the NRMSE metric. CONCLUSION: The method is considerably faster than existing methods, enabling intraoperative online repeat scanning that is accurate and accounts for position, deformation, and structure changes at a fraction of the radiation dose required by existing methods.
48

Jiang, Ji Tong, Chao Song, and Jun An. "Study on Infrared Thermography Mosaic Algorithm Based on the Feature Point Detection and Registration." Advanced Materials Research 594-597 (November 2012): 1138–42. http://dx.doi.org/10.4028/www.scientific.net/amr.594-597.1138.

Анотація:
When large objects are inspected by infrared thermal imaging, it is difficult to capture the whole object in a single panoramic picture, so several infrared thermographs must be stitched together. Image mosaicking involves four steps: feature detection, feature registration, image transformation, and image fusion. This paper studies an infrared thermography mosaic algorithm based on feature point detection and registration and implements it in MATLAB.
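For comparison, the whole four-step mosaicking pipeline is also available off-the-shelf; a minimal OpenCV equivalent (in Python rather than the MATLAB used in the paper, with example file names) is:

```python
import cv2

# Read the overlapping thermographs (any number, unordered); file names are examples.
images = [cv2.imread(p) for p in ["ir_01.png", "ir_02.png", "ir_03.png"]]

# SCANS mode suits flat, translational captures; PANORAMA suits a rotating camera.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("ir_mosaic.png", mosaic)
else:
    print("Stitching failed with status", status)
```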
49

Luo, Xin, Zuqi Wei, Yuwei Jin, Xiao Wang, Peng Lin, Xufeng Wei, and Wenjian Zhou. "Fast Automatic Registration of UAV Images via Bidirectional Matching." Sensors 23, no. 20 (October 18, 2023): 8566. http://dx.doi.org/10.3390/s23208566.

Анотація:
Image registration plays a vital role in mosaicking multiple UAV (Unmanned Aerial Vehicle) images acquired from different spatial positions of the same scene. To address the problem that many fast registration methods cannot provide both high speed and high accuracy for UAV visible-light images, this work proposes a novel registration framework based on a popular baseline registration algorithm, ORB, which combines Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features). First, the ORB algorithm is used to extract image feature points quickly. On this basis, two bidirectional matching strategies are presented to match the obtained feature points. Then, the PROSAC (Progressive Sample Consensus) algorithm is applied to remove false matches. Finally, experiments are carried out on UAV image pairs of different scenes, including urban areas, roads, buildings, farmland, and forest. Compared with the original version and other state-of-the-art registration methods, the bi-matching ORB algorithm exhibits higher accuracy and faster speed without any training or prior knowledge. Meanwhile, its complexity is low enough for on-board realization.
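The bidirectional matching idea maps directly onto OpenCV's cross-check matcher. The sketch below keeps only mutual matches and uses PROSAC-style sampling where the build provides it (cv2.USAC_PROSAC, OpenCV ≥ 4.5), falling back to plain RANSAC; it is a generic illustration, not the paper's exact bi-matching strategy.

```python
import cv2
import numpy as np

def orb_bimatch_register(ref, sensed):
    """ORB features + cross-check (bidirectional) matching + robust homography."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(sensed, None)
    # crossCheck=True keeps a match only if it is mutual in both directions.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)  # best first
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    method = getattr(cv2, "USAC_PROSAC", cv2.RANSAC)   # PROSAC-style sampler if available
    H, mask = cv2.findHomography(src, dst, method, 3.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(sensed, H, (w, h)), H
```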
50

Zhang, Han, Lin Lei, Weiping Ni, Kenan Cheng, Tao Tang, Peizhong Wang, and Gangyao Kuang. "Registration of Large Optical and SAR Images with Non-Flat Terrain by Investigating Reliable Sparse Correspondences." Remote Sensing 15, no. 18 (September 10, 2023): 4458. http://dx.doi.org/10.3390/rs15184458.

Анотація:
Optical and SAR image registration is the primary procedure to exploit the complementary information from the two different image modal types. Although extensive research has been conducted to narrow down the vast radiometric and geometric gaps so as to extract homogeneous characters for feature point matching, few works have considered the registration issue for non-flat terrains, which will bring in more difficulties for not only sparse feature point matching but also outlier removal and geometric relationship estimation. This article addresses these issues with a novel and effective optical-SAR image registration framework. Firstly, sparse feature points are detected based on the phase congruency moment map of the textureless SAR image (SAR-PC-Moment), which helps to identify salient local regions. Then a template matching process using very large local image patches is conducted, which increases the matching accuracy by a significant margin. Secondly, a mutual verification-based initial outlier removal method is proposed, which takes advantage of the different mechanisms of sparse and dense matching and requires no geometric consistency assumption within the inliers. These two procedures will produce a putative correspondence feature point (CP) set with a low outlier ratio and high reliability. In the third step, the putative CPs are used to segment the large input image of non-flat terrain into dozens of locally flat areas using a recursive random sample consensus (RANSAC) method, with each locally flat area co-registered using an affine transformation. As for the mountainous areas with sharp elevation variations, anchor CPs are first identified, and then optical flow-based pixelwise dense matching is conducted. In the experimental section, ablation studies using four precisely co-registered optical-SAR image pairs of flat terrain quantitatively verify the effectiveness of the proposed SAR-PC-Moment-based feature point detector, big template matching strategy, and mutual verification-based outlier removal method. Registration results on four 1 m-resolution non-flat image pairs prove that the proposed framework is able to produce robust and quite accurate registration results.
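The large-template matching and mutual-verification steps can be illustrated with normalized cross-correlation; the patch size and forward-backward consistency threshold below are illustrative, and in practice the paper matches structural (phase-congruency) representations rather than raw intensities.

```python
import cv2
import numpy as np

def match_patch(search_img, patch):
    """Best position of `patch` in `search_img` by normalized cross-correlation."""
    score = cv2.matchTemplate(search_img, patch, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    return np.array(max_loc), max_val          # (x, y) of the patch's top-left corner

def mutual_verify(optical, sar, pt, half=96, tol=2.0):
    """Forward (optical -> SAR) then backward (SAR -> optical) matching of a large patch;
    the correspondence is kept only if the backward match returns near the start point.
    Assumes the patch windows stay inside both images."""
    x, y = pt                                   # candidate point (x, y) in the optical image
    patch_o = optical[y - half:y + half, x - half:x + half]
    fwd, _ = match_patch(sar, patch_o)
    fx, fy = fwd + half                         # centre of the forward match in the SAR image
    patch_s = sar[fy - half:fy + half, fx - half:fx + half]
    bwd, _ = match_patch(optical, patch_s)
    back_centre = bwd + half
    ok = np.linalg.norm(back_centre - np.array([x, y])) <= tol
    return (fx, fy), ok
```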