A selection of scholarly literature on the topic "Large baseline image registration"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Large baseline image registration".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided that these details are available in the work's metadata.

Journal articles on the topic "Large baseline image registration"

1

Fidler, A., B. Likar, F. Pernus, and U. Skaleric. "Impact of JPEG lossy image compression on quantitative digital subtraction radiography." Dentomaxillofacial Radiology 31, no. 2 (March 2002): 106–12. http://dx.doi.org/10.1038/sj/dmfr/4600670.

Full text of the source
Abstract:
OBJECTIVES The aim of the study was to evaluate the impact of JPEG lossy image compression on the estimation of alveolar bone gain by quantitative digital subtraction radiography (DSR). METHODS Nine dry domestic pig mandible posterior segments were radiographed three times ('Baseline', 'No change', and 'Gain') with standardized projection geometry. Bone gain was simulated by adding artificial bone chips (1, 4, and 15 mg). Images were either compressed before or after registration. No change areas in compressed and subtracted 'No change-Baseline' images and bone gain volumes in compressed and subtracted 'Gain-Baseline' images were calculated and compared to the corresponding measurements performed on original subtracted images. RESULTS Measurements of no change areas ('No change-Baseline') were only slightly affected by compressions down to JPEG 50 (J50) applied either before or after registration. Simulated gain of alveolar bone ('Gain-Baseline') was underestimated when compression before registration was performed. The underestimation was bigger when small bone chips of 1 mg were measured and when higher compression rates were used. Bone chips of 4 and 15 mg were only slightly underestimated when using J90, J70, and J50 compressions before registration. CONCLUSIONS Lossy JPEG compression does not affect the measurements of no change areas by DSR. Images undergoing subtraction should be registered before compression and if so, J90 compression with a compression ratio of 1:7 can be used to detect and measure 4 mg and larger bone gain.
APA, Harvard, Vancouver, ISO, and other styles
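The study above hinges on the order of two steps: geometric registration and lossy JPEG compression before digital subtraction. The Python sketch below contrasts the two orderings on a synthetic radiograph pair; it is only a toy illustration of the workflow, not the authors' pipeline. The simulated shift, the artificial "bone gain" patch, and the image statistics are invented, and quality=90 loosely mirrors the study's J90 setting but is otherwise an assumption.

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import shift

def jpeg_roundtrip(img_u8, quality):
    """Encode and decode an 8-bit grayscale image as JPEG at the given quality."""
    buf = io.BytesIO()
    Image.fromarray(img_u8, mode="L").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float64)

rng = np.random.default_rng(0)
baseline = rng.normal(128, 20, (128, 128)).clip(0, 255)
followup = shift(baseline, (1.3, -0.8), order=3, mode="nearest")   # simulated projection offset
followup[40:50, 40:50] += 25                                       # simulated bone "gain"

base_u8 = baseline.astype(np.uint8)
fol_u8 = followup.clip(0, 255).astype(np.uint8)

# Order 1: register the follow-up image first, then compress both and subtract.
registered = shift(fol_u8.astype(np.float64), (-1.3, 0.8), order=3, mode="nearest")
diff_after = jpeg_roundtrip(registered.clip(0, 255).astype(np.uint8), 90) - jpeg_roundtrip(base_u8, 90)

# Order 2: compress first, then register and subtract (the ordering the study warns about).
reg_compressed = shift(jpeg_roundtrip(fol_u8, 90), (-1.3, 0.8), order=3, mode="nearest")
diff_before = reg_compressed - jpeg_roundtrip(base_u8, 90)

print("measured gain, compression after registration :", diff_after[40:50, 40:50].mean())
print("measured gain, compression before registration:", diff_before[40:50, 40:50].mean())
```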
2

Sun, Quan, Lei Liu, Zhaodong Niu, Yabo Li, Jingyi Zhang, and Zhuang Wang. "A Practical Star Image Registration Algorithm Using Radial Module and Rotation Angle Features." Remote Sensing 15, no. 21 (October 27, 2023): 5146. http://dx.doi.org/10.3390/rs15215146.

Full text of the source
Abstract:
Star image registration is the most important step in the application of astronomical image differencing, stacking, and mosaicking, which requires high robustness, accuracy, and real-time capability on the part of the algorithm. At present, there are no high-performance registration algorithms available in this field. In the present paper, we propose a star image registration algorithm that relies only on radial module features (RMF) and rotation angle features (RAF) while providing excellent robustness, high accuracy, and good real-time performance. The test results on a large amount of simulated and real data show that the comprehensive performance of the proposed algorithm is significantly better than the four classical baseline algorithms as judged by the presence of rotation, insufficient overlapping area, false stars, position deviation, magnitude deviation, and complex sky background, making it a more ideal star image registration algorithm than current alternatives.
APA, Harvard, Vancouver, ISO, and other styles
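The abstract above matches stars through rotation-invariant radial module and rotation angle features. A minimal sketch of that general idea follows, assuming the star-field centroid as reference point and noise-free detections; the published RMF/RAF construction differs in detail, so treat this only as an illustration of why radial distances survive rotation while angle differences reveal it.

```python
import numpy as np

def radial_features(stars):
    """Radial module (distance to the centroid) and angle for each star centroid."""
    center = stars.mean(axis=0)
    d = stars - center
    return np.hypot(d[:, 0], d[:, 1]), np.arctan2(d[:, 1], d[:, 0])

rng = np.random.default_rng(1)
stars_a = rng.uniform(0, 1024, size=(30, 2))

theta = np.deg2rad(12.0)                      # unknown frame rotation to recover
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
stars_b = (stars_a - stars_a.mean(axis=0)) @ R.T + stars_a.mean(axis=0)

mod_a, ang_a = radial_features(stars_a)
mod_b, ang_b = radial_features(stars_b)

# Match stars by nearest radial module (rotation-invariant), then vote on the angle.
matches = [(i, int(np.argmin(np.abs(mod_b - m)))) for i, m in enumerate(mod_a)]
angle_diffs = np.array([ang_b[j] - ang_a[i] for i, j in matches])
angle_diffs = np.arctan2(np.sin(angle_diffs), np.cos(angle_diffs))  # wrap to [-pi, pi]
print("recovered rotation (deg):", np.rad2deg(np.median(angle_diffs)))
```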
3

Wang, Shuxin, Shilei Cao, Dong Wei, Cong Xie, Kai Ma, Liansheng Wang, Deyu Meng, and Yefeng Zheng. "Alternative Baselines for Low-Shot 3D Medical Image Segmentation---An Atlas Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 634–42. http://dx.doi.org/10.1609/aaai.v35i1.16143.

Full text of the source
Abstract:
Low-shot (one/few-shot) segmentation has attracted increasing attention as it works well with limited annotation. State-of-the-art low-shot segmentation methods on natural images usually focus on implicit representation learning for each novel class, such as learning prototypes, deriving guidance features via masked average pooling, and segmenting using cosine similarity in feature space. We argue that low-shot segmentation on medical images should step further to explicitly learn dense correspondences between images to utilize the anatomical similarity. The core ideas are inspired by the classical practice of multi-atlas segmentation, where the indispensable parts of atlas-based segmentation, i.e., registration, label propagation, and label fusion are unified into a single framework in our work. Specifically, we propose two alternative baselines, i.e., the Siamese-Baseline and Individual-Difference-Aware Baseline, where the former is targeted at anatomically stable structures (such as brain tissues), and the latter possesses a strong generalization ability to organs suffering large morphological variations (such as abdominal organs). In summary, this work sets up a benchmark for low-shot 3D medical image segmentation and sheds light on further understanding of atlas-based few-shot segmentation.
APA, Harvard, Vancouver, ISO, and other styles
4

Strittmatter, Anika, Anna Caroli, and Frank G. Zöllner. "A Multistage Rigid-Affine-Deformable Network for Three-Dimensional Multimodal Medical Image Registration." Applied Sciences 13, no. 24 (December 16, 2023): 13298. http://dx.doi.org/10.3390/app132413298.

Full text of the source
Abstract:
Multimodal image registration is an important component of medical image processing, allowing the integration of complementary information from various imaging modalities to improve clinical applications like diagnosis and treatment planning. We proposed a novel multistage neural network for three-dimensional multimodal medical image registration, which addresses the challenge of larger rigid deformations commonly present in medical images due to variations in patient positioning in different scanners and rigid anatomical structures. This multistage network combines rigid, affine and deformable transformations in three stages. The network was trained unsupervised with Mutual Information and Gradient L2 loss. We compared the results of our proposed multistage network with a rigid-affine-deformable registration with the classical registration method NiftyReg as a baseline and a multistage network, which combines affine and deformable transformation, as a benchmark. To evaluate the performance of the proposed multistage network, we used four three-dimensional multimodal in vivo datasets: three renal MR datasets consisting of T1-weighted and T2-weighted MR scans and one liver dataset containing CT and T1-weighted MR scans. Experimental results showed that combining rigid, affine and deformable transformations in a multistage network leads to registration results with a high structural similarity, overlap of the corresponding structures (Dice: 76.7 ± 12.5, 61.1 ± 14.0, 64.8 ± 16.2, 68.1 ± 24.6 for the four datasets) and a low level of image folding (|J| ≤ 0: less than or equal to 1.1%), resulting in a medical plausible registration result.
APA, Harvard, Vancouver, ISO, and other styles
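To make the rigid → affine → deformable staging concrete, the sketch below composes the three transform types into a single sampling grid and warps a moving image with SciPy. It illustrates only the transform composition; the paper's networks, Mutual Information and Gradient L2 losses, and training procedure are not reproduced, and every numeric parameter here is an assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, coords):
    """Sample `moving` at the given (2, H, W) coordinates with linear interpolation."""
    return map_coordinates(moving, coords, order=1, mode="nearest")

H, W = 64, 64
moving = np.random.default_rng(2).random((H, W))
ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
grid = np.stack([ys, xs])                         # identity grid, shape (2, H, W)
c = np.array([(H - 1) / 2.0, (W - 1) / 2.0])      # image centre

# Stage 1: rigid (rotation + translation), applied about the image centre.
theta, t = np.deg2rad(5.0), np.array([2.0, -3.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rigid = (R @ (grid.reshape(2, -1) - c[:, None])) + c[:, None] + t[:, None]

# Stage 2: affine (anisotropic scaling/shear) composed on top of the rigid stage.
A = np.array([[1.05, 0.02],
              [0.00, 0.97]])
affine = (A @ (rigid - c[:, None])) + c[:, None]

# Stage 3: deformable stage adds a smooth, freely varying displacement field.
disp = 1.5 * np.stack([np.sin(xs / 10.0), np.cos(ys / 10.0)]).reshape(2, -1)
deformable = affine + disp

warped = warp(moving, deformable.reshape(2, H, W))
print("warped image shape:", warped.shape)
```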
5

Yao, Guobiao, Jin Zhang, Jianya Gong, and Fengxiang Jin. "Automatic Production of Deep Learning Benchmark Dataset for Affine-Invariant Feature Matching." ISPRS International Journal of Geo-Information 12, no. 2 (January 19, 2023): 33. http://dx.doi.org/10.3390/ijgi12020033.

Full text of the source
Abstract:
To promote the development of deep learning for feature matching, image registration, and three-dimensional reconstruction, we propose a method of constructing a deep learning benchmark dataset for affine-invariant feature matching. Existing images often have large viewpoint differences and areas with weak texture, which may cause difficulties for image matching, with respect to few matches, uneven distribution, and single matching texture. To solve this problem, we designed an algorithm for the automatic production of a benchmark dataset for affine-invariant feature matching. It combined two complementary algorithms, ASIFT (Affine-SIFT) and LoFTR (Local Feature Transformer), to significantly increase the types of matching patches and the number of matching features and generate quasi-dense matches. Optimized matches with uniform spatial distribution were obtained by the hybrid constraints of the neighborhood distance threshold and maximum information entropy. We applied this algorithm to the automatic construction of a dataset containing 20,000 images: 10,000 ground-based close-range images, 6000 satellite images, and 4000 aerial images. Each image had a resolution of 1024 × 1024 pixels and was composed of 128 pairs of corresponding patches, each with 64 × 64 pixels. Finally, we trained and tested the affine-invariant deep learning model, AffNet, separately on our dataset and the Brown dataset. The experimental results showed that the AffNet trained on our dataset had advantages, with respect to the number of matching points, match correct rate, and matching spatial distribution on stereo images with large viewpoint differences and weak texture. The results verified the effectiveness of the proposed algorithm and the superiority of our dataset. In the future, our dataset will continue to expand, and it is intended to become the most widely used benchmark dataset internationally for the deep learning of wide-baseline image matching.
APA, Harvard, Vancouver, ISO, and other styles
6

Schmit, Timothy J., Paul Griffith, Mathew M. Gunshor, Jaime M. Daniels, Steven J. Goodman, and William J. Lebair. "A Closer Look at the ABI on the GOES-R Series." Bulletin of the American Meteorological Society 98, no. 4 (April 1, 2017): 681–98. http://dx.doi.org/10.1175/bams-d-15-00230.1.

Full text of the source
Abstract:
Abstract The Advanced Baseline Imager (ABI) on board the Geostationary Operational Environmental Satellite-R (GOES-R) is America’s next-generation geostationary advanced imager. GOES-R launched on 19 November 2016. The ABI is a state-of-the-art 16-band radiometer, with spectral bands covering the visible, near-infrared, and infrared portions of the electromagnetic spectrum. Many attributes of the ABI—such as spectral, spatial, and temporal resolution; radiometrics; and image navigation/registration—are much improved from the current series of GOES imagers. This paper highlights and discusses the expected improvements of each of these attributes. From ABI data many higher-level-derived products can be generated and used in a large number of environmental applications. The ABI’s design allows rapid-scan and contiguous U.S. imaging automatically interleaved with full-disk scanning. In this paper the expected instrument attributes are covered, as they relate to signal-to-noise ratio, image navigation and registration, the various ABI scan modes, and other parameters. There will be several methods for users to acquire GOES-R imagery and products depending on their needs. These include direct reception of the imagery via the satellite downlink and an online-accessible archive. The information from the ABI on the GOES-R series will be used for many applications related to severe weather, tropical cyclones and hurricanes, aviation, natural hazards, the atmosphere, the ocean, and the cryosphere. The ABI on the GOES-R series is America’s next-generation geostationary advanced imager and will dramatically improve the monitoring of many phenomena at finer time and space scales.
APA, Harvard, Vancouver, ISO, and other styles
7

Johnson, J. Patrick, Doniel Drazin, Wesley A. King, and Terrence T. Kim. "Image-guided navigation and video-assisted thoracoscopic spine surgery: the second generation." Neurosurgical Focus 36, no. 3 (March 2014): E8. http://dx.doi.org/10.3171/2014.1.focus13532.

Full text of the source
Abstract:
Object Video-assisted thoracoscopic surgery (VATS) has evolved for treatment of a variety of spinal disorders. Early incorporation with image-guided surgery (IGS) was challenged due to reproducibility and adaptability, limiting the procedure's acceptance. In the present study, the authors report their experience with second-generation IGS and VATS technologies for anterior thoracic minimally invasive spinal (MIS) procedures. Methods The surgical procedure is described in detail including operating room set-up, patient positioning (a lateral decubitus position), placement of the spinal reference frame and portal, radiographic localization, registration, surgical instruments, and the image-guided thoracoscopic discectomy. Results Combined IGS and VATS procedures were successfully performed and assisted in anatomical localization in 14 patients. The mean patient age was 59 years (range 32–73 years). Disc herniation pathology represented the most common indication for surgery (n = 8 patients); intrathoracic spinal tumors were present in 4 patients and the remaining patients had infection and ossification of the posterior longitudinal ligament. All patients required chest tube drainage postoperatively, and all but 1 patient had drainage discontinued the following day. The only complication was a seroma that was presumed to be due to steroid therapy for postoperative weakness. At the final follow-up, 11 of the patients were improved neurologically, 2 patients had baseline neurological status, and the 1 patient with postoperative weakness was able to ambulate, albeit with an assistive device. The evolution of thoracoscopic surgical procedures occurring over 20 years is presented, including their limitations. The combination of VATS and IGS technologies is discussed including their safety and the importance of 3D imaging. In cases of large open thoracotomy procedures, surgeries require difficult, extensive, and invasive access through the chest cavity; using a MIS procedure can potentially eliminate many of the complications and morbidities associated with large open procedures. The authors report their experience with thoracic spinal surgeries that involved MIS procedures and the new technologies. Conclusions The most significant advance in IGS procedures has resulted from intraoperative CT scanning and automatic registration with the IGS workstation. Image guidance can be used in conjunction with VATS techniques for thoracic discectomy, spinal tumors, infection, and ossification of the posterior longitudinal ligament. The authors' initial experience has revealed this technique to be useful and potentially applicable to other MIS procedures.
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Zhenning, Xiaolei Lv, Ye Yun, and Wei Duan. "A Parallel Sequential SBAS Processing Framework Based on Hadoop Distributed Computing." Remote Sensing 16, no. 3 (January 25, 2024): 466. http://dx.doi.org/10.3390/rs16030466.

Full text of the source
Abstract:
With the rapid development of microwave remote sensing and SAR satellite systems, the use of InSAR techniques has been greatly encouraged due to the abundance of SAR data with unprecedented temporal and spatial coverage. Small Baseline Subset (SBAS) is a promising time-series InSAR method for applications involving deformation monitoring of the Earth’s crust, and the sequential SBAS method is an extension of SBAS that allows long-term and large-scale surface displacements to be obtained with continuously auto-updating measurement results. As the Chinese LuTan-1 SAR system has begun acquiring massive SAR image data, the need for an efficient and lightweight InSAR processing platform has become urgent in various research fields. However, traditional sequential algorithms are incapable of meeting the huge challenges of low efficiency and frequent human interaction in large-scale InSAR data processing. Therefore, this study proposes a distributed parallel sequential SBAS (P2SBAS) processing chain based on Hadoop by effectively parallelizing and improving the current sequential SBAS method. P2SBAS mainly consists of two components: (1) a distributed SAR data storage platform based on HDFS, which supports efficient inter-node data transfer and continuous online data acquisition, and (2) several parallel InSAR processing algorithms based on the MapReduce model, including image registration, filtering, phase unwrapping, sequential SBAS processing, and so on. By leveraging the capabilities associated with the distributed nature of the Hadoop platform, these algorithms are able to efficiently utilize the segmentation strategy and perform careful boundary processing. These parallelized InSAR algorithm modules can achieve their goals on different nodes in the Hadoop distributed environment, thereby maximizing computing resources and improving the overall performance while comprehensively considering performance and precision. In addition, P2SBAS provides better computing and storage capabilities for small- and medium-sized teams compared to popular InSAR processing approaches based on cloud computing or supercomputing platforms, and it can be easily deployed on clusters thanks to the integration of various existing computing components. Finally, to demonstrate and evaluate the efficiency and accuracy of P2SBAS, we conducted comparative experiments on a set of 32 TerraSAR images of Beijing, China. The results demonstrate that P2SBAS can fully utilize various computing nodes to improve InSAR processing and can be applied well in large-scale LuTan-1 InSAR applications in the future.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Jingyi, Mengqi Han, Yongsen Chen, Bin Wu, Yifan Wu, Weijie Jia, JianMo Liu, et al. "Prediction of recurrent ischaemic stroke using radiomics data and machine learning methods in patients with acute ischaemic stroke: protocol for a multicentre, large sample, prospective observational cohort study in China." BMJ Open 13, no. 10 (October 2023): e076406. http://dx.doi.org/10.1136/bmjopen-2023-076406.

Full text of the source
Abstract:
Introduction: Stroke is a leading cause of mortality and disability worldwide. Recurrent strokes result in prolonged hospitalisation and worsened functional outcomes compared with the initial stroke. Thus, it is critical to identify patients who are at high risk of stroke recurrence. This study is positioned to develop and validate a prediction model using radiomics data and machine learning methods to identify the risk of stroke recurrence in patients with acute ischaemic stroke (AIS). Methods and analysis: A total of 1957 patients with AIS will be needed. Enrolment at participating hospitals will continue until the required sample size is reached, and we will recruit as many participants as possible. Multiple indicators including basic clinical data, image data, laboratory data, CYP2C19 genotype and follow-up data will be assessed at various time points during the registry, including baseline, 24 hours, 7 days, 1 month, 3 months, 6 months, 9 months and 12 months. The primary outcome was stroke recurrence. The secondary outcomes were death events, prognosis scores and adverse events. Imaging images were processed using deep learning algorithms to construct a programme capable of automatically labelling the lesion area and extracting radiomics features. The machine learning algorithms will be applied to integrate cross-scale, multidimensional data for exploratory analysis. Then, an ischaemic stroke recurrence prediction model of the best performance for patients with AIS will be established. Calibration, receiver operating characteristic and decision curve analyses will be evaluated. Ethics and dissemination: This study has received ethical approval from the Medical Ethics Committee of the Second Affiliated Hospital of Nanchang University (medical research review No.34/2021), and informed consent will be obtained voluntarily. The research findings will be disseminated through publication in journals and presented at conferences. Trial registration number: ChiCTR2200055209.
APA, Harvard, Vancouver, ISO, and other styles
10

De Backer, Wilfried, Jan De Backer, Ilse Verlinden, Glenn Leemans, Cedric Van Holsbeke, Benjamin Mignot, Martin Jenkins, et al. "Functional respiratory imaging assessment of glycopyrrolate and formoterol fumarate metered dose inhalers formulated using co-suspension delivery technology in patients with COPD." Therapeutic Advances in Respiratory Disease 14 (January 2020): 175346662091699. http://dx.doi.org/10.1177/1753466620916990.

Full text of the source
Abstract:
Background: Functional respiratory imaging (FRI) is a quantitative postprocessing imaging technique used to assess changes in the respiratory system. Using FRI, we characterized the effects of the long-acting muscarinic antagonist (LAMA), glycopyrrolate metered dose inhaler (GP MDI), and the long-acting β2-agonist (LABA), formoterol fumarate metered dose inhaler (FF MDI), on airway volume and resistance in patients with moderate-to-severe chronic obstructive pulmonary disease. Methods: Patients in this phase IIIb, randomized, double-blind crossover study received twice-daily GP MDI (18 μg) and FF MDI (9.6 μg). Primary endpoints were specific (i.e. corrected for lobar volume) image-based airway volume (siVaw) and specific image-based airway resistance (siRaw), measured using FRI. Secondary and other endpoints included additional FRI, spirometry, and body plethysmography parameters. Postdose efficacy assessments were performed within 60–150 min of dosing on day 15. Results: A total of 23 patients were randomized and 19 completed both treatment periods. GP MDI and FF MDI both achieved significant improvements from baseline to day 15 in siVaw [11% ( p = 0.0187) and 23% ( p < 0.0001) increases, respectively] and siRaw [25% ( p = 0.0219) and 44% ( p < 0.0001) reductions, respectively]. Although, on average, improvements were larger for FF MDI than GP MDI, some individuals displayed greater responses with each of the two treatments. These within-patient differences increased with airway generation number. Spirometry and body plethysmography endpoints showed significant improvements from baseline in inspiratory capacity for both treatments, and numeric improvements for other endpoints. Conclusion: Both GP MDI and FF MDI significantly improved siRaw and siVaw at day 15 versus baseline. FRI endpoints demonstrated increased sensitivity relative to spirometry and body plethysmography in detecting differences between treatments in a small number of patients. Intra-patient differences in treatment response between the LAMA and the LABA provide further support for the benefit of dual bronchodilator therapies. ClinicalTrials.gov registration number: NCT02937584 The reviews of this paper are available via the supplemental material section.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Large baseline image registration"

1

Elassam, Abdelkarim. "Learning-based vanishing point detection and its application to large-baseline image registration." Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0084.

Full text of the source
Abstract:
This thesis examines the detection of vanishing points and the horizon line and their application to visual localization tasks in urban environments. Visual localization is a fundamental problem in computer vision that aims to determine the position and orientation of a camera in an environment based solely on visual information. In urban and manufactured environments, vanishing points are important visual landmarks that provide crucial information about the scene's structure, making their detection important for reconstruction and localization tasks. The thesis proposes new deep learning methods to overcome the limitations of existing approaches to vanishing point detection. The first key contribution introduces a novel approach for HL and VP detection. Unlike most existing methods, this method directly infers both the HL and an unlimited number of horizontal VPs, even those extending beyond the image frame. The second key contribution of this thesis is a structure-enhanced VP detector. This method utilizes a multi-task learning framework to estimate multiple horizontal VPs from a single image. It goes beyond simple VP detection by generating masks that identify vertical planar structures corresponding to each VP, providing valuable scene layout information. Unlike existing methods, this approach leverages contextual information and scene structures for accurate estimation without relying on detected lines. Experimental results demonstrate that this method outperforms traditional line-based methods and modern deep learning-based methods. The thesis then explores the use of vanishing points for image matching and registration, particularly in cases where images are captured from vastly different viewpoints. Despite continuous progress in feature extractors and descriptors, these methods often fail in the presence of significant scale or viewpoint variations. The proposed methods address this challenge by incorporating vanishing points and scene structures. One major challenge in using vanishing points for registration is establishing reliable correspondences, especially in large-scale scenarios. This work addresses this challenge by proposing a vanishing point detection method aided by the detection of masks of vertical scene structures corresponding to these vanishing points. To our knowledge, this is the first implementation of a method for vanishing point matching that exploits image content rather than just detected segments. This vanishing point correspondence facilitates the estimation of the camera's relative rotation, particularly in large-scale scenarios. Additionally, incorporating information from scene structures enables more reliable keypoint correspondence within these structures. Consequently, the method facilitates the estimation of relative translation, which is itself constrained by the rotation derived from the vanishing points. The quality of rotation can sometimes be impacted by the imprecision of detected vanishing points. Therefore, we propose a vanishing point-guided image matching method that is much less sensitive to the accuracy of vanishing point detection
APA, Harvard, Vancouver, ISO, and other styles
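One use of matched vanishing points emphasized in the thesis abstract is recovering the relative camera rotation. A generic closed-form recipe for that step is sketched below: back-project matched vanishing points to unit directions with the calibration matrix and solve the resulting orthogonal Procrustes (Wahba) problem by SVD. The intrinsics, the point coordinates, and the ground-truth rotation are invented for the demo, and the code is not taken from the thesis.

```python
import numpy as np

def vp_to_direction(vp_pixel, K):
    """Back-project a vanishing point (pixel coords) to a unit 3D direction."""
    d = np.linalg.inv(K) @ np.array([vp_pixel[0], vp_pixel[1], 1.0])
    return d / np.linalg.norm(d)

def rotation_from_directions(dirs_a, dirs_b):
    """Least-squares rotation R with R @ a_i ~= b_i (Kabsch / SVD solution)."""
    H = dirs_a.T @ dirs_b
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed shared intrinsics

vps_view1 = np.array([[50.0, 260.0], [610.0, 240.0], [330.0, -900.0]])
R_true = np.array([[0.9397, 0.0, 0.3420],
                   [0.0,    1.0, 0.0   ],
                   [-0.3420, 0.0, 0.9397]])      # 20 deg yaw, ground truth for the demo

dirs1 = np.array([vp_to_direction(v, K) for v in vps_view1])
dirs2 = dirs1 @ R_true.T                         # same directions seen from the rotated view

R_est = rotation_from_directions(dirs1, dirs2)
print("rotation error (Frobenius):", np.linalg.norm(R_est - R_true))
```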
2

Al-Shahri, Mohammed. "Line Matching in a Wide-Baseline Stereoview." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1376951775.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Lakemond, Ruan. "Multiple camera management using wide baseline matching." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37668/1/Ruan_Lakemond_Thesis.pdf.

Full text of the source
Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide base-line scenario. Finding sufficient correspondences for camera calibration in wide base-line scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second area involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over the existing methods. It allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: An affine region alignment algorithm that ensures accurate alignment between matched features; A method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; An algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improves the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
APA, Harvard, Vancouver, ISO, and other styles
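The thesis above proposes estimating local affine feature shape from the Hessian matrix rather than the second moment matrix. The snippet below is a schematic version of that idea using Gaussian-derivative filters: assemble the 2x2 Hessian at a candidate point and read the local elliptical shape from its eigen-decomposition. The scales, test image, and point location are placeholders, not values or code from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
img = gaussian_filter(rng.random((128, 128)), sigma=2.0)   # synthetic smooth test image
sigma_d = 1.5                                              # differentiation scale (assumed)

# Second derivatives via Gaussian derivative filters (axis 0 = y, axis 1 = x).
Lxx = gaussian_filter(img, sigma_d, order=(0, 2))
Lyy = gaussian_filter(img, sigma_d, order=(2, 0))
Lxy = gaussian_filter(img, sigma_d, order=(1, 1))

y, x = 64, 64                                              # a candidate feature location
H = np.array([[Lxx[y, x], Lxy[y, x]],
              [Lxy[y, x], Lyy[y, x]]])

# The eigen-decomposition of the Hessian gives the axes of the local elliptical shape
# used to adapt (normalise) the feature patch.
evals, evecs = np.linalg.eigh(H)
print("Hessian eigenvalues:", evals)
print("axis ratio of adapted ellipse:", np.sqrt(np.abs(evals[1] / evals[0])))
```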
4

Shao, Wei. "Identifying the shape collapse problem in large deformation image registration." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2276.

Full text of the source
Abstract:
This thesis examines and identifies the problems of shape collapse in large deformation image registration. Shape collapse occurs in image registration when a region in the moving image is transformed into a set of near zero volume in the target image space. Shape collapse may occur when the moving image has a structure that is either missing or does not sufficiently overlap the corresponding structure in the target image. We state that shape collapse is a problem in image registration because it may lead to the following consequences: (1) Incorrect pointwise correspondence between different coordinate systems; (2) Incorrect automatic image segmentation; (3) Loss of functional signal. The above three disadvantages of registration with shape collapse are illustrated in detail using several examples with both real and phantom data. Shape collapse problem is common in image registration algorithms with large degrees of freedom such as many diffeomorphic image registration algorithms. This thesis proposes a shape collapse measurement algorithm to detect the regions of shape collapse after image registration in pairwise and group-wise registrations. We further compute the shape collapse for a whole population of pairwise transformations such as occurs when registering many images to a common atlas coordinate system. Experiments are presented using the SyN diffeomorphic image registration algorithm and diffeomorphic demons algorithm. We show that shape collapse exists in both of the two large deformation registration methods. We demonstrate how changing the input parameters to the SyN registration algorithm can mitigate the collapse image registration artifacts.
APA, Harvard, Vancouver, ISO, and other styles
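A simple way to flag candidate collapse regions, in the spirit of what the thesis measures, is to look for voxels where the Jacobian determinant of the transformation approaches zero. The NumPy sketch below does this for a synthetic 2D displacement field; it is an illustrative stand-in for, not a reproduction of, the thesis's shape collapse measurement algorithm, and the 0.1 threshold and displacement parameters are arbitrary.

```python
import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)

# Synthetic transformation: identity plus a displacement that squeezes a band of rows.
uy = -0.95 * (ys - 32.0) * np.exp(-((ys - 32.0) ** 2) / 200.0)
ux = np.zeros_like(uy)
phi_y, phi_x = ys + uy, xs + ux            # transformed coordinates

# Jacobian determinant of the mapping (y, x) -> (phi_y, phi_x), via finite differences.
dphiy_dy, dphiy_dx = np.gradient(phi_y)
dphix_dy, dphix_dx = np.gradient(phi_x)
jac_det = dphiy_dy * dphix_dx - dphiy_dx * dphix_dy

collapse_mask = jac_det < 0.1              # near-zero (or negative) volume change
print("fraction of pixels flagged as collapsing:", collapse_mask.mean())
```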
5

Eiben, B. "Integration of biomechanical models into image registration in the presence of large deformations." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1476650/.

Full text of the source
Abstract:
Prone-to-supine breast image registration has potential application in the fields of surgical and radiotherapy planning, and image guided interventions. However, breast image registration of three-dimensional images acquired in different patient positions is a challenging problem, due to large deformations induced to the soft breast tissue caused by the change in gravity loading. Biomechanical modelling is a promising tool to predict gravity induced deformations, however such simulations alone are unlikely to produce good alignment due to inter-patient variability and image acquisition related influences on the breast shape. This thesis presents a symmetric, biomechanical simulation based registration framework which aligns images in a central, stress-free configuration. Soft tissue is modelled as a neo-Hookean material and external forces are considered as the main source of deformation in the original images. The framework successively applies image derived forces directly into the unloading simulation in place of a subsequent image registration step. This results in a biomechanically constrained deformation. Using a finite difference scheme enables simulations to be performed directly in the image space. Motion constrained boundary conditions have been incorporated which can capture tangential motion of membranes and fasciae. The accuracy of the approach is assessed by measuring the target registration error (TRE) using nine prone MRI and supine CT image pairs, one prone-supine CT image pair, and four prone-supine MRI image pairs. The registration reduced the combined mean TRE for all clinical data sets from initially 69.7mm to 5.6mm. Prone-supine image pairs might not always be available in the clinical breast cancer workflow, especially prior to surgery. Hence an alternative surface driven registration methodology was also developed that incorporates biomechanical simulations, material parameter optimisation, and constrained surface matching. For three prone MR images and corresponding supine CT-derived surfaces a final mean TRE of 10.0mm was measured.
APA, Harvard, Vancouver, ISO, and other styles
6

Briand, Thibaud. "Image Formation from a Large Sequence of RAW Images : performance and accuracy." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1017/document.

Full text of the source
Abstract:
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging issue requiring to perform on the fly demosaicking, denoising and super-resolution. Existing algorithms produce high-quality images but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially so that the memory cost only depends on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common system of coordinates using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (that we introduce).We evaluate the performance and the accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution and the residual noise decreases as expected. We obtained results similar to those obtained by slower and memory greedy methods. As generating synthetic data requires an interpolation method, we also study in detail the trigonometric polynomial and B-spline interpolation methods. We derive from this study new fine-tuned interpolation methods
APA, Harvard, Vancouver, ISO, and other styles
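The accumulation step described above (estimating a regular output grid from irregularly sampled registered data with classical kernel regression) can be illustrated with a naive Nadaraya-Watson estimator. The sketch below is O(N*M) and purely didactic: the bandwidth and sample model are assumptions, and the thesis's final deblurring by the inverse asymptotic equivalent filter is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Irregular samples (x, y, value): stand-ins for registered pixels from many RAW frames.
xy = rng.uniform(0, 16, size=(2000, 2))
values = np.sin(xy[:, 0]) + np.cos(xy[:, 1]) + 0.05 * rng.normal(size=2000)

# Regular output grid (the super-resolved lattice).
gx, gy = np.meshgrid(np.linspace(0, 16, 48), np.linspace(0, 16, 48))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

h = 0.4                                              # Gaussian kernel bandwidth (assumed)
d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2)
w = np.exp(-d2 / (2.0 * h * h))                      # kernel weights, shape (grid, samples)
fused = (w @ values) / w.sum(axis=1)                 # Nadaraya-Watson estimate per grid node

print("fused image shape:", fused.reshape(48, 48).shape)
```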
7

König, Lars [Verfasser]. "Matrix-free approaches for deformable image registration with large-scale and real-time applications in medical imaging / Lars König." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2019. http://d-nb.info/1175137189/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Matos, Ana Carolina Fonseca. "Development of a large baseline stereo vision rig for pedestrian and other target detection on road." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17055.

Full text of the source
Abstract:
Master's in Mechanical Engineering
Nowadays, autonomous vehicles are an increasing trend as the major players of this sector, and not only, are focused in developing autonomous cars. The two main advantages of autonomous cars are the higher convenience for the passengers and more safety for the passengers and for the people around, which is what this thesis focuses on. Sometimes, due to distraction or other reasons, the driver does not see an object on the road and crashes, or a pedestrian in the cross walk and the person is run over. This is one of the questions that an ADAS or an autonomous car tries to solve and due to the huge relevance of this more research has been done in this area. One of the most applied systems for ADAS are digital cameras, that provide complex information about the surrounding environment, in addition to LIDAR sensor and others. Following this trend, the use of stereo vision systems is increasing - systems with two cameras, and in this context a question comes up: "what is the ideal distance between the cameras in a stereo system for object and pedestrian detection?". This thesis shows all the development of a stereo vision system: from the development of the necessary software for calculating the objects and pedestrians distance from the setup using two cameras, to the design of a fixing system for the cameras that allows the study of stereo for different baselines. In order to study the influence of the baseline and the focal distance a pedestrian, walking through previously marked positions, and a fixed object, were recorded, in an exterior scenario. The results were analyzed by comparing the automatically calculated distance, using the application, with the real value measured. It was concluded, in the end, that the distance of pedestrians and objects can be calculated, with minimal error, using the software developed and the fixing support system. However, the best results were achieved for the 0.3m baseline and for the 8mm lens.
APA, Harvard, Vancouver, ISO, and other styles
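The trade-off the dissertation investigates is governed by the standard stereo triangulation relation Z = f*B/d: for a fixed disparity error, the depth error shrinks as the baseline B grows. The snippet below runs that arithmetic for a few baselines using invented camera numbers (an 8 mm lens and an assumed pixel pitch); it is a back-of-the-envelope illustration, not the thesis's evaluation.

```python
import numpy as np

f_px = 8e-3 / 4.65e-6          # focal length in pixels: 8 mm lens, 4.65 um pixel pitch (assumed)
z = 20.0                       # true pedestrian distance in metres
disparity_error_px = 0.25      # assumed stereo matching error in pixels

for baseline in (0.1, 0.3, 0.6, 1.0):          # metres
    disparity = f_px * baseline / z            # d = f*B/Z
    # Depth error for a small disparity error: dZ ~= Z^2 / (f*B) * d(d)
    depth_error = (z ** 2) / (f_px * baseline) * disparity_error_px
    print(f"B = {baseline:.1f} m -> disparity = {disparity:6.1f} px, "
          f"depth error ~ {depth_error:.2f} m")
```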
9

Chnafa, Christophe. "Using image-based large-eddy simulations to investigate the intracardiac flow and its turbulent nature." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20112/document.

Full text of the source
Abstract:
The first objective of this thesis is to generate and analyse CFD-based databases for the intracardiac flow in realistic geometries. To this aim, an image-based CFD strategy is applied to both a pathological and a healthy human left hearts. The second objective is to illustrate how the numerical database can be analysed in order to gain insight about the intracardiac flow, mainly focusing on the unsteady and turbulent features. A numerical framework allowing insight in fluid dynamics inside patient-specific human hearts is first presented. The heart cavities and their wall dynamics are extracted from medical images, with the help of an image registration algorithm, in order to obtain a patient-specific moving numerical domain. Flow equations are written on a conformal moving computational domain, using an Arbitrary Lagrangian-Eulerian framework. Valves are modelled using immersed boundaries.Application of this framework to compute flow and turbulence statistics in both a realistic pathological and a realistic healthy human left hearts is presented. The blood flow is characterized by its transitional nature, resulting in a complex cyclic flow. Flow dynamics is analysed in order to reveal the main fluid phenomena and to obtain insights into the physiological patterns commonly detected. It is demonstrated that the flow is neither laminar nor fully turbulent, thus justifying a posteriori the use of Large Eddy Simulation.The unsteady development of turbulence is analysed from the phase averaged flow, flow statistics, the turbulent stresses, the turbulent kinetic energy, its production and through spectral analysis. A Lagrangian analysis is also presented using Lagrangian particles to gather statistical flow data. In addition to a number of classically reported features on the left heart flow, this work reveals how disturbed and transitional the flow is and describes the mechanisms of turbulence production
APA, Harvard, Vancouver, ISO, and other styles
10

Lotz, Johannes [Verfasser], Jan [Akademischer Betreuer] Modersitzki, and Heinz [Akademischer Betreuer] Handels. "Combined local and global image registration and its application to large-scale images in digital pathology / Johannes Lotz ; Akademische Betreuer: Jan Modersitzki, Heinz Handels." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2020. http://d-nb.info/1217024069/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Large baseline image registration"

1

Schelbert, Heinrich R. Image-Based Measurements of Myocardial Blood Flow. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0024.

Full text of the source
Abstract:
Image-based measurements of myocardial blood flow afford the assessment of coronary circulatory function. They reflect functional consequences of coronary stenoses, diffuse epicardial vessel disease and microvascular dysfunction and structural changes and thus provide a measure of the total ischemic burden. Measured flows contain therefore clinically important predictive information. Fundamental to flow measurements are the tissue tracer kinetics, their description through tracer kinetic models, high spatial and temporal resolution imaging devices and accurate extraction of radiotracer tissue concentrations from dynamically acquired images for estimating true flows from the tissue time activity curves. A large body of literature on measurements of myocardial blood flow exists for defining in humans normal values for flow at baseline and during hyperemic stress as well as for the myocardial flow reserve. The role of PET for flow measurements has been well established; initial results with modern SPECT devices are encouraging. Responses of myocardial blood flow to specific challenges like pharmacologic vasodilation and to sympathetic stimulation can uncover functional consequences of focal epicardial coronary stenoses, of conduit vessel disturbances and disease and impairments of microvascular function. Apart from risk stratification, flow measurements may allow detection of early preclinical disease, influence treatment strategies and identify therapy responses.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Large baseline image registration"

1

Risser, Laurent, François-Xavier Vialard, Maria Murgasova, Darryl Holm, and Daniel Rueckert. "Large Deformation Diffeomorphic Registration Using Fine and Coarse Strategies." In Biomedical Image Registration, 186–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14366-3_17.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Grothausmann, Roman, Dženan Zukić, Matt McCormick, Christian Mühlfeld, and Lars Knudsen. "Enabling Manual Intervention for Otherwise Automated Registration of Large Image Series." In Biomedical Image Registration, 23–33. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50120-4_3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Delponte, Elisabetta, Francesco Isgrò, Francesca Odone, and Alessandro Verri. "Large Baseline Matching of Scale Invariant Features." In Image Analysis and Processing – ICIAP 2005, 794–801. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11553595_97.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

He, Jianchun, and Gary E. Christensen. "Large Deformation Inverse Consistent Elastic Image Registration." In Lecture Notes in Computer Science, 438–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45087-0_37.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Barrow, Michael, Nelson Ho, Alric Althoff, Peter Tueller, and Ryan Kastner. "Benchmarking Video with the Surgical Image Registration Generator (SIRGn) Baseline." In Advances in Visual Computing, 320–31. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33723-0_26.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Pei, Marc Niethammer, Dinggang Shen, and Pew-Thian Yap. "Large Deformation Diffeomorphic Registration of Diffusion-Weighted Images." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, 171–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33418-4_22.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Medan, Guy, Achia Kronman, and Leo Joskowicz. "Reduced-Dose Patient to Baseline CT Rigid Registration in 3D Radon Space." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014, 291–98. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10404-1_37.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Mok, Tony C. W., and Albert C. S. Chung. "Large Deformation Diffeomorphic Image Registration with Laplacian Pyramid Networks." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 211–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59716-0_21.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Onofrey, John A., Lawrence H. Staib, and Xenophon Papademetris. "Semi-supervised Learning of Nonrigid Deformations for Image Registration." In Medical Computer Vision. Large Data in Medical Imaging, 13–23. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05530-5_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Onofrey, John A., Lawrence H. Staib, and Xenophon Papademetris. "Semi-supervised Learning of Nonrigid Deformations for Image Registration." In Medical Computer Vision. Large Data in Medical Imaging, 13–23. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14104-6_2.


Conference papers on the topic "Large baseline image registration"

1

Li, Ruizhe, Grazziela Figueredo, Dorothee Auer, Christian Wagner, and Xin Chen. "MrRegNet: Multi-Resolution Mask Guided Convolutional Neural Network for Medical Image Registration with Large Deformations." In 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/isbi56570.2024.10635510.

2

Caner, Gulcin, A. Murat Tekalp, Gaurav Sharma, and Wendi Heinzelman. "Multi-View Image Registration for Wide-Baseline Visual Sensor Networks." In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.313170.

3

"LOG-UNBIASED LARGE-DEFORMATION IMAGE REGISTRATION." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2007. http://dx.doi.org/10.5220/0002048202720279.

4

Liu, Jing, Marian Chuang, Andrew Chisholm, and Pamela Cosman. "Image Registration Robust to Sparse Large Errors." In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2015. http://dx.doi.org/10.1109/embc.2015.7318772.

5

Xu, Qing, Gui-sheng Liao, and Ying Liu. "3-D Baseline Error Estimation for Distributed Small Satellites Based on Image Registration." In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.115.

6

Hering, Alessa, and Stefan Heldmann. "Unsupervised Learning for Large Motion Thoracic CT Follow-Up Registration." In Image Processing, edited by Elsa D. Angelini and Bennett A. Landman. SPIE, 2019. http://dx.doi.org/10.1117/12.2506962.

7

Mang, Andreas, Amir Gholami, and George Biros. "Distributed-Memory Large Deformation Diffeomorphic 3D Image Registration." In SC16: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2016. http://dx.doi.org/10.1109/sc.2016.71.

8

Cheng, Chia-Ming, Shu-Jyuan Lin, Shang-Hong Lai, and Jinn-Cherng Yang. "Improved Novel View Synthesis from Depth Image with Large Baseline." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761649.

9

Graf, Laura Franziska, Hanna Siebert, Sven Mischkewitz, Ron Keuth, and Mattias P. Heinrich. "Highly Accurate Deep Registration Networks for Large Deformation Estimation in Compression Ultrasound." In Image Processing, edited by Ivana Išgum and Olivier Colliot. SPIE, 2023. http://dx.doi.org/10.1117/12.2653870.

10

Chicotay, Sarit, Omid E. David, and Nathan S. Netanyahu. "Image Registration of Very Large Images via Genetic Programming." In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2014. http://dx.doi.org/10.1109/cvprw.2014.56.


Organizational reports on the topic "Large baseline image registration"

1

Blais-Stevens, A., A. Castagner, A. Grenier, and K. D. Brewer. Preliminary results from a subbottom profiling survey of Seton Lake, British Columbia. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/332277.

Abstract:
Seton Lake is a freshwater fiord located in southwestern British Columbia, roughly 4 km west of Lillooet and 250 km north-northeast of Vancouver. Located in the Coast Mountains, it is an alpine lake about 22-km long and roughly 1-1.5 km wide. It is separated from nearby Anderson Lake, located to the west, by a large pre-historic rock avalanche deposit at Seton Portage. The lake stands at about 243 m above sea level and is up to about 150 m deep (BC gov., 1953). Water level is controlled by a hydroelectric dam (i.e., Seton dam) located at the eastern end of the lake. Here, the lake drains east into Seton Canal, a 5 km diversion of the flow of the Seton River, which begins at the Seton dam. The Seton Canal pushes water to the Seton Powerhouse, a hydroelectric generating station at the Fraser River, just south of the community of Sekw'el'was and confluence of the Seton River, which drains into the Fraser River at Lillooet. Seton Portage, Shalatlh, South Shalatlh, Tsal'alh (Shalath), Sekw'el'was (Cayoosh Creek), and T'it'q'et (Lillooet) are communities that surround the lake. Surrounded by mountainous terrain, the lake is flanked at mid-slope by glacial and colluvial sediments deposited during the last glacial and deglacial periods (Clague, 1989; Jakob, 2018). The bedrock consists mainly of mafic to ultramafic volcanic rocks with minor carbonate and argillite from the Carboniferous to Middle Jurassic periods (Journeay and Monger, 1994).

As part of the Public Safety Geoscience Program at the Geological Survey of Canada (Natural Resources Canada), our goal is to provide baseline geoscience information to nearby communities, stakeholders and decision-makers. Our objective was to see what kind of sediments were deposited and specifically if we could identify underwater landslide deposits. Thus, we surveyed the lake using a Pinger SBP sub bottom profiler made by Knudsen Engineering Ltd., with dual 3.5 / 200 kHz transducers mounted to a small boat (see photo). This instrument transmits sound energy down through the water column that reflects off the lake bottom surface and underlying sediment layers. At the lake surface, the reflected sound energy is received by the profiler, recorded on a laptop computer, and integrated with GPS data. These data are processed to generate a two-dimensional image (or profile) showing the character of the lake bottom and underlying sediments along the route that the boat passed over. Our survey in 2022 recorded 98 profiles along Seton Lake. The red transect lines show the locations of the 20 profiles displayed on the poster.

The types of sediments observed are mostly fine-grained glaciolacustrine sediments that are horizontally bedded with a subtle transition between glaciolacustrine to lacustrine (e.g., profiles A-A'; C-C'; F-F'; S-S'). Profile S-S' displays this transition zone. The glaciolacustrine sediments probably were deposited as the Cordilleran Ice Sheet retreated from the local area (~13,000-11,000 years ago; Clague, 2017) and the lacustrine sediments, after the ice receded to present-day conditions. Some of the parallel reflections are interrupted, suggesting abrupt sedimentation by deposits that are not horizontally bedded; these are interpreted as landslide deposits (see pink or blue deposits on profiles). The deposits that show disturbance in the sedimentation found within the horizontal beds are thought to be older landslides (e.g., blue arrows/deposits in profiles C-C'; E-E'; F-F'; G-G'; I-I'; J-J'; K-K'; N-N'; P-P'; Q-Q'; R-R'; T-T'; U-U'), but the ones that are found on top of the horizontally laminated sediments (red arrows/pink deposits), and close to the lake wall, are interpreted to be younger (e.g., profiles B-B'; C-C'; H-H'; K-K'; M-M'; O-O'; P-P'; Q-Q'). At the fan delta just west of Seton dam, where there was no acoustic signal penetration, it is interpreted that the delta failed and brought down coarser deposits at the bottom of the lake (e.g., profiles H-H'; M-M'; and perhaps K-K'). However, these could be glacial deposits, bedrock, or other coarser deposits. Some of the deposits that reflect poor penetration of the acoustic signal, below the glaciolacustrine sediments, could represent glacial deposits, old landslide deposits, or perhaps the presence of gas (orange arrows; e.g., B-B'; D-D'; J-J'; O-O'; T-T').

The preliminary results from sub bottom profiling reveal that there are underwater landslide deposits of widely varying ages buried in the bottom of the lake. However, the exact timing of these is not known. Hence our preliminary survey gives an overview of the distribution of landslides where there seems to be a larger number of landslides recorded in the narrower eastern portion of the lake.