
Journal articles on the topic 'Image reconstruction'



Consult the top 50 journal articles for your research on the topic 'Image reconstruction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Nestor, Adrian, David C. Plaut, and Marlene Behrmann. "Feature-based face representations and image reconstruction from behavioral and neural data." Proceedings of the National Academy of Sciences 113, no. 2 (December 28, 2015): 416–21. http://dx.doi.org/10.1073/pnas.1514551112.

Abstract:
The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach.
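As a rough illustration of the two-step idea described in this abstract (deriving features from data, then recombining them), the sketch below weights synthetic face exemplars by standardized pattern scores to obtain features and reconstructs one face as the mean plus a weighted feature sum; the array sizes, the use of PCA-derived scores, and all variable names are assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_voxels, n_pixels = 60, 500, 64 * 64
patterns = rng.standard_normal((n_faces, n_voxels))   # one fMRI pattern per face
faces = rng.random((n_faces, n_pixels))               # vectorized face images

# Score each face along a few pattern components (here simply PCA of the patterns).
n_feat = 10
pats_c = patterns - patterns.mean(axis=0)
_, _, vt = np.linalg.svd(pats_c, full_matrices=False)
scores = pats_c @ vt[:n_feat].T                       # (n_faces, n_feat)

# Reverse-correlation-style features: faces weighted by standardized scores.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
features = z.T @ (faces - faces.mean(axis=0)) / n_faces   # (n_feat, n_pixels)

# Reconstruct face 0 from its own scores as mean face + weighted feature sum.
recon = faces.mean(axis=0) + z[0] @ features
print(recon.shape)                                    # (4096,)
```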
2

AMERUDDIN, NUR AMALINA, SULAIMAN MD DOM, and MOHD HAFIZI MAHMUD. "EFFECTS OF SMOOTH, MEDIUM SMOOTH AND MEDIUM RECONSTRUCTION KERNELS ON IMAGE QUALITY IN THREE-PHASE CT OF LIVER." Malaysian Applied Biology 50, no. 2 (November 30, 2021): 145–50. http://dx.doi.org/10.55230/mabjournal.v50i2.1974.

Abstract:
Reconstruction kernel is one of the parameters that affects the computed tomography (CT) image quality. This study aimed to evaluate the effects of applying three different reconstruction kernels on image quality in 3-phased CT of the liver. A total of 63 CT liver images including normal liver (n = 43) and liver lesion (n = 20) were retrospectively reviewed. Smooth (B20f), medium smooth (B30f) and medium (B40f) reconstruction kernels were employed in the image reconstruction process. Mean attenuation, image noise, and signal-to-noise ratio (SNR) values from each kernel reconstruction were quantified and compared among those kernels using One Way Analysis of Variance (ANOVA) statistical analysis. Significant changes in image noise and SNR were observed in the normal liver (p < 0.001, respectively) following the application of those reconstruction kernels. However, no significant changes in mean attenuation, image noise, and SNR were demonstrated in the liver lesion (p > 0.05). Application of smooth (B20f), medium smooth (B30f), and medium (B40f) kernel reconstructions would significantly affect the image noise and SNR in the normal liver of CT images instead of liver lesions. Hence, proper selection of reconstruction kernel is important in CT images reconstruction to improve precision in diagnostic CT interpretation.
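For readers who want to mirror this kind of comparison, here is a minimal sketch of the ROI metrics (mean attenuation, noise, SNR) and the one-way ANOVA across kernels; the helper function and the synthetic per-kernel values are illustrative assumptions rather than the study's data.

```python
import numpy as np
from scipy import stats

def roi_metrics(image, mask):
    """Mean attenuation (HU), image noise (SD), and SNR inside an ROI."""
    vals = image[mask]
    mean_hu, noise = vals.mean(), vals.std(ddof=1)
    return mean_hu, noise, mean_hu / noise

rng = np.random.default_rng(1)
demo_img = rng.normal(60.0, 12.0, (128, 128))          # synthetic liver slice (HU)
demo_mask = np.zeros((128, 128), dtype=bool)
demo_mask[40:80, 40:80] = True
print("ROI (mean HU, noise, SNR):", roi_metrics(demo_img, demo_mask))

# Synthetic per-patient SNR values for the three kernels (placeholders only).
snr_b20 = rng.normal(12.0, 1.5, 43)
snr_b30 = rng.normal(10.5, 1.5, 43)
snr_b40 = rng.normal(9.0, 1.5, 43)
f_stat, p_value = stats.f_oneway(snr_b20, snr_b30, snr_b40)
print(f"one-way ANOVA across kernels: F={f_stat:.2f}, p={p_value:.4g}")
```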
3

Kazimierczak, Wojciech, Kamila Kędziora, Joanna Janiszewska-Olszowska, Natalia Kazimierczak, and Zbigniew Serafin. "Noise-Optimized CBCT Imaging of Temporomandibular Joints—The Impact of AI on Image Quality." Journal of Clinical Medicine 13, no. 5 (March 5, 2024): 1502. http://dx.doi.org/10.3390/jcm13051502.

Abstract:
Background: Temporomandibular joint disorder (TMD) is a common medical condition. Cone beam computed tomography (CBCT) is effective in assessing TMD-related bone changes, but image noise may impair diagnosis. Emerging deep learning reconstruction algorithms (DLRs) could minimize noise and improve CBCT image clarity. This study compares standard and deep learning-enhanced CBCT images for image quality in detecting osteoarthritis-related degeneration in TMJs (temporomandibular joints). This study analyzed CBCT images of patients with suspected temporomandibular joint degenerative joint disease (TMJ DJD). Methods: The DLM reconstructions were performed with ClariCT.AI software. Image quality was evaluated objectively via CNR in target areas and subjectively by two experts using a five-point scale. Both readers also assessed TMJ DJD lesions. The study involved 50 patients with a mean age of 28.29 years. Results: Objective analysis revealed a significantly better image quality in DLM reconstructions (CNR levels; p < 0.001). Subjective assessment showed high inter-reader agreement (κ = 0.805) but no significant difference in image quality between the reconstruction types (p = 0.055). Lesion counts were not significantly correlated with the reconstruction type (p > 0.05). Conclusions: The analyzed DLM reconstruction notably enhanced the objective image quality in TMJ CBCT images but did not significantly alter the subjective quality or DJD lesion diagnosis. However, the readers favored DLM images, indicating the potential for better TMD diagnosis with CBCT, meriting more study.
4

Bae, Joungeun, and Hoon Yoo. "Image Enhancement for Computational Integral Imaging Reconstruction via Four-Dimensional Image Structure." Sensors 20, no. 17 (August 25, 2020): 4795. http://dx.doi.org/10.3390/s20174795.

Abstract:
This paper describes the image enhancement of a computational integral imaging reconstruction method via reconstructing a four-dimensional (4-D) image structure. A computational reconstruction method for high-resolution three-dimensional (3-D) images is highly required in 3-D applications such as 3-D visualization and 3-D object recognition. To improve the visual quality of reconstructed images, we introduce an adjustable parameter to produce a group of 3-D images from a single elemental image array. The adjustable parameter controls overlapping in back projection with a transformation of cropping and translating elemental images. It turns out that the new parameter is an independent parameter from the reconstruction position to reconstruct a 4-D image structure with four axes of x, y, z, and k. The 4-D image structure of the proposed method provides more visual information than existing methods. Computer simulations and optical experiments are carried out to show the feasibility of the proposed method. The results indicate that our method enhances the image quality of 3-D images by providing a 4-D image structure with the adjustable parameter.
5

Wen, Mingyun, and Kyungeun Cho. "Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes." Mathematics 11, no. 2 (January 12, 2023): 403. http://dx.doi.org/10.3390/math11020403.

Abstract:
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D reduces a dimension. To alleviate the effects of dimension reduction, we proposed a module to generate depth features that can aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders that estimate 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
6

Niu, Xiaomei. "Interactive 3D reconstruction method of fuzzy static images in social media." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 806–16. http://dx.doi.org/10.1515/jisys-2022-0049.

Abstract:
Because the traditional interactive three-dimensional (3D) reconstruction method for fuzzy static images in social media suffers from poor reconstruction completeness and long reconstruction time, a new interactive 3D reconstruction method for such images is proposed. The fuzzy static image from social media is first preprocessed, and the Harris corner detection method is used to extract feature points from the preprocessed image. Based on the extraction results, the contrastive divergence parameter estimation algorithm is used to learn a restricted Boltzmann machine (RBM) network model, which is divided into input, output, and hidden layers. By combining the RBM-based joint dictionary learning method with a sparse representation model, interactive 3D reconstruction of fuzzy static images in social media is achieved. Experimental results based on CAD software show that the proposed method attains a reconstruction completeness above 95% with a reconstruction time of less than 15 s, improving the completeness and efficiency of the reconstruction, effectively reconstructing fuzzy static images in social media, and increasing the sense of realism of social media images.
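The Harris feature-extraction step mentioned above can be illustrated with OpenCV; the synthetic test image and detector parameters below are assumptions, and the RBM and joint dictionary learning stages of the method are not reproduced.

```python
import cv2
import numpy as np

# Synthetic grayscale test image with a bright rectangle (four obvious corners).
img = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(img, (32, 32), (96, 96), 255, -1)

response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())
print(f"detected {len(corners)} corner pixels")
```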
7

Liu, Xueyan, Limei Zhang, Yining Zhang, and Lishan Qiao. "A Photoacoustic Imaging Algorithm Based on Regularized Smoothed L0 Norm Minimization." Molecular Imaging 2021 (June 1, 2021): 1–13. http://dx.doi.org/10.1155/2021/6689194.

Abstract:
The recently emerging technique of sparse reconstruction has received much attention in the field of photoacoustic imaging (PAI). Compressed sensing (CS) has large potential in efficiently reconstructing high-quality PAI images with sparse sampling signal. In this article, we propose a CS-based error-tolerant regularized smooth L0 (ReSL0) algorithm for PAI image reconstruction, which has the same computational advantages as the SL0 algorithm while having a higher degree of immunity to inaccuracy caused by noise. In order to evaluate the performance of the ReSL0 algorithm, we reconstruct the simulated dataset obtained from three phantoms. In addition, a real experimental dataset from agar phantom is also used to verify the effectiveness of the ReSL0 algorithm. Compared to three L0 norm, L1 norm, and TV norm-based CS algorithms for signal recovery and image reconstruction, experiments demonstrated that the ReSL0 algorithm provides a good balance between the quality and efficiency of reconstructions. Furthermore, the PSNR of the reconstructed image calculated by the introduced method was better than the other three methods. In particular, it can notably improve reconstruction quality in the case of noisy measurement.
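To make the smoothed-L0 idea concrete, the sketch below runs a generic SL0-style recovery loop on a random sparse-recovery problem; it illustrates only the base SL0 scheme under assumed parameters, not the paper's error-tolerant ReSL0 variant or a photoacoustic forward model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)          # measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ y                                        # minimum-norm starting point
sigma = 2 * np.abs(x).max()
for _ in range(15):                                   # decreasing sigma schedule
    for _ in range(3):
        grad = x * np.exp(-x**2 / (2 * sigma**2))     # gradient of the smoothed-L0 surrogate
        x = x - 2.0 * grad                            # step toward a sparser solution
        x = x - A_pinv @ (A @ x - y)                  # project back onto {x : A x = y}
    sigma *= 0.7
print("largest off-support magnitude:", np.abs(x[x_true == 0]).max())
```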
8

Wang, Xuan, Lijun Sun, Abdellah Chehri, and Yongchao Song. "A Review of GAN-Based Super-Resolution Reconstruction for Optical Remote Sensing Images." Remote Sensing 15, no. 20 (October 21, 2023): 5062. http://dx.doi.org/10.3390/rs15205062.

Abstract:
High-resolution images have a wide range of applications in image compression, remote sensing, medical imaging, public safety, and other fields. The primary objective of super-resolution reconstruction of images is to reconstruct a given low-resolution image into a corresponding high-resolution image by a specific algorithm. With the emergence and swift advancement of generative adversarial networks (GANs), image super-resolution reconstruction is experiencing a new era of progress. Unfortunately, there has been a lack of comprehensive efforts to bring together the advancements made in the field of super-resolution reconstruction using generative adversarial networks. Hence, this paper presents a comprehensive overview of the super-resolution image reconstruction technique that utilizes generative adversarial networks. Initially, we examine the operational principles of generative adversarial networks, followed by an overview of the relevant research and background information on reconstructing remote sensing images through super-resolution techniques. Next, we discuss significant research on generative adversarial networks in high-resolution image reconstruction. We cover various aspects, such as datasets, evaluation criteria, and conventional models used for image reconstruction. Subsequently, the super-resolution reconstruction models based on generative adversarial networks are categorized based on whether the kernel blurring function is recognized and utilized during training. We provide a brief overview of the utilization of generative adversarial network models in analyzing remote sensing imagery. In conclusion, we present a prospective analysis of forthcoming research directions pertaining to super-resolution reconstruction methods that rely on generative adversarial networks.
9

Yi, Xiang-Yun, Xiao-Bo Dong, Liang-Gui Zhang, Yan-Chao Sun, Wen-Tao Li, and Tao Zhang. "Compressive Perception Image Reconstruction Technology for Basic Mixed Sparse Basis in Metal Surface Detection." 電腦學刊 35, no. 1 (February 2024): 159–65. http://dx.doi.org/10.53106/199115992024023501011.

Abstract:
Applying Compressed Sensing (CS) technology to robot vision image transmission, an effective method for image reconstruction in robot imaging is proposed to improve reconstruction accuracy. Images are reconstructed using a mixed sparse representation based on the DCT and the circularly symmetric contourlet transform. The basic algorithm used is the Smoothed Projected Landweber (SPL) algorithm, which optimizes the coefficients under the different sparse transformations by incorporating hard thresholding and binary thresholding for the different sparse bases during the iterations. Experiments show that, compared with single-sparse-basis image reconstruction, the proposed method improves reconstruction accuracy.
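A stripped-down projected-Landweber loop with a single DCT sparsifying basis is sketched below to show the kind of iteration involved; the mixed sparse bases, the paper's specific thresholding rules, and all problem sizes here are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
size = 32
coeffs = np.where(rng.random((size, size)) < 0.05,
                  rng.standard_normal((size, size)), 0.0)
x_true = idctn(coeffs, norm="ortho")                  # image that is sparse in the DCT basis

m = 400                                               # number of CS measurements
Phi = rng.standard_normal((m, size * size)) / np.sqrt(m)
y = Phi @ x_true.ravel()

mu = 1.0 / np.linalg.norm(Phi, 2) ** 2                # safe Landweber step size
x = np.zeros(size * size)
for _ in range(200):
    x = x + mu * Phi.T @ (y - Phi @ x)                # Landweber (gradient) step
    c = dctn(x.reshape(size, size), norm="ortho")
    c[np.abs(c) < 0.02] = 0.0                         # hard thresholding in the DCT domain
    x = idctn(c, norm="ortho").ravel()
print("relative error:",
      np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
```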
10

Seetharamaswamy, Shashi Kiran, and Suresh Kaggere Veeranna. "Image reconstruction through compressive sampling matching pursuit and curvelet transform." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6277. http://dx.doi.org/10.11591/ijece.v13i6.pp6277-6284.

Abstract:
An interesting area of research is image reconstruction, which uses algorithms and techniques to transform a degraded image into a good one. The quality of the reconstructed image plays a vital role in the field of image processing. Compressive Sampling is an innovative and rapidly growing method for reconstructing signals. It is extensively used in image reconstruction. The literature uses a variety of matching pursuits for image reconstruction. In this paper, we propose a modified method named compressive sampling matching pursuit (CoSaMP) for image reconstruction that promises to sample sparse signals from far fewer observations than the signal’s dimension. The main advantage of CoSaMP is that it has an excellent theoretical guarantee for convergence. The proposed technique combines CoSaMP with curvelet transform for better reconstruction of image. Experiments are carried out to evaluate the proposed technique on different test images. The results indicate that qualitative and quantitative performance is better compared to existing methods.
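Because CoSaMP is compact, a plain-NumPy version of the greedy loop is sketched below for a generic sparse-recovery problem; the curvelet-domain transform used in the paper is not included, and the problem sizes are assumptions.

```python
import numpy as np

def cosamp(A, y, k, iters=30):
    """Basic CoSaMP: identify 2k candidates, solve least squares, prune to k."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        proxy = A.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * k:]            # candidate support
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        keep = np.argsort(np.abs(ls))[-k:]                    # prune to k terms
        x = np.zeros(n)
        x[support[keep]] = ls[keep]
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x

rng = np.random.default_rng(4)
m, n, k = 80, 256, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
print("recovery error:", np.linalg.norm(cosamp(A, y, k) - x_true))
```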
11

Ahmad, Bilal, Pål Anders Floor, Ivar Farup, and Casper Find Andersen. "Single-Image-Based 3D Reconstruction of Endoscopic Images." Journal of Imaging 10, no. 4 (March 28, 2024): 82. http://dx.doi.org/10.3390/jimaging10040082.

Abstract:
A wireless capsule endoscope (WCE) is a medical device designed for the examination of the human gastrointestinal (GI) tract. Three-dimensional models based on WCE images can assist in diagnostics by effectively detecting pathology. These 3D models provide gastroenterologists with improved visualization, particularly in areas of specific interest. However, the constraints of WCE, such as lack of controllability, and requiring expensive equipment for operation, which is often unavailable, pose significant challenges when it comes to conducting comprehensive experiments aimed at evaluating the quality of 3D reconstruction from WCE images. In this paper, we employ a single-image-based 3D reconstruction method on an artificial colon captured with an endoscope that behaves like WCE. The shape from shading (SFS) algorithm can reconstruct the 3D shape using a single image. Therefore, it has been employed to reconstruct the 3D shapes of the colon images. The camera of the endoscope has also been subjected to comprehensive geometric and radiometric calibration. Experiments are conducted on well-defined primitive objects to assess the method’s robustness and accuracy. This evaluation involves comparing the reconstructed 3D shapes of primitives with ground truth data, quantified through measurements of root-mean-square error and maximum error. Afterward, the same methodology is applied to recover the geometry of the colon. The results demonstrate that our approach is capable of reconstructing the geometry of the colon captured with a camera with an unknown imaging pipeline and significant noise in the images. The same procedure is applied on WCE images for the purpose of 3D reconstruction. Preliminary results are subsequently generated to illustrate the applicability of our method for reconstructing 3D models from WCE images.
12

Rizvi, Saad, Jie Cao, Kaiyu Zhang, and Qun Hao. "Improving Imaging Quality of Real-time Fourier Single-pixel Imaging via Deep Learning." Sensors 19, no. 19 (September 27, 2019): 4190. http://dx.doi.org/10.3390/s19194190.

Abstract:
Fourier single pixel imaging (FSPI) is well known for reconstructing high quality images but only at the cost of long imaging time. For real-time applications, FSPI relies on under-sampled reconstructions, failing to provide high quality images. In order to improve imaging quality of real-time FSPI, a fast image reconstruction framework based on deep learning (DL) is proposed. More specifically, a deep convolutional autoencoder network with symmetric skip connection architecture for real time 96 × 96 imaging at very low sampling rates (5–8%) is employed. The network is trained on a large image set and is able to reconstruct diverse images unseen during training. The promising experimental results show that the proposed FSPI coupled with DL (termed DL-FSPI) outperforms conventional FSPI in terms of image quality at very low sampling rates.
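The sketch below only illustrates the general shape of such a network: a small convolutional autoencoder with one symmetric skip connection operating on 96 x 96 inputs; the layer sizes and the PyTorch implementation are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(d2 + e1)                 # symmetric skip connection

model = SkipAutoencoder()
lowres_recon = torch.randn(1, 1, 96, 96)          # stand-in under-sampled FSPI reconstruction
print(model(lowres_recon).shape)                  # torch.Size([1, 1, 96, 96])
```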
13

Koolstra, Kirsten, Thomas O’Reilly, Peter Börnert, and Andrew Webb. "Image distortion correction for MRI in low field permanent magnet systems with strong B0 inhomogeneity and gradient field nonlinearities." Magnetic Resonance Materials in Physics, Biology and Medicine 34, no. 4 (January 27, 2021): 631–42. http://dx.doi.org/10.1007/s10334-021-00907-2.

Abstract:
Objective: To correct for image distortions produced by standard Fourier reconstruction techniques on low field permanent magnet MRI systems with strong B0 inhomogeneity and gradient field nonlinearities. Materials and methods: Conventional image distortion correction algorithms require accurate ΔB0 maps which are not possible to acquire directly when the B0 inhomogeneities also produce significant image distortions. Here we use a readout gradient time-shift in a TSE sequence to encode the B0 field inhomogeneities in the k-space signals. Using a non-shifted and a shifted acquisition as input, ΔB0 maps and images were reconstructed in an iterative manner. In each iteration, ΔB0 maps were reconstructed from the phase difference using Tikhonov regularization, while images were reconstructed using either conjugate phase reconstruction (CPR) or model-based (MB) image reconstruction, taking the reconstructed field map into account. MB reconstructions were, furthermore, combined with compressed sensing (CS) to show the flexibility of this approach towards undersampling. These methods were compared to the standard fast Fourier transform (FFT) image reconstruction approach in simulations and measurements. Distortions due to gradient nonlinearities were corrected in CPR and MB using simulated gradient maps. Results: Simulation results show that for moderate field inhomogeneities and gradient nonlinearities, ΔB0 maps and images reconstructed using iterative CPR result in comparable quality to that for iterative MB reconstructions. However, for stronger inhomogeneities, iterative MB reconstruction outperforms iterative CPR in terms of signal intensity correction. Combining MB with CS, similar image and ΔB0 map quality can be obtained without a scan time penalty. These findings were confirmed by experimental results. Discussion: In case of B0 inhomogeneities in the order of kHz, iterative MB reconstructions can help to improve both image quality and ΔB0 map estimation.
14

Wang, Rongfang, Yali Qin, Zhenbiao Wang, and Huan Zheng. "Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization." Electronics 11, no. 2 (January 7, 2022): 182. http://dx.doi.org/10.3390/electronics11020182.

Abstract:
Achieving high-quality reconstructions of images is the focus of research in image compressed sensing. Group sparse representation improves the quality of reconstructed images by exploiting the non-local similarity of images; however, block-matching and dictionary learning in the image group construction process leads to a long reconstruction time and artifacts in the reconstructed images. To solve the above problems, a joint regularized image reconstruction model based on group sparse representation (GSR-JR) is proposed. A group sparse coefficients regularization term ensures the sparsity of the group coefficients and reduces the complexity of the model. The group sparse residual regularization term introduces the prior information of the image to improve the quality of the reconstructed image. The alternating direction multiplier method and iterative thresholding algorithm are applied to solve the optimization problem. Simulation experiments confirm that the optimized GSR-JR model is superior to other advanced image reconstruction models in reconstructed image quality and visual effects. When the sensing rate is 0.1, compared to the group sparse residual constraint with a nonlocal prior (GSRC-NLR) model, the gain of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) is up to 4.86 dB and 0.1189, respectively.
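The sparse-coding subproblem that models of this kind repeatedly solve can be illustrated with a plain iterative soft-thresholding (ISTA) routine; the dictionary, sizes, and regularization weight below are assumptions, and the group construction, joint regularizers, and ADMM splitting of GSR-JR are not reproduced.

```python
import numpy as np

def ista(D, y, lam=0.05, iters=300):
    """Solve min_a ||y - D a||^2 / 2 + lam * ||a||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2                     # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - D.T @ (D @ a - y) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(6)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                        # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = 1.0
y = D @ a_true + 0.01 * rng.standard_normal(64)
print("nonzeros recovered:", np.count_nonzero(np.abs(ista(D, y)) > 0.1))
```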
15

López-Montes, Alejandro, Pablo Galve, José Manuel Udias, Jacobo Cal-González, Juan José Vaquero, Manuel Desco, and Joaquín L. Herraiz. "Real-Time 3D PET Image with Pseudoinverse Reconstruction." Applied Sciences 10, no. 8 (April 19, 2020): 2829. http://dx.doi.org/10.3390/app10082829.

Abstract:
Real-time positron emission tomography (PET) may provide information from first-shot images, enable PET-guided biopsies, and allow awake animal studies. Fully-3D iterative reconstructions yield the best images in PET, but they are too slow for real-time imaging. Analytical methods such as Fourier back projection (FBP) are very fast, but yield images of poor quality with artifacts due to noise or data incompleteness. In this work, an image reconstruction based on the pseudoinverse of the system response matrix (SRM) is presented. To implement the pseudoinverse method, the reconstruction problem is separated into two stages. First, the axial part of the SRM is pseudo-inverted (PINV) to rebin the 3D data into 2D datasets. Then, the resulting 2D slices can be reconstructed with analytical methods or by applying the pseudoinverse algorithm again. The proposed two-step PINV reconstruction yielded good-quality images at a rate of several frames per second, compatible with real time applications. Furthermore, extremely fast direct PINV reconstruction of projections of the 3D image collapsed along specific directions can be implemented.
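The core idea, pseudo-inverting a system response matrix once so that reconstruction reduces to a matrix-vector product, can be sketched on a toy 2D problem; the matrix, counts, and truncation level below are assumptions, and the paper's two-step axial rebinning scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_bins = 16 * 16, 600
srm = rng.random((n_bins, n_pix))                     # toy system response matrix
srm /= srm.sum(axis=0, keepdims=True)                 # each pixel's response sums to 1

activity = np.zeros((16, 16))
activity[5:10, 6:12] = 1.0                            # toy activity distribution
projections = srm @ activity.ravel()
projections = rng.poisson(projections * 1e4) / 1e4    # add count noise

srm_pinv = np.linalg.pinv(srm, rcond=1e-3)            # truncated pseudoinverse, computed once offline
recon = (srm_pinv @ projections).reshape(16, 16)      # the "real-time" step
print("mean absolute error:", np.abs(recon - activity).mean())
```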
16

FAN, YI, HONGBING LU, CHONGYANG HAO, ZHENGRONG LIANG, and ZHIMING ZHOU. "FAST ANALYTICAL RECONSTRUCTION OF GATED CARDIAC SPECT WITH NON-UNIFORM ATTENUATION COMPENSATION." International Journal of Image and Graphics 07, no. 01 (January 2007): 87–104. http://dx.doi.org/10.1142/s0219467807002556.

Abstract:
Conventionally, the inverse problem of gated cardiac SPECT is solved by reconstructing the images frame-by-frame, ignoring the inter-frame correlation along the time dimension. To compensate for the non-uniform attenuation for quantitative cardiac imaging, iterative image reconstruction has been a choice which could utilize a priori constraint on the inter-frame correlation for a penalized maximum likelihood (ML) solution. However, iterative image reconstruction in the 4D space involves intensive computations. In this paper, an efficient method for 4D gated SPECT reconstruction is developed based on the Karhunen-Loève (KL) transform and Novikov's inverse formula. The temporal KL transform is first applied on the data sequence to de-correlate the inter-frame correlation and then the 3D principal components in the KL domain are reconstructed frame-by-frame using Novikov's inverse formula with non-uniform attenuation compensation. Finally an inverse KL transform is performed to obtain quantitatively-reconstructed 4D images in the original space. With the proposed method, 4D reconstruction can be achieved at a reasonable computational cost. The results from computer simulations are very encouraging as compared to conventional frame-by-frame filtered back-projection and iterative ordered-subsets ML reconstructions. By discarding high-order KL components for further noise reduction, the computation time could be further reduced.
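The temporal Karhunen-Loève step can be illustrated in a few lines: decorrelate the gated frames along time, keep the leading components, and invert the transform; the synthetic frames and the number of retained components are assumptions, and the Novikov-formula reconstruction itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)
n_frames, n_pix = 8, 64 * 64
base = rng.random(n_pix)
frames = np.stack([base * (1 + 0.2 * np.sin(2 * np.pi * g / n_frames))
                   + 0.05 * rng.standard_normal(n_pix) for g in range(n_frames)])

cov_t = np.cov(frames)                                # temporal (frame-by-frame) covariance
eigval, eigvec = np.linalg.eigh(cov_t)                # eigenvalues in ascending order
kl = eigvec.T @ frames                                # forward KL transform along time

kl[:-3] = 0.0                                         # keep the 3 largest-variance components; discard the rest (mostly noise)
denoised = eigvec @ kl                                # inverse KL transform
print("std of discarded component:", np.std(denoised - frames))
```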
17

Lau, Benjamin K. F., Tess Reynolds, Paul J. Keall, Jan-Jakob Sonke, Shalini K. Vinod, Owen Dillon, and Ricky T. O’Brien. "Reducing 4DCBCT imaging dose and time: exploring the limits of adaptive acquisition and motion compensated reconstruction." Physics in Medicine & Biology 67, no. 6 (March 7, 2022): 065002. http://dx.doi.org/10.1088/1361-6560/ac55a4.

Abstract:
Abstract This study investigates the dose and time limits of adaptive 4DCBCT acquisitions (adaptive-acquisition) compared with current conventional 4DCBCT acquisition (conventional-acquisition). We investigate adaptive-acquisitions as low as 60 projections (∼25 s scan, 6 projections per respiratory phase) in conjunction with emerging image reconstruction methods. 4DCBCT images from 20 patients recruited into the adaptive CT acquisition for personalized thoracic imaging clinical study (NCT04070586) were resampled to simulate faster and lower imaging dose acquisitions. All acquisitions were reconstructed using Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), motion compensated FDK (MCFDK), motion compensated MKB (MCMKB) and simultaneous motion estimation and image reconstruction (SMEIR) algorithms. All reconstructions were compared against conventional-acquisition 4DFDK-reconstruction using Structural SIMilarity Index (SSIM), signal-to-noise ratio (SNR), contrast-to-noise-ratio (CNR), tissue interface sharpness diaphragm (TIS-D), tissue interface sharpness tumor (TIS-T) and center of mass trajectory (COMT) for difference in diaphragm and tumor motion. All reconstruction methods using 110-projection adaptive-acquisition (11 projections per respiratory phase) had a SSIM of greater than 0.92 relative to conventional-acquisition 4DFDK-reconstruction. Relative to conventional-acquisition 4DFDK-reconstruction, 110-projection adaptive-acquisition MCFDK-reconstructions images had 60% higher SNR, 10% higher CNR, 30% higher TIS-T and 45% higher TIS-D on average. The 110-projection adaptive-acquisition SMEIR-reconstruction images had 123% higher SNR, 90% higher CNR, 96% higher TIS-T and 60% higher TIS-D on average. The difference in diaphragm and tumor motion compared to conventional-acquisition 4DFDK-reconstruction was within submillimeter accuracy for all acquisition reconstruction methods. Adaptive-acquisitions resulted in faster scans with lower imaging dose and equivalent or improved image quality compared to conventional-acquisition. Adaptive-acquisition with motion compensated-reconstruction enabled scans with as low as 110 projections to deliver acceptable image quality. This translates into 92% lower imaging dose and 80% less scan time than conventional-acquisition.
18

Boag, A. H., L. A. Kennedy, and M. J. Miller. "Three-Dimensional Microscopic Image Reconstruction of Prostatic Adenocarcinoma." Archives of Pathology & Laboratory Medicine 125, no. 4 (April 1, 2001): 562–66. http://dx.doi.org/10.5858/2001-125-0562-tdmiro.

Abstract:
Abstract Context.—Routine microscopy provides only a 2-dimensional view of the complex 3-dimensional structure that makes up human tissue. Three-dimensional microscopic image reconstruction has not been described previously for prostate cancer. Objectives.—To develop a simple method of computerized 3-dimensional image reconstruction and to demonstrate its applicability to the study of prostatic adenocarcinoma. Methods.—Serial sections were cut from archival paraffin-embedded prostate specimens, immunostained using antikeratin CAM5.2, and digitally imaged. Computer image–rendering software was used to produce 3-dimensional image reconstructions of prostate cancer of varying Gleason grades, normal prostate, and prostatic intraepithelial neoplasia. Results.—The rendering system proved easy to use and provided good-quality 3-dimensional images of most specimens. Normal prostate glands formed irregular fusiform structures branching off central tubular ducts. Prostatic intraepithelial neoplasia showed external contours similar to those of normal glands, but with a markedly complex internal arrangement of branching lumens. Gleason grade 3 carcinoma was found to consist of a complex array of interconnecting tubules rather than the apparently separate glands seen in 2 dimensions on routine light microscopy. Gleason grade 4 carcinoma demonstrated a characteristic form of glandular fusion that was readily visualized by optically sectioning and rotating the reconstructed images. Conclusions.—Computerized 3-dimensional microscopic imaging holds great promise as an investigational tool. By revealing the structural relationships of the various Gleason grades of prostate cancer, this method could be used to refine diagnostic and grading criteria for this common tumor.
19

Yang, Jian, Yong Zhang, Yuanlin Yu, and Ning Zhong. "Nested U-Net Architecture Based Image Segmentation for 3D Neuron Reconstruction." Journal of Medical Imaging and Health Informatics 11, no. 5 (May 1, 2021): 1348–56. http://dx.doi.org/10.1166/jmihi.2021.3379.

Abstract:
Digital reconstruction of neurons is a critical step in studying neuronal morphology and exploring the working mechanism of the brain. In recent years, the focus of neuronal morphology reconstruction has gradually shifted from single neurons to multiple neurons in a whole brain. Microscopic images of a whole brain often have low signal-to-noise-ratio, discontinuous neuron fragments or weak neuron signals. It is very difficult to segment neuronal signals from the background of these images, which is the first step of most automatic reconstruction algorithms. In this study, we propose a Nested U-Net based Ultra-Tracer model (NUNU-Tracer) for better multiple neurons image segmentation and morphology reconstruction. The NUNU-Tracer utilizes nested U-Net (UNet++) deep network to segment 3D neuron images, reconstructs neuron morphologies under the framework of the Ultra-Tracer and prunes branches of noncurrent tracing neurons. The 3D UNet++ takes a 3D microscopic image as its input, and uses scale-space distance transform and linear fusion strategy to generate the segmentation maps for voxels in the image. It is capable of removing noise, repairing broken neurite patterns and enhancing neuronal signals. We evaluate the performance of the 3D UNet++ for image segmentation and NUNU-Tracer for neuron morphology reconstruction on image blocks and neurons, respectively. Experimental results show that they significantly improve the accuracy and length of neuron reconstructions.
20

Anderson, M. D., F. Baron, and M. C. Bentz. "TLDR: time lag/delay reconstructor." Monthly Notices of the Royal Astronomical Society 505, no. 2 (May 19, 2021): 2903–12. http://dx.doi.org/10.1093/mnras/stab1394.

Abstract:
We present the time lag/delay reconstructor (TLDR), an algorithm for reconstructing velocity delay maps in the maximum a posteriori framework for reverberation mapping. Reverberation mapping is a tomographical method for studying the kinematics and geometry of the broad-line region of active galactic nuclei at high spatial resolution. Leveraging modern image reconstruction techniques, including total variation and compressed sensing, TLDR applies multiple regularization schemes to reconstruct velocity delay maps using the alternating direction method of multipliers. Along with the detailed description of the TLDR algorithm we present test reconstructions from TLDR applied to synthetic reverberation mapping spectra as well as a preliminary reconstruction of the Hβ feature of Arp 151 from the 2008 Lick Active Galactic Nuclei Monitoring Project.
21

Kumar, L. Ravi, K. G. S. Venkatesan, and S. Ravichandran. "Cloud-enabled Internet of Things Medical Image Processing Compressed Sensing Reconstruction." International Journal of Scientific Methods in Intelligence Engineering Networks 01, no. 04 (2023): 11–21. http://dx.doi.org/10.58599/ijsmien.2023.1402.

Abstract:
Deep learning supports compressed medical image processing in the Internet of Medical Things (IoMT). Compressed sensing MRI (CS-MRI) acquires data quickly and has a range of medical uses because of its advantages: it reduces motion artifacts and contrast washout, lessens the burden on patients, lowers scanning costs, and bypasses the Nyquist-Shannon sampling barrier. Parallel-imaging-based fast MRI instead uses many coils to reconstruct MRI images from less raw data. This research developed a deep learning-based method for reconstructing CS-MRI images that bridges the gap between typical non-learning algorithms, which employ data from only a single image, and methods that require enormous training datasets. A CS-GAN is proposed for CS-MRI reconstruction. Refinement learning stabilizes the C-GAN-based generator, which eliminates aliasing artifacts and improves the quality of the generated images. Adversarial and information losses are used to recreate the picture while protecting its texture and edges, and consistency is established between image-domain and frequency-domain data. A broad comparison against traditional CS-MRI reconstruction and deep learning methods shows that the C-GAN enhances reconstruction while conserving perceptual visual information. MRI image reconstruction takes 5 milliseconds, allowing real-time analysis.
22

Defrise, Michel, and Grant T. Gullberg. "Image reconstruction." Physics in Medicine and Biology 51, no. 13 (June 20, 2006): R139–R154. http://dx.doi.org/10.1088/0031-9155/51/13/r09.

23

Omer, Osama A. "Reconstruction of High-Resolution Computed Tomography Image in Sinogram Space." International Journal of Mathematics and Computers in Simulation 15 (November 27, 2021): 84–88. http://dx.doi.org/10.46300/9102.2021.15.15.

Abstract:
An important part of any computed tomography (CT) system is the reconstruction method, which transforms the measured data into images. Reconstruction methods for CT can be either analytical or iterative. The analytical methods can be exact, by exact projector inversion, or non-exact, based on back projection (BP). The BP methods are attractive because of their simplicity and low computational cost, but they produce suboptimal images with respect to artifacts, resolution, and noise. This paper deals with improving the image quality of BP by using a super-resolution technique. Super-resolution can be beneficial in improving the image quality of many medical imaging systems without the need for significant hardware alteration. In this paper, we propose to reconstruct a high-resolution image from the measured signals in sinogram space instead of reconstructing low-resolution images and then post-processing them to obtain a higher-resolution image.
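As a point of reference for the back-projection baseline discussed above, a minimal filtered back projection with scikit-image is shown below; the phantom, angles, and filter choice are illustrative, and the proposed sinogram-space super-resolution step is not included.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)          # ~100 x 100 phantom
theta = np.linspace(0.0, 180.0, 90, endpoint=False)   # projection angles in degrees
sinogram = radon(image, theta=theta)                  # simulated measured data
recon = iradon(sinogram, theta=theta, filter_name="ramp")   # filtered back projection
print("RMSE:", np.sqrt(np.mean((recon - image) ** 2)))
```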
24

Graafen, Dirk, Moritz C. Halfmann, Tilman Emrich, Yang Yang, Michael Kreuter, Christoph Düber, Roman Kloeckner, Lukas Müller, and Tobias Jorg. "Optimization of the Reconstruction Settings for Low-Dose Ultra-High-Resolution Photon-Counting Detector CT of the Lungs." Diagnostics 13, no. 23 (November 24, 2023): 3522. http://dx.doi.org/10.3390/diagnostics13233522.

Abstract:
Photon-counting detector computed tomography (PCD-CT) yields improved spatial resolution. The combined use of PCD-CT and a modern iterative reconstruction method, known as quantum iterative reconstruction (QIR), has the potential to significantly improve the quality of lung CT images. In this study, we aimed to analyze the impacts of different slice thicknesses and QIR levels on low-dose ultra-high-resolution (UHR) PCD-CT imaging of the lungs. Our study included 51 patients with different lung diseases who underwent unenhanced UHR-PCD-CT scans. Images were reconstructed using three different slice thicknesses (0.2, 0.4, and 1.0 mm) and three QIR levels (2–4). Noise levels were determined in all reconstructions. Three raters evaluated the delineation of anatomical structures and conspicuity of various pulmonary pathologies in the images compared to the clinical reference reconstruction (1.0 mm, QIR-3). The highest QIR level (QIR-4) yielded the best image quality. Reducing the slice thickness to 0.4 mm improved the delineation and conspicuity of pathologies. The 0.2 mm reconstructions exhibited lower image quality due to high image noise. In conclusion, the optimal reconstruction protocol for low-dose UHR-PCD-CT of the lungs includes a slice thickness of 0.4 mm, with the highest QIR level. This optimized protocol might improve the diagnostic accuracy and confidence of lung imaging.
25

Soltani, Sara, Martin S. Andersen, and Per Christian Hansen. "Tomographic image reconstruction using training images." Journal of Computational and Applied Mathematics 313 (March 2017): 243–58. http://dx.doi.org/10.1016/j.cam.2016.09.019.

26

Zeng, Yujie, Jin Lei, Tianming Feng, Xinyan Qin, Bo Li, Yanqi Wang, Dexin Wang, and Jie Song. "Neural Radiance Fields-Based 3D Reconstruction of Power Transmission Lines Using Progressive Motion Sequence Images." Sensors 23, no. 23 (November 30, 2023): 9537. http://dx.doi.org/10.3390/s23239537.

Abstract:
To address the fuzzy reconstruction effect on distant objects in unbounded scenes and the difficulty in feature matching caused by the thin structure of power lines in images, this paper proposes a novel image-based method for the reconstruction of power transmission lines (PTLs). The dataset used in this paper comprises PTL progressive motion sequence datasets, constructed by a visual acquisition system carried by a developed Flying–walking Power Line Inspection Robot (FPLIR). This system captures close-distance and continuous images of power lines. The study introduces PL-NeRF, that is, an enhanced method based on the Neural Radiance Fields (NeRF) method for reconstructing PTLs. The highlights of PL-NeRF include (1) compressing the unbounded scene of PTLs by exploiting the spatial compression of normal L∞; (2) encoding the direction and position of the sample points through Integrated Position Encoding (IPE) and Hash Encoding (HE), respectively. Compared to existing methods, the proposed method demonstrates good performance in 3D reconstruction, with fidelity indicators of PSNR = 29, SSIM = 0.871, and LPIPS = 0.087. Experimental results highlight that the combination of PL-NeRF with progressive motion sequence images ensures the integrity and continuity of PTLs, improving the efficiency and accuracy of image-based reconstructions. In the future, this method could be widely applied for efficient and accurate 3D reconstruction and inspection of PTLs, providing a strong foundation for automated monitoring of transmission corridors and digital power engineering.
27

He, Jing. "Multimedia Vision Improvement and Simulation in Consideration of Virtual Reality Reconstruction Algorithms." Journal of Electrical and Computer Engineering 2022 (May 12, 2022): 1–10. http://dx.doi.org/10.1155/2022/4968588.

Abstract:
Due to the large noise and many discrete points of the image in the traditional image reconstruction process, the reconstruction quality of the image deviates greatly from the actual target. In this study, the virtual reality reconstruction algorithm is applied to multimedia vision, the virtual reality image is corrected by using the binocular offset positioning method, the denoising process is performed in the image reconstruction process, and the high-pass filter matrix is used to improve the image reproduction. At the same time, the three-dimensional reconstruction algorithm is used to perform correlation retrieval, the ensemble point set and the discrete point set are obtained, the maximum and minimum reconstruction degree areas are clarified, and the deviation reconstruction and peak relocation can be performed. Finally, the experimental test results show that the algorithm in this study can enhance the authenticity of image reconstruction, improve the accuracy of image corner detection, and effectively reduce the noise interference in the process of reconstructing the image.
28

Medeiros, Lia, Dimitrios Psaltis, Tod R. Lauer, and Feryal Özel. "Principal-component Interferometric Modeling (PRIMO), an Algorithm for EHT Data. I. Reconstructing Images from Simulated EHT Observations." Astrophysical Journal 943, no. 2 (February 1, 2023): 144. http://dx.doi.org/10.3847/1538-4357/acaa9a.

Abstract:
Abstract The sparse interferometric coverage of the Event Horizon Telescope (EHT) poses a significant challenge for both reconstruction and model fitting of black hole images. PRIMO is a new principal components analysis-based algorithm for image reconstruction that uses the results of high-fidelity general relativistic, magnetohydrodynamic simulations of low-luminosity accretion flows as a training set. This allows the reconstruction of images that are consistent with the interferometric data and that live in the space of images that is spanned by the simulations. PRIMO follows Monte Carlo Markov Chains to fit a linear combination of principal components derived from an ensemble of simulated images to interferometric data. We show that PRIMO can efficiently and accurately reconstruct synthetic EHT data sets for several simulated images, even when the simulation parameters are significantly different from those of the image ensemble that was used to generate the principal components. The resulting reconstructions achieve resolution that is consistent with the performance of the array and do not introduce significant biases in image features such as the diameter of the ring of emission.
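The core linear-algebra idea, an image expressed as an ensemble mean plus a few principal components fitted to data, can be sketched briefly; the training images, component count, and the ordinary least-squares fit below are assumptions, and PRIMO's Fourier-plane likelihood and Markov chain sampling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
n_train, n_pix = 200, 32 * 32
train = rng.random((n_train, n_pix))                  # stand-in for simulated training images
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:20]                                  # leading principal components

target = train[0] + 0.05 * rng.standard_normal(n_pix) # noisy "observed" image
coeffs, *_ = np.linalg.lstsq(components.T, target - mean, rcond=None)
recon = mean + components.T @ coeffs                  # reconstruction in the span of the components
print("relative residual:", np.linalg.norm(recon - target) / np.linalg.norm(target))
```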
29

Koo, Seul Ah, Yunsub Jung, Kyoung A. Um, Tae Hoon Kim, Ji Young Kim, and Chul Hwan Park. "Clinical Feasibility of Deep Learning-Based Image Reconstruction on Coronary Computed Tomography Angiography." Journal of Clinical Medicine 12, no. 10 (May 16, 2023): 3501. http://dx.doi.org/10.3390/jcm12103501.

Abstract:
This study evaluated the feasibility of deep-learning-based image reconstruction (DLIR) on coronary computed tomography angiography (CCTA). By using a 20 cm water phantom, the noise reduction ratio and noise power spectrum were evaluated according to the different reconstruction methods. Then 46 patients who underwent CCTA were retrospectively enrolled. CCTA was performed using the 16 cm coverage axial volume scan technique. All CT images were reconstructed using filtered back projection (FBP); three model-based iterative reconstructions (MBIR) of 40%, 60%, and 80%; and three DLIR algorithms: low (L), medium (M), and high (H). Quantitative and qualitative image qualities of CCTA were compared according to the reconstruction methods. In the phantom study, the noise reduction ratios of MBIR-40%, MBIR-60%, MBIR-80%, DLIR-L, DLIR-M, and DLIR-H were 26.7 ± 0.2%, 39.5 ± 0.5%, 51.7 ± 0.4%, 33.1 ± 0.8%, 43.2 ± 0.8%, and 53.5 ± 0.1%, respectively. The pattern of the noise power spectrum of the DLIR images was more similar to FBP images than MBIR images. In a CCTA study, CCTA yielded a significantly lower noise index with DLIR-H reconstruction than with the other reconstruction methods. DLIR-H showed a higher SNR and CNR than MBIR (p < 0.05). The qualitative image quality of CCTA with DLIR-H was significantly higher than that of MBIR-80% or FBP. The DLIR algorithm was feasible and yielded a better image quality than the FBP or MBIR algorithms on CCTA.
30

Ungania, Sara, Francesco Maria Solivetti, Marco D’Arienzo, Francesco Quagliani, Isabella Sperduti, Aldo Morrone, Carlo de Mutiis, Vicente Bruzzaniti, and Antonino Guerrisi. "New-Generation ASiR-V for Dose Reduction While Maintaining Image Quality in CT: A Phantom Study." Applied Sciences 13, no. 9 (May 3, 2023): 5639. http://dx.doi.org/10.3390/app13095639.

Abstract:
Over the last few decades, the need to reduce and optimize patient medical radiation exposure has prompted the introduction of novel reconstruction algorithms in computed tomography (CT). Against this backdrop, the present study aimed to assess whether reduced radiation dose CT images reconstructed with the new-generation adaptive statistical iterative reconstruction (ASiR-V) maintain the same image quality as that of routine image reconstruction. In addition, the optimization of image quality parameters for the ASiR-V algorithm (e.g., an optimal combination of blending percentage and noise index (NI)) was investigated. An abdominal reference phantom was imaged using the routine clinical protocol (fixed noise index of 18 and 40% ASiR reconstruction). Reduced radiation dose CT scans were performed with varying NI (22, 24, and 30) and using the ASiR-V reconstruction algorithm. Quantitative and qualitative analyses of image noise, contrast, and resolution were performed against NI and reconstruction blending percentages. Our results confirm the ability of the ASiR-V algorithm to provide images of high diagnostic quality while reducing the patient dose. All the parameters were improved in ASiR-V images as compared to ASiR. Both quantitative and qualitative analyses showed that the best agreement was obtained for the images reconstructed using ASiR-V with NI24 and a high percentage of blending (70–100%). This preliminary study results show that ASiR-V allows for a significant reduction in patient dose (about 40%) while maintaining a good overall image quality when appropriate NI (i.e., 24) is used.
31

Hou, Jingru, Yujuan Si, and Xiaoqian Yu. "A Novel and Effective Image Super-Resolution Reconstruction Technique via Fast Global and Local Residual Learning Model." Applied Sciences 10, no. 5 (March 8, 2020): 1856. http://dx.doi.org/10.3390/app10051856.

Abstract:
The principle of image super-resolution reconstruction (SR) is to pass one or more low-resolution (LR) images through information processing technology to obtain the final high-resolution (HR) image. Convolutional neural networks (CNN) have achieved better results than traditional methods in the process of an image super-resolution reconstruction. However, if the number of neural network layers is increased blindly, it will cause a significant increase in the amount of calculation, increase the difficulty of training the network, and cause the loss of image details. Therefore, in this paper, we use a novel and effective image super-resolution reconstruction technique via fast global and local residual learning model (FGRLR). The principle is to directly train a low-resolution small image on a neural network without enlarging it. This will effectively reduce the amount of calculation. In addition, the stacked local residual block (LRB) structure is used for non-linear mapping, which can effectively overcome the problem of image degradation. After extracting features, use 1 × 1 convolution to perform dimensional compression, and expand the dimensions after non-linear mapping, which can reduce the calculation amount of the model. In the reconstruction layer, deconvolution is used to enlarge the image to the required size. This also reduces the number of parameters. We use skip connections to use low-resolution information for reconstructing high-resolution images. Experimental results show that the algorithm can effectively shorten the running time without affecting the quality of image restoration.
32

Wang, W. J., Zheng Hao Ge, Meng Jiang, and C. L. Li. "The 3D Reconstruction Technology for the Bone SCT Image." Materials Science Forum 626-627 (August 2009): 547–52. http://dx.doi.org/10.4028/www.scientific.net/msf.626-627.547.

Abstract:
The key technologies for 3D reconstruction of bone SCT images for fast biological manufacturing are discussed in this paper. Methods for 3D bone reconstruction are studied, and an appropriate method based on bone SCT images is selected. The SCT image is preprocessed, binarized, and its contours are extracted by an SCT image extraction system. This method achieves the following aims: collecting outline data, recovering the 3D bone surface, and reconstructing a 3D bionic model. The example in this work shows how the 3D model of a bone is successfully obtained through plane-by-plane reconstruction. An image processing algorithm is summarized, thereby providing the information necessary for 3D reconstruction from bone SCT images.
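The per-slice steps named in this abstract, binarization and contour extraction, are illustrated below with OpenCV on a synthetic slice; the threshold and the stand-in image are assumptions, and the 3D surface reconstruction is not shown.

```python
import cv2
import numpy as np

slice_img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(slice_img, (128, 128), 60, 200, -1)        # stand-in "bone" region

_, binary = cv2.threshold(slice_img, 127, 255, cv2.THRESH_BINARY)       # binarization
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("contours found:", len(contours), "points in first contour:", len(contours[0]))
```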
33

Wen, Mingyun, Jisun Park, and Kyungeun Cho. "Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks." Remote Sensing 13, no. 21 (October 22, 2021): 4254. http://dx.doi.org/10.3390/rs13214254.

Abstract:
This study focuses on reconstructing accurate meshes with high-resolution textures from single images. The reconstruction process involves two networks: a mesh-reconstruction network and a texture-reconstruction network. The mesh-reconstruction network estimates a deformation map, which is used to deform a template mesh to the shape of the target object in the input image, and a low-resolution texture. We propose reconstructing a mesh with a high-resolution texture by enhancing the low-resolution texture through use of the super-resolution method. The architecture of the texture-reconstruction network is like that of a generative adversarial network comprising a generator and a discriminator. During the training of the texture-reconstruction network, the discriminator must focus on learning high-quality texture predictions and to ignore the difference between the generated mesh and the actual mesh. To achieve this objective, we used meshes reconstructed using the mesh-reconstruction network and textures generated through inverse rendering to generate pseudo-ground-truth images. We conducted experiments using the 3D-Future dataset, and the results prove that our proposed approach can be used to generate improved three-dimensional (3D) textured meshes compared to existing methods, both quantitatively and qualitatively. Additionally, through our proposed approach, the texture of the output image is significantly improved.
34

Ubaidillah, Allam, Annisa Rahma Fauzia, Adi Teguh Purnomo, Nuruddin Nasution, Wahyu Edy Wibowo, and Supriyanto Ardjo Pawiro. "Effect of the Small Field of View and Imaging Parameters to Image Quality and Dose Calculation in Adaptive Radiotherapy." Polish Journal of Medical Physics and Engineering 29, no. 2 (June 1, 2023): 130–42. http://dx.doi.org/10.2478/pjmpe-2023-0014.

Abstract:
Abstract Background The use of cone beam computed tomography (CBCT) for dose calculation in adaptive radiotherapy has been investigated in many studies. Proper acquisition and reconstruction of preset parameters could improve the accuracy of dose calculation based on CBCT images. This study evaluated the impact of the modified image acquisition and preset reconstruction parameter available in X-Ray Volumetric Imaging (XVI) to improve CBCT image quality and dose calculation accuracy. Materials and methods Calibration curves were generated by scanning the CIRS phantom using CBCT XVI Elekta 5.0.4 and Computed Tomography (CT) Simulator Somatom, which served as CT image reference. Rando and Catphan 503 phantoms were scanned with various acquisition and reconstruction parameters for dose accuracy and image quality tests. The image quality test is uniformity, low contrast visibility, spatial resolution, and geometrical scale test for each image by following the XVI image quality test module. Results Acquisition and reconstruction parameters have an impact on the Hounsfield Unit (HU) value that is used as the HU-Relative Electron Density (RED) calibration curve. The dose difference for all the calibration curves was within 1% and passed the gamma passing rate. Images acquired using 120 kVp, F1 (with Bowtie Filter), and 50 mA (F1-120-50-10) scored the highest Gamma Index (GI) of 98.5%. For the image quality test, it scored 1.20% on the uniformity test, 2.14% on the low contrast visibility test, and 11 lp/cm on the spatial resolution test. However, F1-120-50-10 reconstructed with different reconstructions scored 3.83% and 4 lp/cm in contrast and spatial resolution test, respectively. Conclusion CBCT reconstruction parameters work as a scatter correction. It could improve the dose accuracy and image quality. Nevertheless, without adequate CBCT acquisition protocols, it would produce an image with high uncertainty and cannot be fixed with reconstruction protocols. The F1-120-50-10 protocols generate the highest dose accuracy and image quality.
APA, Harvard, Vancouver, ISO, and other styles
35

Han, Xian-Hua, Yinqiang Zheng, and Yen-Wei Chen. "Hyperspectral Image Reconstruction Using Multi-scale Fusion Learning." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–21. http://dx.doi.org/10.1145/3477396.

Full text
Abstract:
Hyperspectral imaging is a promising imaging modality that simultaneously captures several images of the same scene on narrow spectral bands, and it has made considerable progress in different fields, such as agriculture, astronomy, and surveillance. However, existing hyperspectral (HS) cameras sacrifice spatial resolution to provide a detailed spectral distribution of the imaged scene, which leads to low-resolution (LR) HS images compared with common red-green-blue (RGB) images. Generating a high-resolution HS (HR-HS) image by fusing an observed LR-HS image with the corresponding HR-RGB image has been actively studied. Existing methods for this fusion task generally investigate hand-crafted priors to model the inherent structure of the latent HR-HS image and employ optimization approaches to solve it. However, suitable priors can differ from scene to scene, and identifying the proper prior for a specific scene is difficult. This study investigates a deep convolutional neural network (DCNN)-based method for automatic prior learning, and it proposes a novel fusion DCNN model with multi-scale spatial and spectral learning for effectively merging HR-RGB and LR-HS images. Specifically, we construct a U-shaped network architecture that gradually reduces the feature sizes of the HR-RGB image (encoder side) and increases the feature sizes of the LR-HS image (decoder side), and we fuse the HR spatial structure and the detailed spectral attributes at multiple scales to tackle the large spatial-resolution difference between the observed HR-RGB and LR-HS images. We then employ multi-level cost functions for the proposed multi-scale learning network to alleviate the vanishing-gradient problem in the long propagation path. In addition, to further improve the reconstruction performance of the HR-HS image, we refine the predicted HR-HS image using an alternating back-projection method that minimizes the reconstruction errors of the observed LR-HS and HR-RGB images. Experiments on three benchmark HS image datasets demonstrate the superiority of the proposed method in both quantitative values and visual quality.
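
The alternating back-projection refinement mentioned at the end of the abstract can be sketched as follows, assuming a simple average-pooling spatial downsampler and a known RGB spectral-response matrix R; the actual operators used in the paper may differ.

    import numpy as np

    def spatial_down(x, s):
        # Average-pool an (H, W, B) cube by factor s (assumes H and W divisible by s).
        h, w, b = x.shape
        return x.reshape(h // s, s, w // s, s, b).mean(axis=(1, 3))

    def spatial_up(x, s):
        # Nearest-neighbour upsampling by factor s.
        return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

    def back_project(hr_hs, lr_hs, hr_rgb, R, s, iters=10, step=0.5):
        """Alternately reduce the LR-HS and HR-RGB reconstruction errors of a
        predicted HR-HS cube of shape (H, W, B); R has shape (3, B)."""
        x = hr_hs.copy()
        for _ in range(iters):
            x += step * spatial_up(lr_hs - spatial_down(x, s), s)   # LR-HS consistency
            x += step * (hr_rgb - x @ R.T) @ R                      # HR-RGB consistency
        return x
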
APA, Harvard, Vancouver, ISO, and other styles
36

Yu, Thomas, Tom Hilbert, Gian Franco Piredda, Arun Joseph, Gabriele Bonanno, Salim Zenkhri, Patrick Omoumi, et al. "Validation and Generalizability of Self-Supervised Image Reconstruction Methods for Undersampled MRI." Machine Learning for Biomedical Imaging 1, September 2022 (September 13, 2022): 1–31. http://dx.doi.org/10.59275/j.melba.2022-6g33.

Full text
Abstract:
Deep learning methods have become the state of the art for undersampled MR reconstruction. Particularly for cases where it is infeasible or impossible to acquire fully sampled ground-truth data, self-supervised machine learning methods for reconstruction are increasingly used. However, potential issues in the validation of such methods, as well as their generalizability, remain underexplored. In this paper, we investigate important aspects of the validation of self-supervised algorithms for reconstruction of undersampled MR images: quantitative evaluation of prospective reconstructions, potential differences between prospective and retrospective reconstructions, suitability of commonly used quantitative metrics, and generalizability. Two self-supervised algorithms based on self-supervised denoising and the deep image prior were investigated. These methods are compared to a least-squares fitting and a compressed sensing reconstruction using in vivo and phantom data. Their generalizability was tested with prospectively undersampled data from experimental conditions different from those used in training. We show that prospective reconstructions can exhibit significant distortion relative to retrospective reconstructions/ground truth. Furthermore, pixel-wise quantitative metrics may not capture differences in perceptual quality accurately, in contrast to a perceptual metric. In addition, all methods showed potential for generalization; however, generalizability is more affected by changes in anatomy/contrast than by other changes. We further showed that no-reference image metrics correspond well with human ratings of image quality when studying generalizability. Finally, we showed that a well-tuned compressed sensing reconstruction and learned denoising perform similarly on all data. The datasets acquired for this paper will be made available online; see https://www.melba-journal.org/papers/2022:022.html for details.
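
To illustrate what a retrospective experiment looks like in practice, here is a minimal sketch of retrospective Cartesian undersampling with a zero-filled baseline reconstruction; it assumes numpy and a single-coil magnitude image, and is a toy baseline rather than any of the methods evaluated in the paper.

    import numpy as np

    def undersample(kspace, accel=4, center=16):
        """Keep every `accel`-th phase-encode line plus a fully sampled centre."""
        ny = kspace.shape[0]
        mask = np.zeros(ny, dtype=bool)
        mask[::accel] = True
        mask[ny // 2 - center // 2 : ny // 2 + center // 2] = True
        return kspace * mask[:, None], mask

    image = np.random.rand(256, 256)                      # stand-in fully sampled image
    kspace = np.fft.fftshift(np.fft.fft2(image))          # simulated k-space
    k_under, mask = undersample(kspace)                   # retrospective undersampling
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
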
APA, Harvard, Vancouver, ISO, and other styles
37

Webb, E. K., S. Robson, and R. Evans. "QUANTIFYING DEPTH OF FIELD AND SHARPNESS FOR IMAGE-BASED 3D RECONSTRUCTION OF HERITAGE OBJECTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 911–18. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-911-2020.

Full text
Abstract:
Abstract. Image-based 3D reconstruction processing tools assume sharp focus across the entire object being imaged, but depth of field (DOF) can be a limitation when imaging small to medium sized objects, resulting in variation of image sharpness with distance from the camera. While DOF is well understood in the context of photographic imaging and is considered during acquisition for image-based 3D reconstruction, an “acceptable” level of sharpness and associated “circle of confusion” has not yet been quantified for the 3D case. The work described in this paper contributes to the understanding and quantification of acceptable sharpness by providing evidence of the influence of DOF on the 3D reconstruction of small to medium sized museum objects. Spatial frequency analysis using established collections photography imaging guidelines and targets is used to connect input image quality with 3D reconstruction output quality. Combining quantitative spatial frequency analysis with metrics from a series of comparative 3D reconstructions provides insights into the connection between DOF and output model quality. Lab-based quantification of DOF is used to investigate the influence of sharpness on the output 3D reconstruction and to better understand the effects of lens aperture, camera to object surface angle, and taking distance. The outcome provides evidence of the role of DOF in image-based 3D reconstruction, and it is briefly shown how masks derived from image content and depth maps can be used to remove unsharp image content and optimise structure-from-motion (SfM) and multi-view stereo (MVS) workflows.
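
For reference, the standard thin-lens depth-of-field relations that underpin this kind of analysis can be computed with a short sketch; the focal length, f-number, circle of confusion and focus distance below are illustrative values, not the paper's test settings.

    def depth_of_field(f, N, c, s):
        """Near/far limits of acceptable sharpness and DOF, all lengths in mm."""
        H = f * f / (N * c) + f                 # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)    # near limit of acceptable sharpness
        far = s * (H - f) / (H - s) if s < H else float("inf")
        return near, far, far - near

    near, far, dof = depth_of_field(f=60.0, N=8.0, c=0.019, s=500.0)
    print(f"near {near:.1f} mm, far {far:.1f} mm, DOF {dof:.1f} mm")
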
APA, Harvard, Vancouver, ISO, and other styles
38

Yaqub, Muhammad, Feng Jinchao, Kaleem Arshid, Shahzad Ahmed, Wenqian Zhang, Muhammad Zubair Nawaz, and Tariq Mahmood. "Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities." Computational and Mathematical Methods in Medicine 2022 (June 16, 2022): 1–18. http://dx.doi.org/10.1155/2022/8750648.

Full text
Abstract:
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from data acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Owing to the performance of deep learning models in a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical images. MRI and CT are among the most widely used imaging modalities for identifying and diagnosing different diseases. This study surveys a number of deep learning image reconstruction approaches and gives a comprehensive review of the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Rongqing, Sabine Krueger-Ziolek, Alberto Battistel, Stefan J. Rupitsch, and Knut Moeller. "Effect of a Patient-Specific Structural Prior Mask on Electrical Impedance Tomography Image Reconstructions." Sensors 23, no. 9 (May 7, 2023): 4551. http://dx.doi.org/10.3390/s23094551.

Full text
Abstract:
Electrical Impedance Tomography (EIT) is a low-cost imaging method which reconstructs two-dimensional cross-sectional images, visualising the impedance change within the thorax. However, the reconstruction of an EIT image is an ill-posed inverse problem. In addition, blurring, anatomical alignment, and reconstruction artefacts can hinder the interpretation of EIT images. In this contribution, we introduce a patient-specific structural prior mask into the EIT reconstruction process, with the aim of improving image interpretability. Such a prior mask ensures that only conductivity changes within the lung regions are reconstructed. To evaluate the influence of the introduced structural prior mask, we conducted numerical simulations covering two scenarios with different ventilation statuses and varying scales of atelectasis. Quantitative analysis, including the reconstruction error and figures of merit, was applied in the evaluation procedure. The results show that the morphological structures of the lungs introduced by the mask are preserved in the EIT reconstructions and the reconstruction artefacts are decreased, reducing the reconstruction error by 25.9% and 17.7%, respectively, in the two EIT algorithms included in this contribution. The use of the structural prior mask conclusively improves the interpretability of the EIT images, which could facilitate better diagnosis and decision-making in clinical settings.
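
One common way to use such a structural prior is to restrict a linearised (one-step Gauss-Newton) reconstruction to the masked elements; the sketch below assumes numpy, a precomputed sensitivity (Jacobian) matrix and random stand-in data, and is not one of the specific algorithms evaluated in the paper.

    import numpy as np

    n_meas, n_elem = 208, 1024
    J = np.random.randn(n_meas, n_elem)          # sensitivity (Jacobian) matrix, stand-in
    dv = np.random.randn(n_meas)                 # measured voltage difference data, stand-in
    lung_mask = np.zeros(n_elem, dtype=bool)     # patient-specific prior: lung elements only
    lung_mask[200:600] = True

    Jm = J[:, lung_mask]                         # keep only columns inside the lungs
    lam = 1e-2                                   # Tikhonov regularisation weight
    dsigma_lung = np.linalg.solve(Jm.T @ Jm + lam * np.eye(Jm.shape[1]), Jm.T @ dv)

    dsigma = np.zeros(n_elem)                    # conductivity change is zero outside
    dsigma[lung_mask] = dsigma_lung              # the masked lung regions by design
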
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Huihong, and Shiming Li. "Simulation of 3D Image Reconstruction in Rigid body Motion." MATEC Web of Conferences 232 (2018): 02002. http://dx.doi.org/10.1051/matecconf/201823202002.

Full text
Abstract:
3D image reconstruction under rigid-body motion is affected by the motion itself and by visual displacement, which lowers the quality of the 3D reconstruction and increases noise. To improve the quality of 3D image reconstruction under rigid-body motion, this paper proposes a 3D image reconstruction technique based on corner detection and edge contour feature extraction. Region scanning and point scanning are combined to scan the image of the moving rigid body. A wavelet denoising method is used to reduce the noise of the 3D image, and the edge contour features of the image are extracted. A sparse edge-pixel fusion method is used to decompose the features of the 3D image under rigid-body motion, and an irregular triangulation method is used to extract and reconstruct the information features of the rigid-body 3D images. The reconstructed feature points are accurately calibrated with the corner detection method to realize effective reconstruction of the 3D images. The simulation results show that the method achieves good reconstruction quality, a high SNR in the output image, and a high registration rate of the reconstructed feature points, indicating good overall 3D reconstruction performance.
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Yunyun, Xuxu Qin, and Boying Wu. "Median Filter Based Compressed Sensing Model with Application to MR Image Reconstruction." Mathematical Problems in Engineering 2018 (September 16, 2018): 1–9. http://dx.doi.org/10.1155/2018/8316194.

Full text
Abstract:
Magnetic resonance imaging (MRI) has become a helpful technique and has developed rapidly in clinical medicine and diagnosis. Magnetic resonance (MR) images can display soft tissue structures more clearly and are important for doctors to diagnose diseases. However, the long acquisition and transformation time of MR images may limit their application in clinical diagnosis. Compressed sensing methods have been widely used to faithfully reconstruct MR images and greatly shorten the scanning and transformation time. In this paper, we present a compressed sensing model based on a median filter for MR image reconstruction. By combining a total variation term, a median filter term, and a data fitting term, we first propose a minimization problem for image reconstruction. The median filter term allows our method to eliminate additional noise from the reconstruction process and obtain much clearer reconstruction results. One key point of the proposed method lies in the fact that both the total variation term and the median filter term are formulated in the L1 norm. We then apply the split Bregman technique for fast minimization and give an efficient algorithm. Finally, we apply our method to a number of MR images and compare it with a related method. Reconstruction results and comparisons demonstrate the accuracy and efficiency of the proposed model.
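
A plausible form of the minimisation problem described above is sketched below; the weights and the exact splitting are assumptions for illustration, not necessarily the paper's formulation. Here u is the image to reconstruct, med(u) a median-filtered copy of u, F the Fourier transform, R the k-space sampling mask and f the measured data:

    \min_{u}\; \|\nabla u\|_{1} \;+\; \alpha\,\|u - \mathrm{med}(u)\|_{1} \;+\; \frac{\mu}{2}\,\|R\mathcal{F}u - f\|_{2}^{2}

The two L1 terms are exactly the kind of non-smooth terms that the split Bregman technique handles efficiently by introducing auxiliary variables.
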
APA, Harvard, Vancouver, ISO, and other styles
42

Göppel, Simon, Jürgen Frikel, and Markus Haltmeier. "Feature Reconstruction from Incomplete Tomographic Data without Detour." Mathematics 10, no. 8 (April 15, 2022): 1318. http://dx.doi.org/10.3390/math10081318.

Full text
Abstract:
In this paper, we consider the problem of feature reconstruction from incomplete X-ray CT data. Such incomplete data problems occur when the number of measured X-rays is restricted either to limit radiation exposure or due to practical constraints, making the detection of certain rays challenging. Since image reconstruction from incomplete data is a severely ill-posed (unstable) problem, the reconstructed images may suffer from characteristic artefacts or missing features, thus significantly complicating subsequent image processing tasks (e.g., edge detection or segmentation). In this paper, we introduce a framework for the robust reconstruction of convolutional image features directly from CT data without the need to compute a reconstructed image first. Within our framework, we use non-linear variational regularization methods that can be adapted to a variety of feature reconstruction tasks and to several limited data situations. The proposed variational regularization method minimizes an energy functional that is the sum of a feature-dependent data-fitting term and an additional penalty accounting for specific properties of the features. In our numerical experiments, we consider instances of edge reconstruction from angularly under-sampled data and show that our approach is able to reliably reconstruct feature maps in this case.
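
In generic form, the variational problem described here can be written as follows; this is a schematic sketch with symbols chosen for illustration, not a formulation copied from the paper. With z the feature map to be reconstructed, A a forward operator relating features to the incomplete CT data y_Ω, D a data-fitting term and P a feature-specific penalty:

    \min_{z}\; \mathcal{D}\!\left(\mathcal{A}z,\; y_{\Omega}\right) \;+\; \lambda\,\mathcal{P}(z)

For edge reconstruction, P would typically promote sparsity of the feature map (for example an L1 norm), which stabilises the severely ill-posed limited-data problem.
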
APA, Harvard, Vancouver, ISO, and other styles
43

Fervers, Philipp, Charlotte Zaeske, Philip Rauen, Andra-Iza Iuga, Jonathan Kottlors, Thorsten Persigehl, Kristina Sonnabend, Kilian Weiss, and Grischa Bratke. "Conventional and Deep-Learning-Based Image Reconstructions of Undersampled k-Space Data of the Lumbar Spine Using Compressed Sensing in MRI: A Comparative Study on 20 Subjects." Diagnostics 13, no. 3 (January 23, 2023): 418. http://dx.doi.org/10.3390/diagnostics13030418.

Full text
Abstract:
Compressed sensing accelerates magnetic resonance imaging (MRI) acquisition by undersampling of the k-space. Yet, excessive undersampling impairs image quality when using conventional reconstruction techniques. Deep-learning-based reconstruction methods might allow for stronger undersampling and thus faster MRI scans without loss of crucial image quality. We compared imaging approaches using parallel imaging (SENSE), a combination of parallel imaging and compressed sensing (COMPRESSED SENSE, CS), and a combination of CS and a deep-learning-based reconstruction (CS AI) on raw k-space data acquired at different undersampling factors. 3D T2-weighted images of the lumbar spine were obtained from 20 volunteers, including a 3D sequence (standard SENSE), as provided by the manufacturer, as well as accelerated 3D sequences (undersampling factors 4.5, 8, and 11) reconstructed with CS and CS AI. Subjective rating was performed using a 5-point Likert scale to evaluate anatomical structures and overall image impression. Objective rating was performed using apparent signal-to-noise and contrast-to-noise ratio (aSNR and aCNR) as well as root mean square error (RMSE) and structural-similarity index (SSIM). The CS AI 4.5 sequence was subjectively rated better than the standard in several categories and deep-learning-based reconstructions were subjectively rated better than conventional reconstructions in several categories for acceleration factors 8 and 11. In the objective rating, only aSNR of the bone showed a significant tendency towards better results of the deep-learning-based reconstructions. We conclude that CS in combination with deep-learning-based image reconstruction allows for stronger undersampling of k-space data without loss of image quality, and thus has potential for further scan time reduction.
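
The objective metrics named here (RMSE and SSIM) can be reproduced with a few lines; the sketch assumes scikit-image, and the two images are random stand-ins rather than the study's reconstructions.

    import numpy as np
    from skimage.metrics import structural_similarity

    reference = np.random.rand(320, 320)                    # stand-in reference image
    accelerated = reference + 0.05 * np.random.randn(320, 320)  # stand-in accelerated reconstruction

    rmse = np.sqrt(np.mean((reference - accelerated) ** 2))
    ssim = structural_similarity(reference, accelerated,
                                 data_range=reference.max() - reference.min())
    print(f"RMSE {rmse:.4f}, SSIM {ssim:.3f}")
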
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Bingchuan, Tianxiang Ma, Peng Zhang, Miao Hua, Wei Liu, Qian He, and Zili Yi. "ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 1269–77. http://dx.doi.org/10.1609/aaai.v37i1.25210.

Full text
Abstract:
The StyleGAN family succeeds in high-fidelity image generation and allows for flexible and plausible editing of generated images by manipulating the semantic-rich latent style space. However, projecting a real image into its latent space encounters an inherent trade-off between inversion quality and editability. Existing encoder-based or optimization-based StyleGAN inversion methods attempt to mitigate the trade-off but achieve limited performance. To fundamentally resolve this problem, we propose a novel two-phase framework that designates two separate networks to tackle editing and reconstruction respectively, instead of balancing the two. Specifically, in Phase I, a W-space-oriented StyleGAN inversion network is trained and used to perform image inversion and editing, which assures editability but sacrifices reconstruction quality. In Phase II, a carefully designed rectifying network is utilized to rectify the inversion errors and perform ideal reconstruction. Experimental results show that our approach yields near-perfect reconstructions without sacrificing editability, thus allowing accurate manipulation of real images. Further, we evaluate the performance of our rectifying network and observe strong generalizability towards unseen manipulation types and out-of-domain images.
APA, Harvard, Vancouver, ISO, and other styles
45

HILD, MICHAEL, and KAZUYUKI NISHIJIMA. "RECONSTRUCTION OF 3D SPACE STRUCTURE WITH A ROTATIONAL IMAGING SYSTEM." International Journal of Image and Graphics 02, no. 02 (April 2002): 269–85. http://dx.doi.org/10.1142/s0219467802000615.

Full text
Abstract:
We examine whether the accuracy of 3D space point reconstruction from image pairs can be improved by using rotational imaging. A single camera acquires image pairs at two positions that are separated by a rotation and extracts feature points from both images. Matches of feature point pairs from both images are computed, and then the 3D space coordinates are reconstructed based on the disparity of the matched points. The system carries out optimal rotations to obtain low-error reconstructions, using a specially designed control algorithm. The camera can be rotated 360 degrees around a vertical space axis and approximately 270 degrees around a horizontal space axis. The characteristics of the system are described based on simulations and it is shown, through experiments with a prototype, that this system has low matching error, and can cope with occlusions and periodic image structure. The system is adequate for reconstruction of static scenes.
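
Disparity-based reconstruction of a matched point pair reduces to standard triangulation; the sketch below assumes a rectified pair with baseline B and focal length f in pixels (a simplification of the rotational geometry used by the system), and the numeric values are illustrative only.

    def triangulate(u_left, u_right, v, f=800.0, B=120.0, cx=320.0, cy=240.0):
        """Recover a 3D point from a matched pixel pair in a rectified image pair."""
        d = u_left - u_right                 # disparity in pixels
        Z = f * B / d                        # depth, in the same unit as the baseline B
        X = (u_left - cx) * Z / f
        Y = (v - cy) * Z / f
        return X, Y, Z

    print(triangulate(u_left=350.0, u_right=330.0, v=260.0))
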
APA, Harvard, Vancouver, ISO, and other styles
46

Park, Hyeseung, and Seungchul Park. "Improving Monocular Depth Estimation with Learned Perceptual Image Patch Similarity-Based Image Reconstruction and Left–Right Difference Image Constraints." Electronics 12, no. 17 (September 4, 2023): 3730. http://dx.doi.org/10.3390/electronics12173730.

Full text
Abstract:
This paper introduces a novel approach for self-supervised monocular depth estimation. The model is trained on stereo–image (left–right pair) data and incorporates carefully designed perceptual image quality assessment-based loss functions for image reconstruction and left–right image difference. The fidelity of the reconstructed images, obtained by warping the input images using the predicted disparity maps, significantly influences the accuracy of depth estimation in self-supervised monocular depth networks. The suggested LPIPS (Learned Perceptual Image Patch Similarity)-based evaluation of image reconstruction accurately emulates human perceptual mechanisms to quantify the quality of reconstructed images, serving as an image reconstruction loss. Consequently, it facilitates the gradual convergence of the reconstructed images toward greater similarity with the target images during the training process. Stereo image pairs often exhibit slight discrepancies in brightness, contrast, color, and camera angle due to factors like lighting conditions and camera calibration inaccuracies. These factors limit the improvement of image reconstruction quality. To address this, the left–right difference image loss is introduced, aimed at aligning the disparities between the actual left–right image pair and the reconstructed left–right image pair. Because distant pixel values tend to approach zero in the difference images derived from the left and right source images of stereo pairs, this loss progressively steers the distant pixel values of the reconstructed difference images toward zero. Hence, the use of this loss has demonstrated its efficacy in mitigating distortions in distant regions while enhancing overall performance. The primary objective of this study is to introduce and validate the effectiveness of LPIPS-based image reconstruction and left–right difference image losses in the context of monocular depth estimation. To this end, the proposed loss functions have been seamlessly integrated into a straightforward single-task stereo–image learning framework, incorporating simple hyperparameters. Notably, our approach achieves superior results compared to other state-of-the-art methods, even those adopting more intricate hybrid data and multi-task learning strategies.
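
As a rough illustration of the two losses, the sketch below uses the publicly available lpips PyTorch package for the perceptual reconstruction term and an L1 term on the left–right difference images; the tensors are random stand-ins scaled to [-1, 1], and the weighting is an assumption, not the paper's configuration.

    import torch
    import torch.nn.functional as F
    import lpips

    perceptual = lpips.LPIPS(net='vgg')                   # LPIPS perceptual metric

    left        = torch.rand(1, 3, 256, 512) * 2 - 1      # target left image
    right       = torch.rand(1, 3, 256, 512) * 2 - 1      # target right image
    left_recon  = torch.rand(1, 3, 256, 512) * 2 - 1      # left image warped from the right view
    right_recon = torch.rand(1, 3, 256, 512) * 2 - 1      # right image warped from the left view

    # LPIPS-based image reconstruction loss on both views.
    recon_loss = perceptual(left_recon, left).mean() + perceptual(right_recon, right).mean()
    # Left-right difference image loss: match the reconstructed to the actual difference image.
    lr_diff_loss = F.l1_loss(left_recon - right_recon, left - right)
    total = recon_loss + 0.5 * lr_diff_loss               # illustrative weighting only
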
APA, Harvard, Vancouver, ISO, and other styles
47

Njølstad, Tormund, Anselm Schulz, Johannes C. Godt, Helga M. Brøgger, Cathrine K. Johansen, Hilde K. Andersen, and Anne Catrine T. Martinsen. "Improved image quality in abdominal computed tomography reconstructed with a novel Deep Learning Image Reconstruction technique – initial clinical experience." Acta Radiologica Open 10, no. 4 (April 2021): 205846012110083. http://dx.doi.org/10.1177/20584601211008391.

Full text
Abstract:
Background A novel Deep Learning Image Reconstruction (DLIR) technique for computed tomography has recently received clinical approval. Purpose To assess image quality in abdominal computed tomography reconstructed with DLIR, and compare with standardly applied iterative reconstruction. Material and methods Ten abdominal computed tomography scans were reconstructed with iterative reconstruction and DLIR of medium and high strength, with 0.625 mm and 2.5 mm slice thickness. Image quality was assessed using eight visual grading criteria in a side-by-side comparative setting. All series were presented twice to evaluate intraobserver agreement. Reader scores were compared using univariate logistic regression. Image noise and contrast-to-noise ratio were calculated for quantitative analyses. Results For 2.5 mm slice thickness, DLIR images were more frequently perceived as equal or better than iterative reconstruction across all visual grading criteria (for both DLIR of medium and high strength, p < 0.001). Correspondingly, DLIR images were more frequently perceived as better (as opposed to equal or in favor of iterative reconstruction) for visual reproduction of liver parenchyma, intrahepatic vascular structures as well as overall impression of image noise and texture (p < 0.001). This improved image quality was also observed for 0.625 mm slice images reconstructed with DLIR of high strength when directly comparing to traditional iterative reconstruction in 2.5 mm slices. Image noise was significantly lower and contrast-to-noise ratio measurements significantly higher for images reconstructed with DLIR compared to iterative reconstruction (p < 0.01). Conclusions Abdominal computed tomography images reconstructed using a DLIR technique shows improved image quality when compared to standardly applied iterative reconstruction across a variety of clinical image quality criteria.
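
Noise and contrast-to-noise ratio, as used in the quantitative analysis here, can be computed along the following lines; the ROI samples are random stand-ins, and the particular CNR definition (contrast divided by the noise of the second ROI) is one common convention that may differ from the study's.

    import numpy as np

    roi_liver = np.random.normal(60, 10, 500)    # HU samples from a liver ROI (stand-in)
    roi_fat   = np.random.normal(-100, 12, 500)  # HU samples from a subcutaneous fat ROI (stand-in)

    noise = roi_liver.std()                                            # image noise estimate
    cnr = abs(roi_liver.mean() - roi_fat.mean()) / roi_fat.std()       # contrast-to-noise ratio
    print(f"noise {noise:.1f} HU, CNR {cnr:.1f}")
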
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Liming, Lei Wang, Xu du, and Fanbo Meng. "CAD-Aided 3D Reconstruction of Intelligent Manufacturing Image Based on Time Series." Scientific Programming 2022 (March 11, 2022): 1–11. http://dx.doi.org/10.1155/2022/9022563.

Full text
Abstract:
To improve the three-dimensional (3D) reconstruction of intelligent manufacturing images and reduce the reconstruction time, a new CAD-aided 3D reconstruction method for intelligent manufacturing images based on time series is proposed. A Kinect sensor is used to collect depth image data and convert it into 3D point cloud coordinates. The collected point cloud data are divided into regions, and different point cloud denoising algorithms are used to filter and denoise the divided regions. With the help of CAD, the FLANN matching algorithm is used to extract feature points of the time-series images and complete image matching. Three-dimensional reconstruction of the sparse and dense point clouds is then carried out to complete the 3D reconstruction of the intelligent manufacturing images. The experimental results show that the image PSNR of this method is always above 52 dB and the maximum reconstruction time is 4.9 s, indicating better 3D reconstruction of intelligent manufacturing images and higher practical application value.
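
The FLANN-based matching step between consecutive time-series frames can be sketched with OpenCV; ORB descriptors with an LSH index are used here as a reasonable stand-in, since the abstract does not specify the descriptor, and the frames are random placeholders.

    import cv2
    import numpy as np

    frame1 = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in frame at time t
    frame2 = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in frame at time t+1

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # FLANN with an LSH index (algorithm=6) for binary ORB descriptors.
    matcher = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
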
APA, Harvard, Vancouver, ISO, and other styles
49

Lium, Ola, Yong Bin Kwon, Antonios Danelakis, and Theoharis Theoharis. "Robust 3D Face Reconstruction Using One/Two Facial Images." Journal of Imaging 7, no. 9 (August 30, 2021): 169. http://dx.doi.org/10.3390/jimaging7090169.

Full text
Abstract:
Being able to robustly reconstruct 3D faces from 2D images is a topic of pivotal importance for a variety of computer vision branches, such as face analysis and face recognition, whose applications are steadily growing. Unlike 2D facial images, 3D facial data are less affected by lighting conditions and pose. Recent advances in the computer vision field have enabled the use of convolutional neural networks (CNNs) for the production of 3D facial reconstructions from 2D facial images. This paper proposes a novel CNN-based method which targets 3D facial reconstruction from two facial images, one in front and one from the side, as are often available to law enforcement agencies (LEAs). The proposed CNN was trained on both synthetic and real facial data. We show that the proposed network was able to predict 3D faces in the MICC Florence dataset with greater accuracy than the current state-of-the-art. Moreover, a scheme for using the proposed network in cases where only one facial image is available is also presented. This is achieved by introducing an additional network whose task is to generate a rotated version of the original image, which, in conjunction with the original facial image, makes up the image pair used for reconstruction via the previous method.
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Xiaoqing, Zhengguo Tan, Nick Scholand, Volkert Roeloffs, and Martin Uecker. "Physics-based reconstruction methods for magnetic resonance imaging." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2200 (May 10, 2021): 20200196. http://dx.doi.org/10.1098/rsta.2020.0196.

Full text
Abstract:
Conventional magnetic resonance imaging (MRI) is hampered by long scan times and only qualitative image contrasts that prohibit a direct comparison between different systems. To address these limitations, model-based reconstructions explicitly model the physical laws that govern the MRI signal generation. By formulating image reconstruction as an inverse problem, quantitative maps of the underlying physical parameters can then be extracted directly from efficiently acquired k-space signals without intermediate image reconstruction—addressing both shortcomings of conventional MRI at the same time. This review will discuss basic concepts of model-based reconstructions and report on our experience in developing several model-based methods over the last decade using selected examples that are provided complete with data and code. This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 1’.
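
Schematically, such a model-based reconstruction poses parameter estimation as a regularised nonlinear inverse problem; the notation below is a generic sketch rather than a formulation copied from the review. With x the physical parameter maps, F the forward operator combining the signal model with the MRI encoding (coil sensitivities and Fourier sampling), y the acquired k-space data and R a regulariser:

    \hat{x} \;=\; \arg\min_{x}\; \frac{1}{2}\,\| F(x) - y \|_{2}^{2} \;+\; \lambda\, R(x)

Quantitative maps are then obtained directly from the measured k-space data, without an intermediate image-reconstruction step.
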
APA, Harvard, Vancouver, ISO, and other styles