To see the other types of publications on this topic, follow the link: Visual image reconstruction.

Journal articles on the topic "Visual image reconstruction"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Visual image reconstruction".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Nestor, Adrian, David C. Plaut, and Marlene Behrmann. "Feature-based face representations and image reconstruction from behavioral and neural data." Proceedings of the National Academy of Sciences 113, no. 2 (December 28, 2015): 416–21. http://dx.doi.org/10.1073/pnas.1514551112.

Abstract:
The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach.
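
The reverse-correlation step described here — deriving a visual feature per measurement channel by weighting stimuli with the responses they evoke, then recombining those features for a new neural pattern — can be sketched in a few lines of numpy. All array names, shapes, and data below are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# Hypothetical data: 200 face images flattened to pixel vectors and the
# corresponding multi-voxel activity patterns (names/shapes are assumptions).
rng = np.random.default_rng(0)
stimuli = rng.random((200, 64 * 64))         # n_images x n_pixels
responses = rng.standard_normal((200, 50))   # n_images x n_voxels

# Reverse-correlation-style features: for each voxel, average the
# mean-centred stimuli weighted by that voxel's (z-scored) response.
stim_centred = stimuli - stimuli.mean(axis=0)
resp_z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
features = resp_z.T @ stim_centred / len(stimuli)   # n_voxels x n_pixels

# Reconstruct a held-out image as a response-weighted sum of the features.
new_pattern = rng.standard_normal(50)
reconstruction = stimuli.mean(axis=0) + new_pattern @ features
print(reconstruction.shape)  # (4096,)
```
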
2

Bae, Joungeun, and Hoon Yoo. "Image Enhancement for Computational Integral Imaging Reconstruction via Four-Dimensional Image Structure." Sensors 20, no. 17 (August 25, 2020): 4795. http://dx.doi.org/10.3390/s20174795.

Abstract:
This paper describes the image enhancement of a computational integral imaging reconstruction method via reconstructing a four-dimensional (4-D) image structure. A computational reconstruction method for high-resolution three-dimensional (3-D) images is highly required in 3-D applications such as 3-D visualization and 3-D object recognition. To improve the visual quality of reconstructed images, we introduce an adjustable parameter to produce a group of 3-D images from a single elemental image array. The adjustable parameter controls overlapping in back projection with a transformation of cropping and translating elemental images. It turns out that the new parameter is an independent parameter from the reconstruction position to reconstruct a 4-D image structure with four axes of x, y, z, and k. The 4-D image structure of the proposed method provides more visual information than existing methods. Computer simulations and optical experiments are carried out to show the feasibility of the proposed method. The results indicate that our method enhances the image quality of 3-D images by providing a 4-D image structure with the adjustable parameter.
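
The back-projection idea behind computational integral imaging reconstruction — overlapping shifted elemental images and averaging them, with the amount of overlap acting as an extra reconstruction parameter — can be illustrated with a toy numpy sketch. The shift parameter below stands in for the paper's adjustable parameter and is an assumption, not the authors' exact formulation.

```python
import numpy as np

def reconstruct_plane(elemental, shift):
    """Toy computational integral imaging reconstruction: overlap the
    elemental images with a per-index pixel shift and average them."""
    rows, cols, h, w = elemental.shape
    out_h, out_w = h + shift * (rows - 1), w + shift * (cols - 1)
    acc = np.zeros((out_h, out_w))
    cnt = np.zeros((out_h, out_w))
    for i in range(rows):
        for j in range(cols):
            y, x = i * shift, j * shift
            acc[y:y + h, x:x + w] += elemental[i, j]
            cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)

# 5x5 array of 32x32 elemental images; different shifts give different planes.
eia = np.random.rand(5, 5, 32, 32)
planes = [reconstruct_plane(eia, s) for s in (2, 4, 8)]
print([p.shape for p in planes])
```
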
3

Wang, Xia, and Qianqian Hu. "Visual Truth and Image Manipulation: Visual Ethical Anomie and Reconstruction of Digital Photography." SHS Web of Conferences 155 (2023): 03018. http://dx.doi.org/10.1051/shsconf/202315503018.

Abstract:
Digital photography has impacted the visual truth in terms of image operation technology. The paper analyzes the performance of image manipulation in digital photography, and asserts that the operation image’s resolution of visual truth can be described by three aspects: the fact deviation, the reconstruction of the situation, the trust crisis, the filter survival, the second reprint, and the value change. Consequently, there are three visual ethical problems associated with the manipulation of images by technology, the siege of images on people, and distortions of social order caused by the manipulation of images by technology. Thus, it is suggested that visual ethics be reconstructed from three perspectives: reconstructing the dominant role of human beings in the digital era, returning to the original significance of visual images, and constructing multiple visual structures.
4

Meng, Lu, and Chuanhao Yang. "Dual-Guided Brain Diffusion Model: Natural Image Reconstruction from Human Visual Stimulus fMRI." Bioengineering 10, no. 10 (September 24, 2023): 1117. http://dx.doi.org/10.3390/bioengineering10101117.

Abstract:
The reconstruction of visual stimuli from fMRI signals, which record brain activity, is a challenging task with crucial research value in the fields of neuroscience and machine learning. Previous studies tend to emphasize reconstructing pixel-level features (contours, colors, etc.) or semantic features (object category) of the stimulus image, but typically, these properties are not reconstructed together. In this context, we introduce a novel three-stage visual reconstruction approach called the Dual-guided Brain Diffusion Model (DBDM). Initially, we employ the Very Deep Variational Autoencoder (VDVAE) to reconstruct a coarse image from fMRI data, capturing the underlying details of the original image. Subsequently, the Bootstrapping Language-Image Pre-training (BLIP) model is utilized to provide a semantic annotation for each image. Finally, the image-to-image generation pipeline of the Versatile Diffusion (VD) model is utilized to recover natural images from the fMRI patterns guided by both visual and semantic information. The experimental results demonstrate that DBDM surpasses previous approaches in both qualitative and quantitative comparisons. In particular, the best performance is achieved by DBDM in reconstructing the semantic details of the original image; the Inception, CLIP and SwAV distances are 0.611, 0.225 and 0.405, respectively. This confirms the efficacy of our model and its potential to advance visual decoding research.
5

Kumar, L. Ravi, K. G. S. Venkatesan, and S. Ravichandran. "Cloud-enabled Internet of Things Medical Image Processing Compressed Sensing Reconstruction." International Journal of Scientific Methods in Intelligence Engineering Networks 01, no. 04 (2023): 11–21. http://dx.doi.org/10.58599/ijsmien.2023.1402.

Abstract:
Deep learning compresses medical image processing in IoMT. CS-MRI acquires quickly. It has various medicinal uses due to its advantages. This lowers motion artifacts and contrast washout. Reduces patient pressure and scanning costs. CS-MRI avoids the Nyquist-Shannon sampling barrier. Parallel imagingbased fast MRI uses many coils to reconstruct MRI images with less raw data. Parallel imaging enables rapid MRI. This research developed a deep learning-based method for reconstructing CS-MRI images that bridges the gap between typical non-learning algorithms that employ data from a single image and enormous training datasets. Conventional approaches only reconstruct CS-MRI data from one picture. Reconstructing CS-MRI images. CS-GAN is recommended for CS-MRI reconstruction. For success. Refinement learning stabilizes our C-GAN-based generator, which eliminates aliasing artifacts. This improved newly produced data. Product quality increased. Adversarial and information loss recreated the picture. We should protect the image’s texture and edges. Picture and frequency domain data establish consistency. We want frequency and picture domain information to match. It offers visual domain data. Traditional CS-MRI reconstruction and deep learning were used in our broad comparison research. C-GAN enhances reconstruction while conserving perceptual visual information. MRI image reconstruction takes 5 milliseconds, allowing real-time analysis.
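
The abstract's point about enforcing consistency in both the image domain and the frequency (k-space) domain can be illustrated with a simple combined loss; the function below is a generic numpy sketch under assumed inputs, not the paper's C-GAN loss.

```python
import numpy as np

def dual_domain_loss(recon, reference, alpha=0.5):
    """Combined image-domain and k-space (frequency-domain) L1 loss,
    a simplified stand-in for the consistency terms described above."""
    img_term = np.mean(np.abs(recon - reference))
    freq_term = np.mean(np.abs(np.fft.fft2(recon) - np.fft.fft2(reference)))
    return alpha * img_term + (1 - alpha) * freq_term

a = np.random.rand(128, 128)
b = a + 0.01 * np.random.randn(128, 128)
print(dual_domain_loss(b, a))
```
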
6

Yang, Qi, and Jong Hoon Yang. "Virtual Reconstruction of Visually Conveyed Images under Multimedia Intelligent Sensor Network Node Layout." Journal of Sensors 2022 (February 2, 2022): 1–12. http://dx.doi.org/10.1155/2022/8367387.

Abstract:
In this paper, multimedia intelligent sensing technology is applied to the virtual reconstruction of images to construct or restore images to the communication media for visual communication. This paper proposes image virtual reconstruction theory based on visual communication research, treats image virtual reconstruction content as open data links and customized domain ontology, establishes an interdisciplinary interactive research framework through the technical means of visual communication, solves the problem of data heterogeneity brought by image virtual reconstruction, and finally establishes a three-dimensional visualization research method and principle of visual communication. The research firstly visual communication cuts into the existing conservation principles and proposes the necessity of image virtual reconstruction from the perspective of visual communication; secondly, the thinking mode of digital technology is different from human thinking mode, and the process of calculation ignores the emotional and spiritual values, but the realization of value rationality must be premised on instrumental rationality. This requires a content judgment and self-examination of the technical dimensional model of image virtual reconstruction on top of comprehensive literature and empirical evidence. In response to the research difficulties such as the constructivity of visual communication, the solution of image virtual reconstruction of visual communication is proposed based on the data collection method and literature characteristics. The process of introducing the tools of computer science into humanity research needs to be placed in a continuous critical theory system due to the uncontrollable and subjective nature of visual content, and finally, based on the construction of information models for image virtual reconstruction, the ontology and semantics of information modeling are thoroughly investigated, and the problems related to them, such as interpretation, wholeness, and interactivity, are analyzed and solved one by one. The transparency of image virtual reconstruction is enhanced through the introduction of interactive metadata, and this theoretical system of virtual restoration is put into practice in the Dunhuang digital display design project.
7

Yin, Jing, and Jong Hoon Yang. "Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect." Complexity 2021 (June 11, 2021): 1–12. http://dx.doi.org/10.1155/2021/5616826.

Abstract:
In order to solve the problems of poor user experience and low human-computer interaction efficiency, this paper designs a 3D image virtual reconstruction system based on visual communication effects. First, the functional framework diagram and hardware structure diagram of the image 3D reconstruction system are given. Then, combined with the basic theory of visual communication design, the characteristics of different elements in the three-dimensional image system and reasonable visual communication forms are analyzed, and design principles are proposed to improve user experience and communication efficiency. After the input image is preprocessed by median filtering, a three-dimensional reconstruction algorithm based on the image sequence is used to perform a three-dimensional reconstruction of the preprocessed image. The performance of the designed system was tested in a comparison form. We optimize the original hardware structure, expand the clock module, and use the chip to improve the data processing efficiency; in the two-dimensional image; we read the main information, through data conversion, display it in three-dimensional form, select the feature area, extract the image feature, calculate the key physical coordinate points, complete the main code compilation, use visual communication technology to feed back the display visual elements to the 3D image, and complete the design of the 3D image virtual reconstruction system. The test results showed that the application of visual communication technology to the virtual reconstruction of 3D images can effectively remove noise and make the edge area of the image clear, which can meet the needs of users compared with the reconstruction results of the original system. Visual C++ and 3DMAX are used as the system design platform, and three-dimensional image visualization and roaming are realized through OpenGL. Experimental results show that the designed system has better reconstruction accuracy and user satisfaction.
8

Njølstad, Tormund, Anselm Schulz, Johannes C. Godt, Helga M. Brøgger, Cathrine K. Johansen, Hilde K. Andersen, and Anne Catrine T. Martinsen. "Improved image quality in abdominal computed tomography reconstructed with a novel Deep Learning Image Reconstruction technique – initial clinical experience." Acta Radiologica Open 10, no. 4 (April 2021): 205846012110083. http://dx.doi.org/10.1177/20584601211008391.

Abstract:
Background A novel Deep Learning Image Reconstruction (DLIR) technique for computed tomography has recently received clinical approval. Purpose To assess image quality in abdominal computed tomography reconstructed with DLIR, and compare with standardly applied iterative reconstruction. Material and methods Ten abdominal computed tomography scans were reconstructed with iterative reconstruction and DLIR of medium and high strength, with 0.625 mm and 2.5 mm slice thickness. Image quality was assessed using eight visual grading criteria in a side-by-side comparative setting. All series were presented twice to evaluate intraobserver agreement. Reader scores were compared using univariate logistic regression. Image noise and contrast-to-noise ratio were calculated for quantitative analyses. Results For 2.5 mm slice thickness, DLIR images were more frequently perceived as equal or better than iterative reconstruction across all visual grading criteria (for both DLIR of medium and high strength, p < 0.001). Correspondingly, DLIR images were more frequently perceived as better (as opposed to equal or in favor of iterative reconstruction) for visual reproduction of liver parenchyma, intrahepatic vascular structures as well as overall impression of image noise and texture (p < 0.001). This improved image quality was also observed for 0.625 mm slice images reconstructed with DLIR of high strength when directly comparing to traditional iterative reconstruction in 2.5 mm slices. Image noise was significantly lower and contrast-to-noise ratio measurements significantly higher for images reconstructed with DLIR compared to iterative reconstruction (p < 0.01). Conclusions Abdominal computed tomography images reconstructed using a DLIR technique shows improved image quality when compared to standardly applied iterative reconstruction across a variety of clinical image quality criteria.
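
The quantitative measures used here — image noise as the standard deviation within a homogeneous region of interest, and contrast-to-noise ratio between two regions — can be computed as in the following sketch. The ROI positions, sizes, and the specific CNR definition are assumptions for illustration, not the study's protocol.

```python
import numpy as np

def roi_stats(image, y, x, size=20):
    roi = image[y:y + size, x:x + size]
    return roi.mean(), roi.std()

def contrast_to_noise_ratio(image, roi_a, roi_b):
    """CNR between two regions: |mean_a - mean_b| / noise of region b
    (one common definition; published studies may use a variant)."""
    mean_a, _ = roi_stats(image, *roi_a)
    mean_b, noise_b = roi_stats(image, *roi_b)
    return abs(mean_a - mean_b) / noise_b

ct_slice = np.random.normal(40, 10, (512, 512))   # stand-in CT slice (HU)
print(contrast_to_noise_ratio(ct_slice, (100, 100), (300, 300)))
```
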
9

Xu, Li, Ling Bai, and Lei Li. "The Effect of 3D Image Virtual Reconstruction Based on Visual Communication." Wireless Communications and Mobile Computing 2022 (January 5, 2022): 1–8. http://dx.doi.org/10.1155/2022/6404493.

Abstract:
Considering the problems of poor effect, long reconstruction time, large mean square error (MSE), low signal-to-noise ratio (SNR), and structural similarity index (SSIM) of traditional methods in three-dimensional (3D) image virtual reconstruction, the effect of 3D image virtual reconstruction based on visual communication is proposed. Using the distribution set of 3D image visual communication feature points, the feature point components of 3D image virtual reconstruction are obtained. By iterating the 3D image visual communication information, the features of 3D image virtual reconstruction in visual communication are decomposed, and the 3D image visual communication model is constructed. Based on the calculation of the difference of 3D image texture feature points, the spatial position relationship of 3D image feature points after virtual reconstruction is calculated to complete the texture mapping of 3D image. The deep texture feature points of 3D image are extracted. According to the description coefficient of 3D image virtual reconstruction in visual communication, the virtual reconstruction results of 3D image are constrained. The virtual reconstruction algorithm of 3D image is designed to realize the virtual reconstruction of 3D image. The results show that when the number of samples is 200, the virtual reconstruction time of this paper method is 2.1 s, and the system running time is 5 s; the SNR of the virtual reconstruction is 35.5 db. The MSE of 3D image virtual reconstruction is 3%, and the SSIM of virtual reconstruction is 1.38%, which shows that this paper method can effectively improve the ability of 3D image virtual reconstruction.
10

Li, Yuting. "Design of 3D Image Visual Communication System for Automatic Reconstruction of Digital Images." Advances in Multimedia 2022 (July 30, 2022): 1–10. http://dx.doi.org/10.1155/2022/3369386.

Abstract:
In order to improve the visual communication effect of the 3D image visual communication system, this article combines the automatic reconstruction technology of digital images to design the 3D image visual communication system. Moreover, this article studies the shock stability of the shock capture scheme by combining entropy generation analysis, linear disturbance analysis, and numerical experiments. In addition to this, this article presents a general method that can be used to suppress numerical shock instabilities in various Godunov-type schemes. It can be seen from the experimental results that the proposed 3D image visual communication system for the automatic reconstruction of digital images has a good visual communication effect.
11

Xiao, Di, Yue Li, and Min Li. "Invertible Privacy-Preserving Adversarial Reconstruction for Image Compressed Sensing." Sensors 23, no. 7 (March 29, 2023): 3575. http://dx.doi.org/10.3390/s23073575.

Abstract:
Since the advent of compressed sensing (CS), many reconstruction algorithms have been proposed, most of which are devoted to reconstructing images with better visual quality. However, higher-quality images tend to reveal more sensitive information in machine recognition tasks. In this paper, we propose a novel invertible privacy-preserving adversarial reconstruction method for image CS. While optimizing the quality, the reconstructed images are made to be adversarial samples at the moment of generation. For semi-authorized users, they can only obtain the adversarial reconstructed images, which provide little information for machine recognition or training deep models. For authorized users, they can reverse adversarial reconstructed images to clean samples with an additional restoration network. Experimental results show that while keeping good visual quality for both types of reconstructed images, the proposed scheme can provide semi-authorized users with adversarial reconstructed images with a very low recognizable rate, and allow authorized users to further recover sanitized reconstructed images with recognition performance approximating that of the traditional CS.
12

Wang, Jingjing, Fangyan Dong, Takashi Takegami, Eiroku Go, and Kaoru Hirota. "A 3D Pseudo-Reconstruction from Single Image Based on Vanishing Point." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 4 (July 20, 2009): 393–99. http://dx.doi.org/10.20965/jaciii.2009.p0393.

Abstract:
A 3D (three-dimensional) pseudo-reconstruction method from a single image is presented as a novel approach reconstructing a 3D model with no prior internal knowledge of outdoors image. In the proposed method, an image is represented as a collection of sky layer, ground layer, and object layer. A visual radical coordinate system with vanishing point is established to accommodate the extracted 3D data from images. Learning method is done via the layers database. The experiment results show that the visually acceptable 3D model can be extracted less than one minute. That means a higher resolution in much shorter time, compared to conventional methods. This method can be applied to computer games, industrial measurement, archeology, architecture and visual realities.
13

Wang, Rongfang, Yali Qin, Zhenbiao Wang, and Huan Zheng. "Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization." Electronics 11, no. 2 (January 7, 2022): 182. http://dx.doi.org/10.3390/electronics11020182.

Abstract:
Achieving high-quality reconstructions of images is the focus of research in image compressed sensing. Group sparse representation improves the quality of reconstructed images by exploiting the non-local similarity of images; however, block-matching and dictionary learning in the image group construction process leads to a long reconstruction time and artifacts in the reconstructed images. To solve the above problems, a joint regularized image reconstruction model based on group sparse representation (GSR-JR) is proposed. A group sparse coefficients regularization term ensures the sparsity of the group coefficients and reduces the complexity of the model. The group sparse residual regularization term introduces the prior information of the image to improve the quality of the reconstructed image. The alternating direction multiplier method and iterative thresholding algorithm are applied to solve the optimization problem. Simulation experiments confirm that the optimized GSR-JR model is superior to other advanced image reconstruction models in reconstructed image quality and visual effects. When the sensing rate is 0.1, compared to the group sparse residual constraint with a nonlocal prior (GSRC-NLR) model, the gain of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) is up to 4.86 dB and 0.1189, respectively.
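
The iterative thresholding algorithm mentioned as a solver is, in its simplest form, ISTA for the l1-regularized least-squares problem min 0.5*||Ax - y||^2 + lam*||x||_1; the sketch below shows that basic building block on a toy compressed-sensing problem and does not reproduce the group-sparse dictionary structure of GSR-JR.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy problem: recover an 8-sparse vector from 60 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true
print(np.linalg.norm(ista(A, y) - x_true))
```
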
14

Mao, Wenli, and Bingyu Zhang. "Research on the Application of Visual Sensing Technology in Art Education." Journal of Sensors 2021 (November 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/2406351.

Abstract:
It is essential to have a new understanding of the development of visual sensing technology in digital image art at this stage, in order to make traditional art education have new professional ability teaching. Based on the current research results in related fields, a three-dimensional (3D) image visual communication system based on digital image automatic reconstruction is proposed with two schemes as the premise. In scheme 1, the hardware part is divided into two modules. The hardware used by the analysis of the 3D image layer module is the HUJ-23 3D image processor. The acquisition of a 3D image layer module uses the hardware of a realistic infrared camera. The software of the system consists of two parts: a 3D image computer expression module and a 3D image reconstruction module. A simulation platform is established. The test data of 3D image reconstruction accuracy and visual communication integrity of the designed system show that both of them show a good trend. In scheme 2, regarding digital image processing, the 3D image visual perception reconstruction is affected by the modeling conditions, and some images are incomplete and damaged. The depth camera and image processor that can be used in the visual communication technology are selected, and their internal parameters are modified to borrow them in the original system hardware. Gaussian filtering model combined with scale-invariant feature transform (SIFT) feature point extraction algorithm is adopted to select image feature points. Previous system reconstruction technology is used to upgrade the 3D digital image, and the feature point detection equation is adopted to detect the accuracy of the upgraded results. Based on the above hardware and software research, the 3D digital image system based on visual communication is successfully upgraded. The test platform is established, and the test samples are selected. Unlike the previous systems, the 3D image reconstruction accuracy of the designed visual communication system can be as high as 98%; the upgraded system has better image integrity and stronger performance than the previous systems and achieves higher visual sensing technology. In art education, it can provide a new content perspective for digital image art teaching.
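
The Gaussian filtering plus SIFT feature-point extraction step mentioned in scheme 2 can be sketched with OpenCV (cv2.SIFT_create requires OpenCV ≥ 4.4 or the contrib build on older versions); the synthetic input frame below is a placeholder for a real image.

```python
import cv2
import numpy as np

# Synthetic grayscale frame; replace with cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).
img = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Gaussian smoothing followed by SIFT keypoint detection, a simplified
# stand-in for the feature-point selection step described above.
smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(smoothed, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```
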
15

Cagáň, Jan, Jaroslav Pelant, Martin Kyncl, Martin Kadlec, and Lenka Michalcová. "Damage detection in carbon fiber–reinforced polymer composite via electrical resistance tomography with Gaussian anisotropic regularization." Structural Health Monitoring 18, no. 5-6 (December 29, 2018): 1698–710. http://dx.doi.org/10.1177/1475921718820013.

Abstract:
Electrical resistance tomography is a method for sensing the spatial distribution of electrical conductivity. Therefore, this type of tomography is suitable for sensing damages, which affect electrical conductivity. The utilization of resistance tomography for the structural health monitoring of carbon fiber–reinforced polymer composites is questionable owing to its low spatial resolution and the strong anisotropy of carbon fiber–reinforced polymer composites. This article deals with the employment of resistance tomography with regularization based on a Gaussian anisotropic smoothing filter for the detection of cuts. The advantages of the filter are shown through the image reconstruction of rectangular composite specimens with three different laminate stacking sequences. The cuts are implemented by a milled groove. Visual comparison of the images shows a substantial improvement in the shape reconstruction ability. In addition to visual comparison, the image reconstructions are assessed in terms of the reconstruction error and cross-correlation.
16

Liu, Ya-Li, Tao Liu, Bin Yan, Jeng-Shyang Pan, and Hong-Mei Yang. "Visual Cryptography Using Computation-Free Bit-Plane Reconstruction." Security and Communication Networks 2022 (June 26, 2022): 1–14. http://dx.doi.org/10.1155/2022/4617885.

Abstract:
Visual cryptography (VC) using bit-plane decomposition improves the quality of the reconstructed image. The disadvantage of this scheme is that the decoder needs computation in order to reconstruct the secret image from its bit-planes. To solve this problem, we propose a no-computation bit-plane decomposition visual cryptography (NC-BPDVC). In NC-BPDVC, we convert the grayscale secret image into a multitone image by multilevel halftoning. Then, by exploring the difference between a digital pixel and a printed dot, we design different dot patterns to render a digital pixel. By doing so, we abandon the usual assumption that DPI (dots per inch) equals PPI (pixels per inch) during printing. By adopting the more realistic assumption that DPI can be larger than PPI as is supported by most printers, we use different patterns to render different tone levels. These patterns are carefully designed so that no computation is needed when one needs to reconstruct the multitone image from its bit-planes. Our algorithm is tested on a batch of twenty standard grayscale images. The experimental results confirm the correctness and advantages of the proposed scheme. Compared with the ordinary bit-plane decomposition VC, NC-BPDVC does not need computation. The security of the proposed algorithm is also analyzed and verified.
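
Bit-plane decomposition itself, and the computation-free idea of reassembling an image by simply stacking weighted planes, can be shown in a few lines of numpy; the multilevel halftoning and printed dot-pattern design that make NC-BPDVC work on paper are not reproduced here.

```python
import numpy as np

def to_bit_planes(img):
    """Split an 8-bit grayscale image into its 8 bit-planes (MSB first)."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(7, -1, -1)]

def from_bit_planes(planes):
    """Reassemble the image by weighting each plane with its bit value."""
    img = np.zeros(planes[0].shape, dtype=np.uint16)
    for b, plane in zip(range(7, -1, -1), planes):
        img += plane.astype(np.uint16) << b
    return img.astype(np.uint8)

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert np.array_equal(from_bit_planes(to_bit_planes(gray)), gray)
```
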
17

Cheng, Rongjie. "Computer Graphic Image Design and Visual Communication Design in the Internet of Things Scenario." Security and Communication Networks 2022 (May 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/8460433.

Abstract:
For the problems of low-illumination reconstructed images with low resolution and long reconstruction time, an image reconstruction system based on the wavelet domain chunking compressed perception algorithm is proposed. A low-illumination image sampling model is established, and the wavelet domain chunking compression perception and information fusion processing are performed using the depth-of-field adaptive adjustment method of the image. The multiscale Retinex algorithm is used for wavelet domain chunking compressed perception and information extraction, and the information entropy feature quantity of the image is extracted. The IoT-based artificial intelligence image detection system has been formed, which effectively enhances the accuracy and timeliness of the image detection system and makes up for the deficiencies of the traditional image detection system. Simulation results show that the method is used for low-illumination image reconstruction with higher resolution, better edge perception, and shorter reconstruction time, which is more efficient for practical application.
18

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang, Shaolei Zhang, and Min Yue. "Generative Adversarial Network-Based Edge-Preserving Superresolution Reconstruction of Infrared Images." International Journal of Digital Multimedia Broadcasting 2021 (July 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/5519508.

Abstract:
The convolutional neural network has achieved good results in the superresolution reconstruction of single-frame images. However, due to the shortcomings of infrared images such as lack of details, poor contrast, and blurred edges, superresolution reconstruction of infrared images that preserves the edge structure and better visual quality is still challenging. Aiming at the problems of low resolution and unclear edges of infrared images, this work proposes a two-stage generative adversarial network model to reconstruct realistic superresolution images from four times downsampled infrared images. In the first stage of the generative adversarial network, it focuses on recovering the overall contour information of the image to obtain clear image edges; the second stage of the generative adversarial network focuses on recovering the detailed feature information of the image and has a stronger ability to express details. The infrared image superresolution reconstruction method proposed in this work has highly realistic visual effects and good objective quality evaluation results.
19

Gillmann, Christina, Pablo Arbelaez, Jose Hernandez, Hans Hagen, and Thomas Wischgoll. "An Uncertainty-Aware Visual System for Image Pre-Processing." Journal of Imaging 4, no. 9 (September 10, 2018): 109. http://dx.doi.org/10.3390/jimaging4090109.

Abstract:
Due to the image reconstruction process of all image capturing methods, image data is inherently affected by uncertainty. This is caused by the underlying image reconstruction model, that is not capable to map all physical properties in its entirety. In order to be aware of these effects, image uncertainty needs to be quantified and propagated along the entire image processing pipeline. In classical image processing methodologies, pre-processing algorithms do not consider this information. Therefore, this paper presents an uncertainty-aware image pre-processing paradigm, that is aware of the input image’s uncertainty and propagates it through the entire pipeline. To accomplish this, we utilize rules for transformation and propagation of uncertainty to incorporate this additional information with a variety of operations. Resulting from this, we are able to adapt prominent image pre-processing algorithms such that they consider the input image’s uncertainty. Furthermore, we allow the composition of arbitrary image pre-processing pipelines and visually encode the accumulated uncertainty throughout this pipeline. The effectiveness of the demonstrated approach is shown by creating image pre-processing pipelines for a variety of real world datasets.
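
One standard rule for propagating per-pixel uncertainty through a linear pre-processing operation is variance propagation: each output variance is the sum of squared filter weights times the input variances, assuming independent noise. The sketch below illustrates that generic rule with scipy and is not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import convolve

def filter_with_uncertainty(image, variance, kernel):
    """Apply a linear filter and propagate per-pixel variance through it
    using var_out = sum_k w_k^2 * var_in_k (independent-noise assumption)."""
    out = convolve(image, kernel, mode="reflect")
    out_var = convolve(variance, kernel ** 2, mode="reflect")
    return out, out_var

img = np.random.rand(128, 128)
var = np.full_like(img, 0.01)            # assumed acquisition uncertainty
box = np.ones((3, 3)) / 9.0              # simple smoothing kernel
smoothed, smoothed_var = filter_with_uncertainty(img, var, box)
print(smoothed_var.mean())               # lower than 0.01, as expected
```
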
20

Yun, Bai. "Design and Reconstruction of Visual Art Based on Virtual Reality." Security and Communication Networks 2021 (August 19, 2021): 1–9. http://dx.doi.org/10.1155/2021/1014017.

Abstract:
Because traditional methods generally lack the image preprocessing link, the effect of visual image detail processing is not good. In order to enhance the image visual effect, a visual art design method based on virtual reality is proposed. Wavelet transform method is used to denoise the visual image and the noise signal in the image is removed; a binary model of fuzzy space vision fusion is established, the space of the visual image is planned, and the spatial distribution information of the visual image is obtained. According to the principle of light and shadow phenomenon in visual image rendering, the Extend Shadow map algorithm is used to render the visual image. Virtual reality technology was used to reconstruct the preprocessed visual image, and the ant colony algorithm was used to optimize the visual image to realize the visual image design. The results show that the peak signal-to-noise ratio of the visual image processed by the proposed method is high, and the image detail processing effect is better.
21

Shao, Wenze, Haisong Deng, and Zhuihui Wei. "Nonconvex Compressed Sampling of Natural Images and Applications to Compressed MR Imaging." ISRN Computational Mathematics 2012 (November 16, 2012): 1–12. http://dx.doi.org/10.5402/2012/982792.

Abstract:
There have been proposed several compressed imaging reconstruction algorithms for natural and MR images. In essence, however, most of them aim at the good reconstruction of edges in the images. In this paper, a nonconvex compressed sampling approach is proposed for structure-preserving image reconstruction, through imposing sparseness regularization on strong edges and also oscillating textures in images. The proposed approach can yield high-quality reconstruction as images are sampled at sampling ratios far below the Nyquist rate, due to the exploitation of a kind of approximate l0 seminorms. Numerous experiments are performed on the natural images and MR images. Compared with several existing algorithms, the proposed approach is more efficient and robust, not only yielding higher signal to noise ratios but also reconstructing images of better visual effects.
22

Jiang, Lingyun, Kai Qiao, Linyuan Wang, Chi Zhang, Jian Chen, Lei Zeng, Haibing Bu, and Bin Yan. "Siamese Reconstruction Network: Accurate Image Reconstruction from Human Brain Activity by Learning to Compare." Applied Sciences 9, no. 22 (November 7, 2019): 4749. http://dx.doi.org/10.3390/app9224749.

Abstract:
Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for the reconstruction method with deep learning requiring huge amounts of labelled samples. When compared with the deep learning method, humans can recognize a new image because our human visual system is naturally capable of extracting features from any object and comparing them. Inspired by this visual mechanism, we introduced the mechanism of comparison into deep learning method to realize better visual reconstruction by making full use of each sample and the relationship of the sample pair by learning to compare. In this way, we proposed a Siamese reconstruction network (SRN) method. By using the SRN, we improved upon the satisfying results on two fMRI recording datasets, providing 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this manner can increase the training data about from n samples to 2n sample pairs, which takes full advantage of the limited quantity of training samples. The SRN learns to converge sample pairs of the same class or disperse sample pairs of different class in feature space.
23

Li, Chun-mei, Ka-zhong Deng, Jiu-yun Sun, and Hui Wang. "Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction." Journal of Sensors 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/1250538.

Abstract:
The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined, inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based, superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image and uses the K-SVD algorithm, which is based on the sparse characteristics of the digital image. Then, the sparse representation coefficient of the low-resolution image is obtained by solving the l0-norm minimization problem, and the sparse coefficient and high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that the proposed method effectively increases image resolution, increases image information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.
24

Zeng, Yujie, Jin Lei, Tianming Feng, Xinyan Qin, Bo Li, Yanqi Wang, Dexin Wang, and Jie Song. "Neural Radiance Fields-Based 3D Reconstruction of Power Transmission Lines Using Progressive Motion Sequence Images." Sensors 23, no. 23 (November 30, 2023): 9537. http://dx.doi.org/10.3390/s23239537.

Abstract:
To address the fuzzy reconstruction effect on distant objects in unbounded scenes and the difficulty in feature matching caused by the thin structure of power lines in images, this paper proposes a novel image-based method for the reconstruction of power transmission lines (PTLs). The dataset used in this paper comprises PTL progressive motion sequence datasets, constructed by a visual acquisition system carried by a developed Flying–walking Power Line Inspection Robot (FPLIR). This system captures close-distance and continuous images of power lines. The study introduces PL-NeRF, that is, an enhanced method based on the Neural Radiance Fields (NeRF) method for reconstructing PTLs. The highlights of PL-NeRF include (1) compressing the unbounded scene of PTLs by exploiting the spatial compression of normal L∞; (2) encoding the direction and position of the sample points through Integrated Position Encoding (IPE) and Hash Encoding (HE), respectively. Compared to existing methods, the proposed method demonstrates good performance in 3D reconstruction, with fidelity indicators of PSNR = 29, SSIM = 0.871, and LPIPS = 0.087. Experimental results highlight that the combination of PL-NeRF with progressive motion sequence images ensures the integrity and continuity of PTLs, improving the efficiency and accuracy of image-based reconstructions. In the future, this method could be widely applied for efficient and accurate 3D reconstruction and inspection of PTLs, providing a strong foundation for automated monitoring of transmission corridors and digital power engineering.
25

Chen, Huihong, and Shiming Li. "Simulation of 3D Image Reconstruction in Rigid body Motion." MATEC Web of Conferences 232 (2018): 02002. http://dx.doi.org/10.1051/matecconf/201823202002.

Abstract:
3D image reconstruction under rigid body motion is affected by rigid body motion and visual displacement factors, which leads to low quality of 3D image reconstruction and more noise, in order to improve the quality of 3D image reconstruction of rigid body motion. A 3D image reconstruction technique is proposed based on corner detection and edge contour feature extraction in this paper. Region scanning and point scanning are combined to scan rigid body moving object image. The wavelet denoising method is used to reduce the noise of the 3D image. The edge contour feature of the image is extracted. The sparse edge pixel fusion method is used to decompose the feature of the 3D image under the rigid body motion. The irregular triangulation method is used to extract and reconstruct the information features of the rigid body 3D images. The reconstructed feature points are accurately calibrated with the corner detection method to realize the effective reconstruction of the 3D images. The simulation results show that the method has good quality, high SNR of output image and high registration rate of feature points of image reconstruction, and proposed method has good performance of 3D image reconstruction.
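
The wavelet denoising step mentioned above is typically implemented as decompose, threshold the detail coefficients, reconstruct; a minimal PyWavelets sketch follows. The fixed threshold is a simplifying assumption (practical schemes estimate it from the noise level), and the input is a synthetic stand-in image.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2, thresh=0.05):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in band)
               for band in details]
    return pywt.waverec2([approx] + details, wavelet)

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
clean = wavelet_denoise(noisy)
print(clean.shape)
```
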
26

Vacher, Jonathan, Claire Launay, Pascal Mamassian, and Ruben Coen-Cagli. "Measuring uncertainty in human visual segmentation." PLOS Computational Biology 19, no. 9 (September 25, 2023): e1011483. http://dx.doi.org/10.1371/journal.pcbi.1011483.

Abstract:
Segmenting visual stimuli into distinct groups of features and visual objects is central to visual function. Classical psychophysical methods have helped uncover many rules of human perceptual segmentation, and recent progress in machine learning has produced successful algorithms. Yet, the computational logic of human segmentation remains unclear, partially because we lack well-controlled paradigms to measure perceptual segmentation maps and compare models quantitatively. Here we propose a new, integrated approach: given an image, we measure multiple pixel-based same–different judgments and perform model–based reconstruction of the underlying segmentation map. The reconstruction is robust to several experimental manipulations and captures the variability of individual participants. We demonstrate the validity of the approach on human segmentation of natural images and composite textures. We show that image uncertainty affects measured human variability, and it influences how participants weigh different visual features. Because any putative segmentation algorithm can be inserted to perform the reconstruction, our paradigm affords quantitative tests of theories of perception as well as new benchmarks for segmentation algorithms.
27

Basha, Shaik Mahaboob, and B. C. Jinaga. "A Novel Optimized Golomb-Rice Technique for the Reconstruction in Lossless Compression of Digital Images." ISRN Signal Processing 2013 (August 7, 2013): 1–5. http://dx.doi.org/10.1155/2013/539759.

Abstract:
The research trends that are available in the area of image compression for various imaging applications are not adequate for some of the applications. These applications require good visual quality in processing. In general the tradeoff between compression efficiency and picture quality is the most important parameter to validate the work. The existing algorithms for still image compression were developed by considering the compression efficiency parameter by giving least importance to the visual quality in processing. Hence, we proposed a novel lossless image compression algorithm based on Golomb-Rice coding which was efficiently suited for various types of digital images. Thus, in this work, we specifically address the following problem that is to maintain the compression ratio for better visual quality in the reconstruction and considerable gain in the values of peak signal-to-noise ratios (PSNR). We considered medical images, satellite extracted images, and natural images for the inspection and proposed a novel technique to increase the visual quality of the reconstructed image.
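
Golomb-Rice coding, the basis of the proposed technique, writes each non-negative value as a unary quotient plus a k-bit remainder. The sketch below shows that core encode/decode round trip; mapping signed prediction residuals to non-negative integers and choosing the Rice parameter k are simplifications left out here.

```python
def golomb_rice_encode(values, k):
    """Encode non-negative integers with Rice parameter k:
    quotient q = v >> k in unary (q ones + a zero), remainder in k bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                               # unary quotient
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))  # k-bit remainder
    return bits

def golomb_rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                                   # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

residuals = [0, 3, 7, 12, 2, 5]
encoded = golomb_rice_encode(residuals, k=2)
assert golomb_rice_decode(encoded, len(residuals), k=2) == residuals
```
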
28

Li, Jiefei, Yuqi Zhang, Le He, and Huancong Zuo. "Application of Multimodal Image Fusion Technology in Brain Tumor Surgical Procedure." Translational Neuroscience and Clinics 2, no. 4 (December 2016): 215–26. http://dx.doi.org/10.18679/cn11-6030_r.2016.035.

Abstract:
Objective To construct brain tumors and their surrounding anatomical structures through the method of registration, fusion and, three-dimensional (3D) reconstruction based on multimodal image data and to provide the visual information of tumor, skull, brain, and vessels for preoperative evaluation, surgical planning, and function protection. Methods The image data of computed tomography (CT) and magnetic resonance imaging (MRI) were collected from fifteen patients with confirmed brain tumors. We reconstructed brain tumors and their surrounding anatomical structures using NeuroTech software. Results The whole 3D structures including tumor, brain surface, skull, and vessels were successfully reconstructed based on the CT and MRI images. Reconstruction image clearly shows the tumor size, location, shape, and the anatomical relationship of tumor and surrounding structures. We can hide any reconstructed images such as skull, brain tissue, blood vessels, or tumors. We also can adjust the color of reconstructed images and rotate images to observe the structures from any direction. Reconstruction of brain and skull can be semi transparent to display the deep structure; reconstruction of the structures can be axial, coronal, and sagittal cutting to show relationship among tumor and surrounding structures. The reconstructed 3D structures clearly depicted the tumor features, such as size, location, and shape, and provided visual information of the spatial relationship among its surrounding structures. Conclusions The method of registration, fusion, and 3D reconstruction based on multimodal images to provide the visual information is feasible and practical. The reconstructed 3D structures are useful for preoperative assessment, incision design, the choice of surgical approach, tumor resection, and functional protection.
29

Condorelli, Francesca, and Maurizio Perticarini. "Comparative Evaluation of NeRF Algorithms on Single Image Dataset for 3D Reconstruction." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2-2024 (June 11, 2024): 73–79. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-2024-73-2024.

Abstract:
The reconstruction of three-dimensional scenes from a single image represents a significant challenge in computer vision, particularly in the context of cultural heritage digitisation, where datasets may be limited or of poor quality. This paper addresses this challenge by conducting a study of the latest and most advanced algorithms for single-image 3D reconstruction, with a focus on applications in cultural heritage conservation. Exploiting different single-image datasets, the research evaluates the strengths and limitations of various artificial intelligence-based algorithms, in particular Neural Radiance Fields (NeRF), in reconstructing detailed 3D models from limited visual data. The study includes experiments on scenarios such as inaccessible or non-existent heritage sites, where traditional photogrammetric methods fail. The results demonstrate the effectiveness of NeRF-based approaches in producing accurate, high-resolution reconstructions suitable for visualisation and metric analysis. The results contribute to advancing the understanding of NeRF-based approaches in handling single-image inputs and offer insights for real-world applications such as object location and immersive content generation.
30

Yang, Qiang, and Huajun Wang. "Super-resolution reconstruction for a single image based on self-similarity and compressed sensing." Journal of Algorithms & Computational Technology 12, no. 3 (May 31, 2018): 234–44. http://dx.doi.org/10.1177/1748301818778244.

Abstract:
Super-resolution image reconstruction can achieve favorable feature extraction and image analysis. This study first investigated the image’s self-similarity and constructed high-resolution and low-resolution learning dictionaries; then, based on sparse representation and reconstruction algorithm in compressed sensing theory, super-resolution reconstruction (SRSR) of a single image was realized. The proposed algorithm adopted improved K-SVD algorithm for sample training and learning dictionary construction; additionally, the matching pursuit algorithm was improved for achieving single-image SRSR based on image’s self-similarity and compressed sensing. The experimental results reveal that the proposed reconstruction algorithm shows better visual effect and image quality than the degraded low-resolution image; moreover, compared with the reconstructed images using bilinear interpolation and sparse-representation-based algorithms, the reconstructed image using the proposed algorithm has a higher PSNR value and thus exhibits more favorable super-resolution image reconstruction performance.
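
PSNR, the figure of merit cited, is defined as 10·log10(MAX² / MSE); a minimal implementation for 8-bit images is shown below with synthetic stand-in data.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 2))
```
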
31

Boscain, Ugo, Roman Chertovskih, Jean-Paul Gauthier, Dario Prandi, and Alexey Remizov. "Cortical-inspired image reconstruction via sub-Riemannian geometry and hypoelliptic diffusion." ESAIM: Proceedings and Surveys 64 (2018): 37–53. http://dx.doi.org/10.1051/proc/201864037.

Abstract:
In this paper we review several algorithms for image inpainting based on the hypoelliptic diffusion naturally associated with a mathematical model of the primary visual cortex. In particular, we present one algorithm that does not exploit the information of where the image is corrupted, and others that do it. While the first algorithm is able to reconstruct only images that our visual system is still capable of recognizing, we show that those of the second type completely transcend such limitation providing reconstructions at the state-of-the-art in image inpainting. This can be interpreted as a validation of the fact that our visual cortex actually encodes the first type of algorithm.
32

Maitin, Ana M., Alberto Nogales, Emilio Delgado-Martos, Giovanni Intra Sidola, Carlos Pesqueira-Calvo, Gabriel Furnieles, and Álvaro J. García-Tejedor. "Evaluating Activation Functions in GAN Models for Virtual Inpainting: A Path to Architectural Heritage Restoration." Applied Sciences 14, no. 16 (August 6, 2024): 6854. http://dx.doi.org/10.3390/app14166854.

Abstract:
Computer vision has advanced much in recent years. Several tasks, such as image recognition, classification, or image restoration, are regularly solved with applications using artificial intelligence techniques. Image restoration comprises different use cases such as style transferring, improvement of quality resolution, or completing missing parts. The latter is also known as image inpainting, virtual image inpainting in this case, which consists of reconstructing missing regions or elements. This paper explores how to evaluate the performance of a deep learning method to do virtual image inpainting to reconstruct missing architectonical elements in images of ruined Greek temples to measure the performance of different activation functions. Unlike a previous study related to this work, a direct reconstruction process without segmented images was used. Then, two evaluation methods are presented: the objective one (mathematical metrics) and an expert (visual perception) evaluation to measure the performance of the different approaches. Results conclude that ReLU outperforms other activation functions, while Mish and Leaky ReLU perform poorly, and Swish’s professional evaluations highlight a gap between mathematical metrics and human visual perception.
33

Shan, Feng, and Youya Wang. "Animation Design Based on 3D Visual Communication Technology." Scientific Programming 2022 (January 5, 2022): 1–11. http://dx.doi.org/10.1155/2022/6461538.

Abstract:
The depth synthesis of image texture is neglected in the current image visual communication technology, which leads to the poor visual effect. Therefore, the design method of film and TV animation based on 3D visual communication technology is proposed. Collect film and television animation videos through 3D visual communication content production, server processing, and client processing. Through stitching, projection mapping, and animation video image frame texture synthesis, 3D vision conveys animation video image projection. In order to ensure the continuous variation of scaling factors between adjacent triangles of animation and video images, the scaling factor field is constructed. Deep learning is used to extract the deep features and to reconstruct the multiframe animated and animated video images based on visual communication. Based on this, the frame feature of video image under gray projection is identified and extracted, and the animation design based on 3D visual communication technology is completed. Experimental results show that the proposed method can enhance the visual transmission of animation video images significantly and can achieve high-precision reconstruction of video images in a short time.
34

Khaleghi, Nastaran, Tohid Yousefi Rezaii, Soosan Beheshti, Saeed Meshgini, Sobhan Sheykhivand, and Sebelan Danishvar. "Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network." Electronics 11, no. 21 (November 7, 2022): 3637. http://dx.doi.org/10.3390/electronics11213637.

Abstract:
Reaching out the function of the brain in perceiving input data from the outside world is one of the great targets of neuroscience. Neural decoding helps us to model the connection between brain activities and the visual stimulation. The reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is impressed by visual saliency, the important parts of an image stimuli. In this paper, a deep model is proposed to reconstruct the image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map the EEG signals to the visual saliency maps corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers. The input of the GDN part of the proposed network is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is imposed to the GAN part of the proposed network to reconstruct the image saliency. The proposed GDN-GAN is trained using the Google Colaboratory Pro platform. The saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are used as initial weights to reconstruct the grayscale image stimuli. The proposed network realizes the image reconstruction from EEG signals.
35

Lim, Seung-Chan, and Myungjin Cho. "Three-Dimensional Image Transmission of Integral Imaging through Wireless MIMO Channel." Sensors 23, no. 13 (July 4, 2023): 6154. http://dx.doi.org/10.3390/s23136154.

Full text of the source
Abstract:
For the reconstruction of high-resolution 3D digital content in integral imaging, an efficient wireless 3D image transmission system is required to convey a large number of elemental images without a communication bottleneck. To support a high transmission rate, we propose a novel wireless three-dimensional (3D) image transmission and reception strategy based on the multiple-input multiple-output (MIMO) technique. By exploiting the spatial multiplexing capability, multiple elemental images are transmitted simultaneously through the wireless MIMO channel and recovered with a linear receiver such as a matched filter, zero-forcing, or minimum mean squared error combiner. Using the recovered elemental images, a 3D image can be reconstructed by volumetric computational reconstruction (VCR) with non-uniform shifting pixels. Although the received elemental images are corrupted by the wireless channel and inter-stream interference, the averaging effect of the VCR can improve the visual quality of the reconstructed 3D images. The numerical results validate that the proposed system achieves excellent 3D reconstruction performance in terms of visual quality and peak sidelobe ratio, even when a large number of elemental images are transmitted simultaneously over the wireless MIMO channel.
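
As a rough illustration of the linear recovery step, the sketch below simulates zero-forcing detection of pixel streams sent over a narrowband MIMO channel; the channel model, antenna counts, and noise level are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: 4 transmit antennas each carry one elemental-image pixel stream.
n_tx, n_rx, n_pix = 4, 6, 1000
X = rng.integers(0, 256, size=(n_tx, n_pix)).astype(float) / 255.0  # pixel streams in [0, 1]

H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((n_rx, n_pix)) + 1j * rng.standard_normal((n_rx, n_pix)))
Y = H @ X + noise                                   # received mixture of all streams

W_zf = np.linalg.pinv(H)                            # zero-forcing combiner (H^H H)^-1 H^H
X_hat = np.real(W_zf @ Y)                           # separated pixel streams

mse = np.mean((X_hat - X) ** 2)
print(f"per-pixel MSE after zero-forcing recovery: {mse:.5f}")
```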
36

Wang, Yuhang. "Image 3D Reconstruction and Interaction Based on Digital Twin and Visual Communication Effect." Mobile Information Systems 2022 (July 23, 2022): 1–12. http://dx.doi.org/10.1155/2022/8510369.

Full text of the source
Abstract:
With the continuous development of multimedia and electronic information technology, the application of 3D images has become increasingly widespread, and 3D reconstruction and interaction technology have received growing attention from researchers. Digital twin (DT) is a technology that makes full use of digital twin models and real-time monitoring data to map the real world into the digital world in real time. Visual communication (CV) design is design that is expressed and communicated to the audience through a visual medium. 3D reconstruction refers to the mathematical process and computer techniques that recover the 3D information of an object from its 2D projections. This paper studies an image 3D reconstruction and interaction method based on DT and CV and analyzes its practical feasibility and effect. It proposes a model for image 3D reconstruction and interaction that combines DT and CV effects and conducts a system simulation test of the model. The simulation is carried out on the 3D reconstruction of the craniomaxillofacial region in medicine, and the reconstruction process runs smoothly. The final simulation test shows that the maximum CPU usage of the system during reconstruction is about 50% and remains relatively stable, the average CPU usage is about 30%, and the overall energy consumption of the system is low; the overall SNR of the image ranges from a lower limit of 57 to an upper limit of 62. The image quality during reconstruction is good and the overall system reliability is high, so the method is feasible.
37

Xiao, Yahui. "Research on Visual Image Texture Rendering for Artistic Aided Design." Scientific Programming 2021 (August 7, 2021): 1–8. http://dx.doi.org/10.1155/2021/1190912.

Full text of the source
Abstract:
The rendering effect of known visual image textures is poor, and the output images are not always clear. To solve this problem, this paper proposes visual image rendering based on a scene visual understanding algorithm. In this approach, color segmentation of the known visual scene is carried out according to a predefined threshold, and the segmented image is processed with morphological operations; a sketch of this step is given after this abstract. Extraction rules are then formulated to screen the candidate regions. The color image is fused and filtered in the neighborhood, the pixels of the image are extracted, and 2D texture recognition is realized by multilevel fusion and visual feature reconstruction. Compact sampling is used to extract more target features, feature points are matched, the coordinate systems of the known image information are integrated into a unified coordinate system, and design images are generated to complete the art-aided design. Simulation results show that the proposed method extracts the information of known images more accurately than the original method, which helps produce clearly visible output images and improves the overall design effect.
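
A minimal sketch of the threshold-plus-morphology segmentation step described above, using SciPy; the threshold value and structuring element are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import ndimage

def segment_scene(gray, threshold=0.5, opening_size=3):
    """Threshold segmentation followed by morphological cleanup.

    gray: 2-D array with values in [0, 1]; returns labeled candidate regions.
    """
    mask = gray > threshold                                # predefined-threshold segmentation
    mask = ndimage.binary_opening(mask, structure=np.ones((opening_size, opening_size)))
    mask = ndimage.binary_fill_holes(mask)                 # morphological post-processing
    labels, n_regions = ndimage.label(mask)
    return labels, n_regions

# Toy image: a bright square on a dark background plus noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 0.3
img[20:40, 20:40] += 0.6
labels, n = segment_scene(img)
print("candidate regions found:", n)
```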
38

Han, Xian-Hua, Yinqiang Zheng, and Yen-Wei Chen. "Hyperspectral Image Reconstruction Using Multi-scale Fusion Learning." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–21. http://dx.doi.org/10.1145/3477396.

Full text of the source
Abstract:
Hyperspectral imaging is a promising imaging modality that simultaneously captures several images of the same scene on narrow spectral bands, and it has made considerable progress in different fields, such as agriculture, astronomy, and surveillance. However, existing hyperspectral (HS) cameras sacrifice spatial resolution to provide the detailed spectral distribution of the imaged scene, which leads to low-resolution (LR) HS images compared with common red-green-blue (RGB) images. Generating a high-resolution HS (HR-HS) image by fusing an observed LR-HS image with the corresponding HR-RGB image has been actively studied. Existing methods for this fusion task generally investigate hand-crafted priors to model the inherent structure of the latent HR-HS image and employ optimization approaches to solve it. However, proper priors can differ from scene to scene, and figuring them out for a specific scene is difficult. This study investigates a deep convolutional neural network (DCNN)-based method for automatic prior learning and proposes a novel fusion DCNN model with multi-scale spatial and spectral learning for effectively merging an HR-RGB image and an LR-HS image. Specifically, we construct a U-shaped network architecture that gradually reduces the feature sizes of the HR-RGB image (encoder side) and increases the feature sizes of the LR-HS image (decoder side), and we fuse the HR spatial structure and the detailed spectral attributes at multiple scales to tackle the large spatial-resolution difference between the observed HR-RGB and LR-HS images. We then employ multi-level cost functions for the proposed multi-scale learning network to alleviate the gradient vanishing problem in the long propagation procedure. In addition, to further improve the reconstruction performance of the HR-HS image, we refine the predicted HR-HS image with an alternating back-projection method that minimizes the reconstruction errors with respect to the observed LR-HS and HR-RGB images. Experiments on three benchmark HS image datasets demonstrate the superiority of the proposed method in both quantitative values and visual quality.
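
A rough sketch of the back-projection refinement idea mentioned at the end: the estimate is nudged so that its spatial downsampling matches the LR-HS observation and its spectral projection matches the HR-RGB observation. The block-average downsampling operator, spectral response matrix, step size, and iteration count below are illustrative assumptions, not the degradation model used in the paper.

```python
import numpy as np

def downsample(Z, s):
    # Spatial block-average downsampling by factor s (assumed degradation model).
    H, W, B = Z.shape
    return Z.reshape(H // s, s, W // s, s, B).mean(axis=(1, 3))

def upsample(E, s):
    # Nearest-neighbour upsampling used to spread the residual back.
    return np.repeat(np.repeat(E, s, axis=0), s, axis=1)

def back_projection_refine(Z, Y_lr_hs, X_hr_rgb, R, s, n_iter=20, step=0.5):
    """Alternately reduce the LR-HS and HR-RGB reconstruction residuals of Z.

    Z: (H, W, B) current HR-HS estimate, Y_lr_hs: (H/s, W/s, B),
    X_hr_rgb: (H, W, 3), R: (B, 3) spectral response mapping HS bands to RGB.
    """
    for _ in range(n_iter):
        Z = Z + step * upsample(Y_lr_hs - downsample(Z, s), s)   # spatial consistency
        Z = Z + step * (X_hr_rgb - Z @ R) @ R.T                  # spectral consistency
    return Z

# Toy run with random data just to show the shapes and residual behaviour.
rng = np.random.default_rng(0)
H, W, B, s = 16, 16, 8, 4
Z_true = rng.random((H, W, B))
R = rng.random((B, 3)); R /= R.sum(axis=0, keepdims=True)
Y_lr = downsample(Z_true, s)
X_rgb = Z_true @ R
Z0 = upsample(Y_lr, s)
Z_ref = back_projection_refine(Z0, Y_lr, X_rgb, R, s)
print("mean abs error before/after:", np.abs(Z0 - Z_true).mean(), np.abs(Z_ref - Z_true).mean())
```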
39

Fazel-Rezai, Reza, and Witold Kinsner. "Modified Gabor Wavelets for Image Decomposition and Perfect Reconstruction." International Journal of Cognitive Informatics and Natural Intelligence 3, no. 4 (October 2009): 19–33. http://dx.doi.org/10.4018/jcini.2009062302.

Full text of the source
Abstract:
This article presents a scheme for image decomposition and perfect reconstruction based on Gabor wavelets. Gabor functions have been used extensively in areas related to the human visual system due to their localization in space and bandlimited properties. However, since the standard two-sided Gabor functions are not orthogonal and lead to nearly singular Gabor matrices, they have been used in the decomposition, feature extraction, and tracking of images rather than in image reconstruction. In an attempt to reduce the singularity of the Gabor matrix and produce reliable image reconstruction, in this article, the authors used single-sided Gabor functions. Their experiments revealed that the modified Gabor functions can accomplish perfect reconstruction.
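
For readers unfamiliar with Gabor functions, here is a small sketch that builds a standard two-sided 2-D Gabor kernel and filters an image with it at several orientations; the parameter values are illustrative, and the single-sided variant used by the authors is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=31, sigma=4.0, theta=0.0, lam=8.0, psi=0.0, gamma=0.5):
    """Two-sided real Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / lam + psi)
    return envelope * carrier

# Decompose a toy image into responses at four orientations.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
responses = [fftconvolve(image, gabor_kernel(theta=t), mode="same")
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
print([r.shape for r in responses])
```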
40

Tang, Yi, Jin Qiu, and Ming Gao. "Fuzzy Medical Computer Vision Image Restoration and Visual Application." Computational and Mathematical Methods in Medicine 2022 (June 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/6454550.

Full text of the source
Abstract:
In order to shorten image registration time and improve imaging quality, this paper proposes a fuzzy medical computer vision image information recovery algorithm based on fuzzy sparse representation. First, by constructing a computer vision image acquisition model, the visual features of the fuzzy medical computer vision image are extracted, and feature registration of the image is carried out using 3D visual reconstruction technology. Then, by establishing a multidimensional histogram structure model, the grayscale features of fuzzy medical computer vision images are extracted with a wavelet multidimensional scale feature detection method. Finally, the fuzzy sparse representation algorithm is used to automatically optimize the fuzzy medical computer vision images. The experimental results show that the proposed method has a short image information registration time, less than 10 ms, and a high peak signal-to-noise ratio (PSNR): when the number of pixels is 700, the PSNR reaches 83.5 dB, making the method suitable for computer image restoration.
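
Since the entry reports PSNR values, a minimal reference implementation of PSNR may be useful; this is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(f"PSNR of the noisy copy: {psnr(ref, noisy):.2f} dB")
```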
41

Lu, Qiuju. "Local Defogging Algorithm for Improving Visual Impact in Image Based on Multiobjective Optimization." Mathematical Problems in Engineering 2022 (July 30, 2022): 1–9. http://dx.doi.org/10.1155/2022/7200657.

Full text of the source
Abstract:
Preprocessing of images is required in many industrial, social, and academic applications. Researchers have developed a number of techniques to improve the visual effect of images and to interpret visual content appropriately. The accuracy of visuals is important in cyber security, military and police organizations, and forensics, where investigators dig deep into imagery in search of evidence; if the visuals are unclear and the preprocessing is done incorrectly, wrong interpretations may follow. This paper proposes a local image defogging technique based on multiobjective optimization that improves both the visual effect of the image and its information entropy. A multiobjective function is selected to establish an image reconstruction model based on multiple objectives, and the model is used to reconstruct a single image and moderate the impact of noise and other interference factors in the original image. A color constancy model and an effective detail intensity model are also devised for image enhancement to recover the visual details. The atmospheric light value and transmittance are estimated with a physical model of atmospheric scattering, and a guided filter is used to refine the transmittance of a single image and improve the efficiency of image defogging. The dark channel prior method is used to realize local defogging of a single image and to design the local defogging algorithm. Experiments verify the optimization effect of the proposed algorithm in terms of information entropy and the CNI (color naturalness index) value; tone restoration is good, and the overall image quality improves. The defogging effect of the proposed algorithm is verified at both subjective and objective levels to check the efficacy of the proposed multiobjective model.
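
The entry builds on the dark channel prior; a compact sketch of that classical step (without the guided-filter refinement or the multiobjective model from the paper) is shown below, with the patch size and constants as illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Minimum over color channels, then a local minimum filter (dark channel prior).
    return minimum_filter(img.min(axis=2), size=patch)

def defog_dark_channel(img, omega=0.95, t0=0.1, patch=15):
    """He-style single-image defogging with the model I = J*t + A*(1 - t).

    img: HxWx3 float array in [0, 1]; returns the recovered scene radiance J.
    """
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate t(x) = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J, 0.0, 1.0)

rng = np.random.default_rng(0)
hazy = np.clip(0.6 * rng.random((64, 64, 3)) + 0.4, 0, 1)   # toy hazy image
print(defog_dark_channel(hazy).shape)
```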
42

Guo, Mingqiang, Zeyuan Zhang, Heng Liu, and Ying Huang. "NDSRGAN: A Novel Dense Generative Adversarial Network for Real Aerial Imagery Super-Resolution Reconstruction." Remote Sensing 14, no. 7 (March 24, 2022): 1574. http://dx.doi.org/10.3390/rs14071574.

Full text of the source
Abstract:
In recent years, more and more researchers have used deep learning methods for super-resolution reconstruction and have made good progress. However, most existing super-resolution reconstruction models generate low-resolution training images by downsampling high-resolution images with bicubic interpolation, and models trained on such data reconstruct real-world low-resolution images poorly. In the field of unmanned aerial vehicle (UAV) aerial photography, applying existing super-resolution models to real-world low-resolution aerial images captured by UAVs is prone to producing artifacts, texture-detail distortion, and other problems, because compression and fusion processing of the aerial images causes serious loss of texture detail in the obtained low-resolution images. To address this problem, this paper proposes a novel dense generative adversarial network for real aerial imagery super-resolution reconstruction (NDSRGAN), and we produce image datasets with paired high- and low-resolution real aerial remote sensing images. In the generative network, we use a multilevel dense network to connect the dense connections in the residual dense blocks. In the discriminative network, we use a matrix mean discriminator that discriminates the generated images locally, no longer judging the whole input image with a single value but instead judging it in chunks of regions. We also use the smoothL1 loss instead of the L1 loss used in most existing super-resolution models, to accelerate model convergence and reach the global optimum faster. Compared with traditional models, our model can better utilise the feature information in the original image and discriminate the image in patches. A series of experiments conducted on real aerial imagery datasets shows that our model achieves good performance on quantitative metrics and visual perception.
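
Since the entry highlights smoothL1 over L1, here is the standard definition in a minimal NumPy form; the beta transition point is the usual default, used here only for illustration.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic near zero, linear for large errors."""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta)
    return loss.mean()

pred = np.array([0.2, 1.5, -3.0])
target = np.array([0.0, 1.0, 0.0])
print("smooth L1:", smooth_l1(pred, target))
print("plain L1 :", np.abs(pred - target).mean())
```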
43

Liu, L., L. Xu, and J. Peng. "3D RECONSTRUCTION FROM UAV-BASED HYPERSPECTRAL IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1073–77. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1073-2018.

Full text of the source
Abstract:
Reconstructing a 3D profile from a set of UAV-based images yields hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, while each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is therefore an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database that provides the hyperspectral information and 3D position of each point. First, we adopt Visual SFM, a free and open-source software package based on the structure-from-motion (SFM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The product can be used to support further research and applications.
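
The paper attaches spectra to SfM points with a self-developed MATLAB program; the sketch below shows the general idea in Python, projecting 3D points through a pinhole camera and sampling the hyperspectral cube at the projected pixels. The camera intrinsics, pose, and cube dimensions are illustrative assumptions.

```python
import numpy as np

def sample_spectra(points_3d, K, R, t, hsi_cube):
    """Project 3D points into the hyperspectral image and sample their spectra.

    points_3d: (N, 3) world coordinates, K: (3, 3) intrinsics,
    R: (3, 3) rotation, t: (3,) translation, hsi_cube: (H, W, B) hyperspectral image.
    Returns (N, B) spectra; points projecting outside the image get NaNs.
    """
    H, W, B = hsi_cube.shape
    cam = R @ points_3d.T + t[:, None]                   # world -> camera coordinates
    uvw = K @ cam
    u = np.round(uvw[0] / uvw[2]).astype(int)            # pixel columns
    v = np.round(uvw[1] / uvw[2]).astype(int)            # pixel rows
    spectra = np.full((points_3d.shape[0], B), np.nan)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[2] > 0)
    spectra[inside] = hsi_cube[v[inside], u[inside]]
    return spectra

rng = np.random.default_rng(0)
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
pts = rng.uniform(-1, 1, size=(50, 3)) + np.array([0, 0, 5.0])   # points in front of the camera
cube = rng.random((64, 64, 20))
print(np.isnan(sample_spectra(pts, K, R, t, cube)).all(axis=1).sum(), "points fell outside")
```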
44

El Saer, A., C. Stentoumis, I. Kalisperakis, L. Grammatikopoulos, P. Nomikou, and O. Vlasopoulos. "3D RECONSTRUCTION AND MESH OPTIMIZATION OF UNDERWATER SPACES FOR VIRTUAL REALITY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 949–56. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-949-2020.

Full text of the source
Abstract:
Abstract. In this contribution, we propose a versatile image-based methodology for reconstructing high-fidelity 3D models of underwater scenes and integrating them into a virtual reality environment. Typically, underwater images suffer from colour degradation (blueish images) due to the propagation of light through water, which is a more absorbing medium than air, as well as the scattering of light on suspended particles. Other factors, such as artificial lights, also diminish the quality of the images and, thus, the quality of the image-based 3D reconstruction. Moreover, degraded images have a direct impact on the user's perception of the virtual environment, owing to geometric and visual degradation. Here, it is argued that these effects can be mitigated by image pre-processing algorithms and specialized filters. The impact of different filtering techniques on the images is evaluated in order to eliminate colour degradation and mismatches in the image sequences. The methodology consists of five sequential pre-processing steps: saturation enhancement, haze reduction, and Rayleigh distribution adaptation to de-haze the images; global histogram matching to minimize differences among images of the dataset; and image sharpening to strengthen the edges of the scene. The 3D reconstruction of the models is based on open-source structure-from-motion software. The models are optimized for virtual reality through mesh simplification, physically based rendering texture-map baking, and levels of detail. The results of the proposed methodology are qualitatively evaluated on image datasets captured on the seabed of Santorini island in Greece using an ROV platform.
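
One of the five pre-processing steps, global histogram matching, is easy to sketch; below is a minimal grayscale version in NumPy (per-channel application and the other four steps are left out), purely to convey the idea rather than reproduce the authors' pipeline.

```python
import numpy as np

def match_histograms_gray(source, reference):
    """Map the grayscale histogram of `source` onto that of `reference`.

    Both inputs are 2-D uint8 arrays; returns the matched uint8 image.
    """
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size          # empirical CDF of the source
    ref_cdf = np.cumsum(ref_counts) / reference.size       # empirical CDF of the reference
    # For each source intensity, find the reference intensity with the closest CDF value.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape).astype(np.uint8)

rng = np.random.default_rng(0)
dark_frame = (rng.random((64, 64)) * 100).astype(np.uint8)       # underexposed frame
bright_ref = (rng.random((64, 64)) * 200 + 40).astype(np.uint8)  # reference frame
matched = match_histograms_gray(dark_frame, bright_ref)
print(dark_frame.mean(), bright_ref.mean(), matched.mean())
```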
45

Huo, Jiaofei, and Xiaomo Yu. "Three-dimensional mechanical parts reconstruction technology based on two-dimensional image." International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142091000. http://dx.doi.org/10.1177/1729881420910008.

Full text of the source
Abstract:
With the development of computer technology, three-dimensional reconstruction based on visual images has become one of the research hotspots in computer graphics. Image-based three-dimensional reconstruction can be divided into reconstruction from single photographs and reconstruction from video. As an indirect three-dimensional modeling technology, this method is widely used in film and television production, cultural relic restoration, mechanical manufacturing, and medical health. This article studies and designs a stereo vision system based on two-dimensional image modeling technology. The system comprises image processing, camera calibration, stereo matching, three-dimensional point reconstruction, and model reconstruction. In the image-processing part, common image-processing methods, feature-point extraction algorithms, and edge-extraction algorithms are studied, and on this basis an interactive local corner extraction algorithm and an interactive local edge detection algorithm are proposed. It is found that the Harris algorithm can effectively remove features that carry little information and are prone to clustering. The feature points extracted from the images are matched using epipolar constraints; this method has high matching accuracy and short runtime, and the experiments achieved good matching results. Using the binocular stereo vision system platform, each step in the three-dimensional reconstruction process achieved high accuracy, thus realizing the three-dimensional reconstruction of the target object. Finally, based on the research on three-dimensional reconstruction of mechanical parts and the designed binocular stereo vision system platform, experimental results for edge detection, camera calibration, stereo matching, and three-dimensional model reconstruction are obtained, and the article is summarized, analyzed, and directions for future work are outlined.
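
A small OpenCV sketch of two of the steps named above, Harris corner detection and an epipolar-constraint check via the fundamental matrix; the synthetic image, thresholds, and simulated two-view geometry are illustrative assumptions, not the paper's setup.

```python
import cv2
import numpy as np

# --- Harris corner detection on a synthetic image with a bright square ---
img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 255
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
print("Harris corner candidates:", int((response > 0.01 * response.max()).sum()))

# --- Epipolar-constraint check on a simulated two-view geometry ---
rng = np.random.default_rng(0)
K = np.array([[300.0, 0, 64], [0, 300.0, 64], [0, 0, 1]])
X = rng.uniform(-1, 1, size=(40, 3)) + np.array([0, 0, 6.0])   # 3D points in front of both cameras
R2, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))               # second camera: small rotation
t2 = np.array([[-0.5], [0.0], [0.0]])                          # plus a horizontal baseline

def project(X, R, t):
    cam = R @ X.T + t
    uv = K @ cam
    return (uv[:2] / uv[2]).T

pts1 = project(X, np.eye(3), np.zeros((3, 1))).astype(np.float32)
pts2 = project(X, R2, t2).astype(np.float32)

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
h1 = np.hstack([pts1, np.ones((len(pts1), 1), dtype=np.float32)])
h2 = np.hstack([pts2, np.ones((len(pts2), 1), dtype=np.float32)])
residuals = np.abs(np.einsum("ij,jk,ik->i", h2, F, h1))        # x2^T F x1 should be near 0
print("mean |x2^T F x1| over matches:", residuals.mean())
```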
46

Sun, Zhikuan, Zheng Li, Mengchuan Sun, and Ziwei Hu. "Improved Image Super-Resolution Using Frequency Channel Attention and Residual Dense Networks." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012074. http://dx.doi.org/10.1088/1742-6596/2216/1/012074.

Full text of the source
Abstract:
Abstract. In real life, low-resolution images are widespread due to various influences. The resolution of an image reflects the amount of information it carries and its quality. Image super-resolution means reconstructing a high-resolution image from a low-resolution one, which improves image quality and recovers more information. This paper improves an existing deep learning-based super-resolution reconstruction model. We use nested residual dense connections to prompt the model to focus on the recovery of detailed textures and to accelerate convergence, and we use a frequency channel attention mechanism to weight the channels. Compared with other methods, including FSRCNN, VDSR, and MemNet, the proposed method achieves better results and visual improvements; the gains in PSNR and SSIM are especially pronounced on the Urban100 test dataset.
47

Huang, Yuhui, Shangbo Zhou, Yufen Xu, Yijia Chen, and Kai Cao. "Ref-MEF: Reference-Guided Flexible Gated Image Reconstruction Network for Multi-Exposure Image Fusion." Entropy 26, no. 2 (February 3, 2024): 139. http://dx.doi.org/10.3390/e26020139.

Full text of the source
Abstract:
Multi-exposure image fusion (MEF) is a computational approach that amalgamates multiple images, each captured at a different exposure level, into a single high-quality image that faithfully encapsulates the visual information from all the contributing images. Deep learning-based MEF methods often confront obstacles due to the inherent inflexibility of neural network structures, which makes it difficult to handle an unpredictable number of exposure inputs dynamically. In response to this challenge, we introduce Ref-MEF, a method for color-image multi-exposure fusion guided by a reference image and designed to deal with an uncertain number of inputs. We establish a reference-guided exposure correction (REC) module based on channel attention and spatial attention, which corrects input features and enhances pre-extracted features. The exposure-guided feature fusion (EGFF) module combines original image information and uses Gaussian filter weights for feature fusion while keeping the feature dimensions constant. Image reconstruction is completed through a gated context aggregation network (GCAN) and global residual learning (GRL). Our refined loss function incorporates gradient fidelity, producing high-dynamic-range images that are rich in detail and demonstrate superior visual quality. In evaluation metrics focused on image features, our method exhibits significant superiority, and it also leads in holistic assessments. It is worth emphasizing that as the number of input images increases, our algorithm exhibits notable computational efficiency.
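
A rough illustration of Gaussian exposure weighting for fusing an arbitrary number of exposures; this is the classic well-exposedness weight from the exposure-fusion literature, used here only to convey the idea, not the Ref-MEF module itself, and the sigma value is an illustrative assumption.

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Fuse N exposures of the same scene with Gaussian well-exposedness weights.

    stack: (N, H, W) grayscale exposures in [0, 1]; works for any N.
    """
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma**2))   # favor mid-tone pixels
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12      # normalize across exposures
    return (weights * stack).sum(axis=0)

rng = np.random.default_rng(0)
scene = rng.random((32, 32))
stack = np.clip(np.stack([scene * g for g in (0.3, 1.0, 2.5)]), 0, 1)  # under/normal/over-exposed
fused = fuse_exposures(stack)
print("fused image range:", fused.min().round(3), fused.max().round(3))
```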
48

Aurumskjöld, Marie-Louise, Marcus Söderberg, Fredrik Stålhammar, Kristina Vult von Steyern, Anders Tingberg, and Kristina Ydström. "Evaluation of an iterative model-based reconstruction of pediatric abdominal CT with regard to image quality and radiation dose." Acta Radiologica 59, no. 6 (August 20, 2017): 740–47. http://dx.doi.org/10.1177/0284185117728415.

Full text of the source
Abstract:
Background: In pediatric patients, computed tomography (CT) is important in the medical chain of diagnosing and monitoring various diseases. Because children are more radiosensitive than adults, they require minimal radiation exposure. One way to achieve this goal is to implement new technical solutions such as iterative reconstruction. Purpose: To evaluate the potential of a new iterative model-based reconstruction method (IMR) for pediatric abdominal CT at a low radiation dose and to determine whether it maintains or improves image quality compared with the current reconstruction method. Material and Methods: Forty pediatric patients underwent abdominal CT. Twenty patients were examined with the standard dose settings and twenty with a 32% lower radiation dose. Images from the standard examinations were reconstructed with a hybrid iterative reconstruction method (iDose4), and images from the low-dose examinations were reconstructed with both iDose4 and IMR. Image quality was evaluated subjectively by three observers according to modified EU image quality criteria and objectively from the noise measured in liver images. Results: Visual grading characteristics analyses showed no difference in image quality between the standard-dose examinations reconstructed with iDose4 and the low-dose examinations reconstructed with IMR. IMR showed lower image noise in the liver than iDose4. Inter- and intra-observer variance was low: the intraclass correlation coefficient was 0.66 (95% confidence interval = 0.60–0.71) for the three observers. Conclusion: IMR provided image quality equivalent or superior to the standard iDose4 method for pediatric abdominal CT, even with a 32% dose reduction.
49

Zhang, Xiu. "Superresolution Reconstruction of Remote Sensing Image Based on Middle-Level Supervised Convolutional Neural Network." Journal of Sensors 2022 (January 4, 2022): 1–14. http://dx.doi.org/10.1155/2022/2603939.

Full text of the source
Abstract:
Images have become an important carrier of visual information because they carry a large amount of information, are easy to distribute and store, and convey a strong visual impression; at the same time, image quality determines how completely and accurately that information is transmitted. This research discusses the super-resolution reconstruction of remote sensing images based on a convolutional neural network with middle-layer supervision. The designed network has 16 layers in total, with the seventh layer serving as the intermediate supervision layer. Traditional super-resolution reconstruction algorithms and convolutional neural networks have each been studied extensively, but few studies combine the two; since convolutional neural networks can capture the high-frequency features of an image and strengthen detailed information, their application to image reconstruction deserves study. This article therefore reviews the current research status of image super-resolution reconstruction and of convolutional neural networks separately. The middle supervision layer defines an error function that is used in the error back-propagation mechanism of the network to mitigate the vanishing-gradient problem of deep convolutional neural networks. Training proceeds in four stages: preprocessing of the original remote sensing images, extraction of temporal features, extraction of spatial features, and the reconstruction output layer. The last layer of the network draws on the single-frame remote sensing SRCNN algorithm: the output layer overlaps and adds the patches from the previous layer, averages the overlapping blocks to eliminate blocking artifacts, and finally obtains the high-resolution remote sensing image, which is also equivalent to a filtering operation. To let users compare super-resolution results on remote sensing images more clearly, a user interface for the remote sensing super-resolution software platform is implemented with the Qt5 interface library, built on the middle-layer-supervised convolutional neural network and the super-resolution reconstruction algorithm proposed in this paper. After 35 training epochs the network has converged, with the loss function reaching 0.017 and a cumulative training time of about 8 hours. This research helps to improve the visual quality of remote sensing images.
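
The output stage described above overlaps reconstructed patches and averages the overlapping blocks; a minimal sketch of that overlap-add averaging is shown below, with the patch size and stride as illustrative assumptions.

```python
import numpy as np

def overlap_add_average(patches, positions, out_shape):
    """Place patches at their top-left positions, summing values and averaging overlaps."""
    acc = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    for patch, (r, c) in zip(patches, positions):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch
        weight[r:r + h, c:c + w] += 1.0
    return acc / np.maximum(weight, 1e-12)          # average where patches overlap

# Toy example: cover a 16x16 image with 8x8 patches at stride 4 and reassemble it.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
positions = [(r, c) for r in range(0, 9, 4) for c in range(0, 9, 4)]
patches = [image[r:r + 8, c:c + 8] for r, c in positions]
reassembled = overlap_add_average(patches, positions, image.shape)
print("max reassembly error:", np.abs(reassembled - image).max())
```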
50

Zhang, Zhongqiang, Dahua Gao, Xuemei Xie, and Guangming Shi. "Dual-Channel Reconstruction Network for Image Compressive Sensing." Sensors 19, no. 11 (June 4, 2019): 2549. http://dx.doi.org/10.3390/s19112549.

Full text of the source
Abstract:
Existing compressive sensing (CS) reconstruction algorithms require enormous computation, and their reconstruction quality is not satisfying. In this paper, we propose a novel Dual-Channel Reconstruction Network (DC-Net) module and use it to build two CS reconstruction networks: the first recovers an image from traditional random under-sampling measurements (RDC-Net); the second recovers an image from CS measurements acquired by a fully connected measurement matrix (FDC-Net). In particular, the fully connected under-sampling method lets CS measurements represent the original images more effectively. In both proposed networks, a fully connected layer produces a preliminary reconstructed image, i.e., a linear mapping from the CS measurements to the preliminary reconstruction. The DC-Net module is then used to further improve the quality of the preliminary reconstruction. Within the DC-Net module, a residual-block channel improves reconstruction quality and a dense-block channel expedites calculation; their fusion improves reconstruction performance and reduces runtime simultaneously. Extensive experiments show that the two proposed networks outperform state-of-the-art CS reconstruction methods in PSNR and have excellent visual reconstruction effects.
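
To make the measurement-and-linear-mapping step concrete, here is a tiny sketch that compresses an image block with a random measurement matrix and forms a preliminary reconstruction with a linear (pseudo-inverse) mapping; in the paper this mapping is a learned fully connected layer, so the pseudo-inverse here is only a stand-in, and the block size and sampling rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
block_size, ratio = 16, 0.25                      # 16x16 blocks, 25% sampling rate
n = block_size * block_size
m = int(ratio * n)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random under-sampling measurement matrix
x = rng.random(n)                                 # vectorized image block
y = Phi @ x                                       # CS measurements (m << n)

# Preliminary reconstruction: a linear map from y back to the image domain.
# A trained network would learn this map; the pseudo-inverse is a simple surrogate.
W = np.linalg.pinv(Phi)
x_prelim = W @ y

print("measurements:", m, "of", n)
print("relative preliminary reconstruction error:", np.linalg.norm(x_prelim - x) / np.linalg.norm(x))
```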
