Journal articles on the topic 'Lowlight Restored Image Quality'

Consult the top 50 journal articles for your research on the topic 'Lowlight Restored Image Quality.'


1

Refhiansyah, Anggie Irfhan, Didi Suhaedi, and Yurika Permanasari. "Penggunaan Topsis untuk Menentukan Exposure Terbaik pada Kamera yang Memiliki Sensor M4/3." Bandung Conference Series: Mathematics 1, no. 1 (December 7, 2021): 1–6. http://dx.doi.org/10.29313/bcsm.v1i1.12.

Abstract:
Abstract. Cameras with small sensors such as Micro Four Thirds (M4/3) are attracting growing market interest because of their small, compact bodies. Their weakness is that image quality is not as good as that of cameras with larger sensors, especially in lowlight situations, because electronic noise increases at high ISO settings, making it difficult to determine the best exposure. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to select the best exposure conditions by ranking the alternatives to be compared against the criteria shutter speed, aperture, and ISO. The result of this research is a total preference value over all criteria; the highest value indicates the conditions that are ideal for taking pictures in lowlight. Validation is done by inspecting the histogram of the resulting image, which shows whether its quality is good.
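The TOPSIS ranking step described above is straightforward to sketch. The exposure alternatives, criteria weights, and benefit/cost designations below are illustrative assumptions, not values taken from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: one row per alternative, one column per criterion;
    # benefit[j] is True when a larger value of criterion j is better
    ncols = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[row[j] / norms[j] * weights[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores

# hypothetical exposure alternatives: (shutter speed 1/s, f-number, ISO)
alts = [[60, 2.8, 3200], [30, 2.8, 1600], [15, 4.0, 800]]
# assumed: faster shutter is a benefit; wider aperture (smaller f-number)
# and lower ISO (less noise) are treated as costs
scores = topsis(alts, weights=[0.4, 0.3, 0.3], benefit=[True, False, False])
best = scores.index(max(scores))  # highest preference value wins
```

The alternative with the highest closeness coefficient would then be validated against the image histogram, as the abstract describes.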
2

Nhu. "PARAMETRIC BLIND-DECONVOLUTION METHOD TO REMOVE IMAGE ARTIFACTS IN WAVEFRONT CODING IMAGING SYSTEMS." Journal of Military Science and Technology, no. 72A (May 10, 2021): 62–68. http://dx.doi.org/10.54939/1859-1043.j.mst.72a.2021.62-68.

Abstract:
The wavefront coding technique places an asymmetric phase mask in the pupil plane to extend the depth of field of an imaging system, followed by a digital processing step that produces the restored, high-quality final image. However, its main drawback is artifacts in the restored final images. In this paper, we propose a parametric blind-deconvolution method, based on maximizing the variance of the histogram of the restored images, that yields artifact-free restored images over a large range of defocus.
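The selection criterion named in the abstract, maximizing the histogram variance of candidate restorations, can be sketched as follows; the candidate intensity lists and trial PSF parameters are synthetic stand-ins, not the paper's data:

```python
def hist_variance(pixels, bins=16):
    # histogram of 8-bit intensities, then the variance of the bin counts
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    mean = sum(hist) / bins
    return sum((h - mean) ** 2 for h in hist) / bins

# toy stand-ins for restorations produced with different trial PSF parameters
candidates = {
    0.5: [10, 12, 11, 200, 198, 199, 10, 201],    # well-separated intensities
    1.5: [90, 100, 110, 120, 100, 95, 105, 115],  # mid-grey, low contrast
}
best_param = max(candidates, key=lambda k: hist_variance(candidates[k]))
```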
3

Sun, Guoxin, Xiong Yan, Huizhe Wang, Fei Li, Rui Yang, Jing Xu, Xin Liu, Xiaomao Li, and Xiao Zou. "Color restoration based on digital pathology image." PLOS ONE 18, no. 6 (June 28, 2023): e0287704. http://dx.doi.org/10.1371/journal.pone.0287704.

Abstract:
Objective: Protective color restoration of faded digital pathology images based on a color transfer algorithm. Methods: Twenty fresh tissue samples of invasive breast cancer from the pathology department of Qingdao Central Hospital in 2021 were screened. After HE staining, the HE-stained sections were irradiated with sunlight to simulate natural fading; each fading cycle lasted 7 days, for a total of 8 cycles. At the end of each cycle, the sections were digitally scanned to retain clear images, and the color changes of the sections during fading were recorded. The color transfer algorithm was applied to restore the color of the faded images; Adobe Lightroom Classic software presented the histogram of the image color distribution; the UNet++ cell recognition segmentation model was used to identify the color-restored images; the Natural Image Quality Evaluator (NIQE), Information Entropy (Entropy), and Average Gradient (AG) were applied to evaluate the quality of the restored images. Results: The restored image color met the diagnostic needs of pathologists. Compared with the faded images, the NIQE value decreased (P<0.05), the Entropy value increased (P<0.01), and the AG value increased (P<0.01). The cell recognition rate of the restored images improved significantly. Conclusion: The color transfer algorithm can effectively repair faded pathology images, restore the color contrast between nucleus and cytoplasm, improve image quality, meet diagnostic needs, and improve the cell recognition rate of the deep learning model.
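Of the three no-reference metrics used here, the Average Gradient (AG) is the simplest to sketch; the two toy grayscale patches below are illustrative, and a faithful AG implementation may differ in boundary handling:

```python
import math

def average_gradient(img):
    # mean magnitude of the horizontal/vertical forward differences;
    # larger values indicate sharper, more detailed images
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            dx = img[y][x + 1] - img[y][x]
            dy = img[y + 1][x] - img[y][x]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
            count += 1
    return total / count

faded    = [[100, 102, 101], [101, 103, 102], [100, 101, 103]]  # low contrast
restored = [[40, 180, 60], [200, 30, 220], [50, 210, 35]]       # high contrast
```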
4

Zhang, Hua, Yi Kai Shi, Kui Dong Huang, and Qing Chao Yu. "Image Restoration Method Based on Pre-Filtering for Cone-Beam Computed Tomography." Applied Mechanics and Materials 229-231 (November 2012): 1858–61. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1858.

Abstract:
To address image quality degradation in cone-beam computed tomography (CBCT) based on a flat panel detector (FPD), a constrained least squares iteration (CLSI) restoration method based on pre-filtering is proposed. First, the original projected images are denoised with a bilateral filtering algorithm. Then, the denoised projected images are restored with CLSI. Finally, the final restored images are obtained by adding back the noise images, obtained by subtracting the projected images before and after denoising, to the restored images. The experimental results show that the method effectively suppresses the noise amplification common in image restoration, and increases the edge sharpness and contrast-to-noise ratio (CNR) of the projected images and slice images. CBCT image quality is significantly improved with this method.
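The three-step bookkeeping in this abstract (denoise, restore, add the removed noise back) can be sketched on a 1-D signal. The moving average stands in for the bilateral filter and the unsharp-mask step stands in for CLSI; both are simplifying assumptions:

```python
def smooth(signal):
    # stand-in for the bilateral pre-filter: a 3-tap moving average
    return [sum(signal[max(0, i - 1):i + 2]) / len(signal[max(0, i - 1):i + 2])
            for i in range(len(signal))]

def restore(signal):
    # placeholder for the CLSI step; here a simple unsharp-mask sharpening
    s = smooth(signal)
    return [2 * x - y for x, y in zip(signal, s)]

projection = [10.0, 12.0, 55.0, 60.0, 58.0, 14.0, 11.0]

denoised  = smooth(projection)                              # step 1: pre-filter
noise_img = [a - b for a, b in zip(projection, denoised)]   # removed noise
restored_ = restore(denoised)                               # step 2: restore
final     = [r + n for r, n in zip(restored_, noise_img)]   # step 3: add noise back
```

Adding the noise image back keeps the restoration step from amplifying the noise that the pre-filter already isolated, which is the effect the authors report.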
5

Xie, Wang, and Li. "A Fragile Watermark Scheme for Image Recovery Based on Singular Value Decomposition, Edge Detection and Median Filter." Applied Sciences 9, no. 15 (July 26, 2019): 3020. http://dx.doi.org/10.3390/app9153020.

Abstract:
Many fragile watermark methods have been proposed for image recovery, and their performance has greatly improved. However, jagged edges and visual confusion still exist in the restored areas, and these problems need to be solved to achieve a better visual effect. In this paper, a method for improving recovery quality is proposed that adopts singular value decomposition (SVD) and edge detection for tamper detection, then uses a median filter for image recovery. Variable watermark information can be generated corresponding to the block classifications. With mapping and neighborhood adjustment, the tampered area can be correctly detected. Subsequently, we apply a filtering operation to the restored image obtained after the inverse watermark embedding process: a median filter smooths and removes noise, followed by minimum, maximum, and threshold operations to balance the image intensity. Finally, the corresponding pixels of the restored image are replaced with the filtered results. Experimental results for six different tampering attacks on eight test images show that the tamper detection method with edge detection identifies the tampered region correctly but has a higher false alarm rate than other methods. In addition, compared with three similar existing methods, using a median filter during image recovery not only improves the visual effect of the restored image but also objectively enhances its quality under most tampering attack conditions.
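The recovery-side filtering can be illustrated with a plain 3x3 median filter; the tampered-then-recovered patch below is synthetic:

```python
def median_filter_3x3(img):
    # 3x3 median on a grayscale image; border pixels are left unchanged
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 values
    return out

recovered = [
    [50, 50, 50, 50],
    [50, 255, 50, 50],   # 255: isolated recovery artifact
    [50, 50, 0, 50],     # 0: another outlier
    [50, 50, 50, 50],
]
smoothed = median_filter_3x3(recovered)
```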
6

Irshad, Muhammad, Camilo Sanchez-Ferreira, Sana Alamgeer, Carlos H. Llanos, and Mylène C. Q. Farias. "No-reference Image Quality Assessment of Underwater Images Using Multi-Scale Salient Local Binary Patterns." Electronic Imaging 2021, no. 9 (January 18, 2021): 265–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-265.

Abstract:
Images acquired in underwater scenarios may contain severe distortions due to light absorption and scattering, color distortion, poor visibility, and contrast reduction. Because of these degradations, researchers have proposed several algorithms to restore or enhance underwater images. One way to assess these algorithms' performance is to measure the quality of the restored/enhanced underwater images. Unfortunately, since reference (pristine) images are often not available, designing no-reference (blind) image quality metrics for this type of scenario is still a challenge. In fact, although the field of image quality assessment has advanced considerably in recent decades, estimating the quality of enhanced and restored images remains an open problem. In this work, we present a no-reference image quality metric for enhanced underwater images (NR-UWIQA) that uses an adapted version of the multi-scale salient local binary pattern operator to extract image features and a machine-learning approach to predict quality. The proposed metric was tested on the UID-LEIA database and showed good accuracy compared to other state-of-the-art methods. In summary, the proposed NR-UWIQA method can be used to evaluate the results of restoration techniques quickly and efficiently, opening a new perspective in underwater image restoration and quality assessment.
7

Rezgui, Hicham, Messaoud Maouni, Mohammed Lakhdar Hadji, and Ghassen Touil. "Three robust edges stopping functions for image denoising." Boletim da Sociedade Paranaense de Matemática 40 (January 20, 2022): 1–12. http://dx.doi.org/10.5269/bspm.45945.

Abstract:
In this paper, we present three robust edge-stopping functions for image enhancement. These edge-stopping functions have the advantage of effectively removing image noise while preserving true edges and other important features. The results show improved quality of the restored images compared with existing restoration models.
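The paper's three functions are not reproduced in the abstract, so as a stand-in the sketch below uses the classic Perona-Malik edge-stopping function g(s) = 1/(1 + (s/K)^2) in one explicit 1-D diffusion step; K and the test signal are arbitrary assumptions:

```python
def g(grad_mag, K=10.0):
    # edge-stopping function: ~1 in flat regions (smooth freely),
    # ~0 across strong edges (leave them alone)
    return 1.0 / (1.0 + (grad_mag / K) ** 2)

def diffuse_step(u, dt=0.2, K=10.0):
    # one explicit 1-D anisotropic diffusion step with g as conductance
    out = u[:]
    for i in range(1, len(u) - 1):
        right = u[i + 1] - u[i]
        left = u[i - 1] - u[i]
        out[i] = u[i] + dt * (g(abs(right), K) * right + g(abs(left), K) * left)
    return out

noisy = [0, 2, 1, 3, 100, 101, 99, 102]  # small noise plus a step edge
smoothed = diffuse_step(noisy)
```

After one step the small fluctuations shrink while the 97-level jump at index 4 is essentially untouched, which is exactly the noise-removal-with-edge-preservation behavior the abstract claims.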
8

Shemiakina, Julia, Elena Limonova, Natalya Skoryukina, Vladimir V. Arlazarov, and Dmitry P. Nikolaev. "A Method of Image Quality Assessment for Text Recognition on Camera-Captured and Projectively Distorted Documents." Mathematics 9, no. 17 (September 3, 2021): 2155. http://dx.doi.org/10.3390/math9172155.

Abstract:
In this paper, we consider the problem of identity document recognition in images captured with a mobile device camera. A high level of projective distortion leads to poor quality of the restored text images and, hence, to unreliable recognition results. We propose a novel, theoretically based method for estimating the projective distortion level at a restored image point. On this basis, we suggest a new method of binary quality estimation of projectively restored field images. The method analyzes the projective homography only and does not depend on the image size. The text font and height of an evaluated field are assumed to be predefined in the document template. This information is used to estimate the maximum level of distortion acceptable for recognition. The method was tested on a dataset of synthetically distorted field images. Synthetic images were created based on document template images from the publicly available dataset MIDV-2019. In the experiments, the method shows stable predictive values for different strings of one font and height. When used as a pre-recognition rejection method, it demonstrates a positive predictive value of 86.7% and a negative predictive value of 64.1% on the synthetic dataset. A comparison with other geometric quality assessment methods shows the superiority of our approach.
9

Wang, Huai Sheng. "A No Interference Optical Image Encryption by a Fresnel Diffraction and a Fourier Transformation." Advanced Materials Research 459 (January 2012): 461–64. http://dx.doi.org/10.4028/www.scientific.net/amr.459.461.

Abstract:
An interference-free optical image encryption scheme is put forward in this paper. The encryption process consists of a Fresnel diffraction and a Fourier transformation. A digital image coded with a random phase plate first undergoes a Fresnel diffraction. The diffraction function is enlarged and coded with another random phase mask. Finally, the enlarged function undergoes a Fourier transformation, and the real part of the transformed function is taken as the encrypted image. In the decryption process, the encrypted image first undergoes an inverse Fourier transformation, and the upper left corner of the transformed function is extracted. Owing to the space inversion of the transformed function, if the extracted function undergoes an inverse Fresnel diffraction, the original digital image can be restored from the final diffraction function. Because no interference step is involved in encryption or decryption, the optical system is relatively simple and the quality of the restored image is very good.
10

Wieslander, Håkan, Carolina Wählby, and Ida-Maria Sintorn. "TEM image restoration from fast image streams." PLOS ONE 16, no. 2 (February 1, 2021): e0246336. http://dx.doi.org/10.1371/journal.pone.0246336.

Abstract:
Microscopy imaging experiments generate vast amounts of data, and there is a high demand for smart acquisition and analysis methods. This is especially true for transmission electron microscopy (TEM) where terabytes of data are produced if imaging a full sample at high resolution, and analysis can take several hours. One way to tackle this issue is to collect a continuous stream of low resolution images whilst moving the sample under the microscope, and thereafter use this data to find the parts of the sample deemed most valuable for high-resolution imaging. However, such image streams are degraded by both motion blur and noise. Building on deep learning based approaches developed for deblurring videos of natural scenes we explore the opportunities and limitations of deblurring and denoising images captured from a fast image stream collected by a TEM microscope. We start from existing neural network architectures and make adjustments of convolution blocks and loss functions to better fit TEM data. We present deblurring results on two real datasets of images of kidney tissue and a calibration grid. Both datasets consist of low quality images from a fast image stream captured by moving the sample under the microscope, and the corresponding high quality images of the same region, captured after stopping the movement at each position to let all motion settle. We also explore the generalizability and overfitting on real and synthetically generated data. The quality of the restored images, evaluated both quantitatively and visually, show that using deep learning for image restoration of TEM live image streams has great potential but also comes with some limitations.
11

Zhu, Xifang, Ruxi Xiang, Feng Wu, and Xiaoyan Jiang. "Single image haze removal based on fusion darkness channel prior." Modern Physics Letters B 31, no. 19-21 (July 27, 2017): 1740037. http://dx.doi.org/10.1142/s0217984917400371.

Abstract:
To improve image quality and compensate for the deficiencies of haze removal, we present a novel fusion method. By analyzing the dark channel of each method, an effective dark channel model that takes the correlation information of each dark channel into account is constructed. This model is used to estimate the transmission map of the input image, which is then refined by a modified guided filter to further improve image quality. Finally, the radiance image is restored by combining the monochromatic atmospheric scattering model. Experimental results show that the proposed method not only effectively removes haze from the image, but also outperforms other haze removal methods.
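The dark channel prior underlying this fusion method can be sketched directly; the toy image, airlight, and omega follow the commonly used formulation t = 1 - omega * dark(I / A), and the specific numbers are made up:

```python
def dark_channel(img, patch=1):
    # per-pixel minimum over RGB, then minimum over a local neighbourhood
    h, w = len(img), len(img[0])
    mins = [[min(px) for px in row] for row in img]
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dark[y][x] = min(
                mins[yy][xx]
                for yy in range(max(0, y - patch), min(h, y + patch + 1))
                for xx in range(max(0, x - patch), min(w, x + patch + 1)))
    return dark

def transmission(img, airlight, omega=0.95, patch=1):
    # t(x) = 1 - omega * dark_channel(I / A): hazier pixels get lower t
    norm = [[tuple(c / a for c, a in zip(px, airlight)) for px in row]
            for row in img]
    return [[1.0 - omega * d for d in row] for row in dark_channel(norm, patch)]

# 2x2 toy RGB image: first pixel is nearly haze-coloured, second is dark
img = [[(220, 225, 230), (10, 20, 15)],
       [(215, 220, 228), (12, 18, 20)]]
A = (230, 235, 240)                   # assumed atmospheric light
t = transmission(img, A, patch=0)     # patch=0: pixel-wise dark channel
```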
12

Karam, Ghada Sabah. "Blurred Image Restoration with Unknown Point Spread Function." Al-Mustansiriyah Journal of Science 29, no. 1 (October 31, 2018): 189. http://dx.doi.org/10.23851/mjs.v29i1.335.

Abstract:
Image blurring is caused by a number of factors, such as defocus, motion, and limited sensor resolution. Most existing blind deconvolution research concentrates on recovering a single blur kernel for the entire image. We propose an adaptive, blind, no-reference image quality assessment method for estimating the blur function (i.e., the point spread function, PSF) of images acquired under low-lighting conditions and of defocused images, using Bayesian blind deconvolution. It is based on predicting a sharp version of a blurry input image and using the two images to solve for the PSF. The estimation is done by trial and error until an acceptable restored image quality is obtained. The quality of the images is assessed through a set of quality metrics. Our method is fast and produces accurate results.
13

Котов, Денис. "КРИТЕРІАЛЬНІ ПОКАЗНИКИ ЯКОСТІ ОБРОБКИ МУЛЬТИСЕНСОРНОЇ ІНФОРМАЦІЇ В УМОВАХ ДЕСТАБІЛІЗУЮЧИХ ВПЛИВІВ." Collection of scientific works of Odesa Military Academy, no. 18 (March 3, 2023): 119–26. http://dx.doi.org/10.37129/2313-7509.2022.18.119-126.

Abstract:
The paper considers an approach to determining criterion indicators of the quality of digital information processing in a car's multi-sensor information-controlled system under the influence of destabilizing factors. It is established that processing the information that enters the information-controlled system in digital form from various sensors involves computing discrete convolutions of the matrices of the relevant operators with arrays of initial data. This process includes forming an array of responses to the observed information flow, an array of responses to the restored information flow, and forming the conditions for restoration of the observed information flow. The approach to assessing the quality of digital information processing is analyzed in terms of subjective and objective assessments and the methods of obtaining them. These methods are based on direct inversion of the estimated correlation matrix of random realizations of the observed multidimensional information process. Systems with an inverse processing operator for a multidimensional information array are quite sensitive to destabilizing influences: to random perturbations of the coefficients of the recovery (processing) operator and to internal system noise and perturbations. The need to take these destabilizing influences into account when processing digital information in the car's information-controlled system necessitates a general methodology for studying the influence of destabilizing factors on image processing quality.
Based on inverse synthesis methods, analytical expressions for criterion quality indicators of digital image processing are proposed; they are not limited by an upper bound on the quality assessment and allow analytical evaluation and quantitative determination of the quality of discrete images restored by direct inversion at the system synthesis stage. Keywords: car, information-controlled system, image, restoration, restoration operator, criterion indicators, multisensory information, quality of information processing, internal noise, destabilizing factors.
14

Liao, Qingtao. "Research on Medical Image Denoising Algorithm Based on Deep Learning Image Quality Evaluation." Journal of Medical Imaging and Health Informatics 11, no. 5 (May 1, 2021): 1384–93. http://dx.doi.org/10.1166/jmihi.2021.3387.

Abstract:
Improving the clarity of medical images is of great significance for doctors to quickly diagnose and analyze disease. However, existing image denoising algorithms depend heavily on the size of the data set and on how well the loss function optimizes, and their parameters are difficult to tune. Therefore, a medical image denoising algorithm based on deep-learning image quality evaluation is proposed. First, the convolutional layers of a convolutional neural network and the output of its first fully connected layer are used as perceptual features. By stacking the perceptual loss and the pixel loss, and weighting the perceptual loss, low- and high-level losses are fused in the denoising network, so that the restored image better matches human perception. Second, dilated convolution is introduced into the denoising network: dilated and ordinary convolution kernels are used together in the first layer to enlarge the receptive field. Then, feature extraction and quality-score regression are integrated into the same optimization process. Finally, directly training a reconstructed image is replaced by training a noise filter, which reduces training difficulty and speeds up the convergence of the network parameters. Experimental results show that the PSNR and SSIM of the proposed method are 31.63 dB and 89.15%, respectively. Compared with other recent image denoising methods, the proposed method achieves a better denoising effect.
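The PSNR figure quoted above is computed as follows; the two tiny patches are arbitrary examples, not data from the paper:

```python
import math

def psnr(ref, test, peak=255.0):
    # peak signal-to-noise ratio (dB) between two same-size grayscale images
    flat_r = [p for row in ref for p in row]
    flat_t = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_t)) / len(flat_r)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

clean    = [[100, 120], [140, 160]]
denoised = [[104, 115], [145, 158]]
score = psnr(clean, denoised)
```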
15

Mansoori, M. A., M. R. Mosavi, and M. H. Bisjerdi. "Regularization-Based Semi-Blind Image Deconvolution Using an Improved Function for PMMW Images Application." Journal of Circuits, Systems and Computers 27, no. 07 (March 26, 2018): 1850107. http://dx.doi.org/10.1142/s0218126618501074.

Abstract:
Image deconvolution is a method for reversing the distortion in an imaging system. It is widely used in removing blur and noise from a degraded image. This is an ill-posed inverse problem, and one should use regularization techniques to solve this problem. Regularization functions play an important role in finding the desired solution. In this paper, a new function in wavelet domain is proposed, which is useful in blind deconvolution. In addition, a simple algorithm is used to obtain the restored image and unknown point spread function. The proposed approach is tested on three standard images and then compared with the previous methods using standard metrics. Real Passive Millimeter Wave (PMMW) images are also used to obtain the sharp deblurred images. Simulation results show that the proposed method can improve the quality of the restored image.
16

Balakin, D. A., A. V. Belinsky, and A. S. Chirkin. "Correlations of Multiplexed Quantum Ghost Images and Improvement of the Quality of Restored Image." Journal of Russian Laser Research 38, no. 2 (March 2017): 164–72. http://dx.doi.org/10.1007/s10946-017-9630-z.

17

Raihan A, Jarina, Pg Abas, and Liyanage De Silva. "Role of Restored Underwater Images in Underwater Imaging Applications." Applied System Innovation 4, no. 4 (November 25, 2021): 96. http://dx.doi.org/10.3390/asi4040096.

Abstract:
Underwater images are extremely sensitive to the distortions of an aquatic environment, with absorption, scattering, polarization, diffraction, and low natural-light penetration representing common problems caused by sea water. Because of this degradation in quality, the usefulness of the acquired images for underwater applications may be limited. An effective method of restoring underwater images is demonstrated that considers the wavelengths of red, green, and blue light together with attenuation and backscattering coefficients. The results of the restoration method are applied to various underwater tasks, in particular edge detection, Speeded-Up Robust Feature (SURF) detection, and image classification using machine learning. It is shown that more edges and more SURF points are detected as a result of using the method. Applying the method to restore underwater images in classification tasks on underwater image datasets gives an accuracy of up to 89% using a simple machine-learning algorithm. These results are significant because they demonstrate that the restoration method can be implemented in underwater systems for various purposes.
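The wavelength-dependent restoration the abstract describes can be sketched per pixel with the standard image-formation model I = J*t + B*(1 - t), where t = exp(-beta * d). The attenuation coefficients, veiling light, and distance below are illustrative assumptions, not the paper's values:

```python
import math

def restore_pixel(pixel, backscatter, beta, distance):
    # invert I = J*t + B*(1 - t) per colour channel, with t = exp(-beta*d)
    restored = []
    for c, b, bt in zip(pixel, backscatter, beta):
        t = math.exp(-bt * distance)
        restored.append(min(255.0, max(0.0, (c - b * (1.0 - t)) / t)))
    return restored

beta = (0.60, 0.10, 0.05)        # assumed attenuation (1/m): red dies fastest
B    = (10.0, 120.0, 150.0)      # assumed blue-green veiling light
degraded = (12.0, 140.0, 160.0)  # a reddish object looks blue-green at depth
restored = restore_pixel(degraded, B, beta, distance=3.0)
```

The red channel receives the strongest boost, which is what lets downstream edge and SURF detection find more features.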
18

Wang, Kun Ling. "The Image Restoration Method Based on Patch Sparsity Propagation in Big Data Environment." Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 7 (November 20, 2018): 1072–76. http://dx.doi.org/10.20965/jaciii.2018.p1072.

Abstract:
The traditional image restoration method uses only the original image data as a dictionary for the sparse representation of the blocks to be processed, which leads to a poorly adapted dictionary and a blurred restored image. Moreover, only the effective information around the restored block is used for sparse coding, without considering the characteristics of the image blocks, so the prior knowledge is limited. Therefore, a new image restoration method based on structural sparsity propagation in a big data environment is proposed. A clustering method divides the image into small image blocks with similar structures, the images are classified according to their features, and adaptive dictionaries are trained for the different feature types of image blocks. According to the characteristics of the blocks to be restored, the restoration order is determined through sparse structural propagation analysis, and restoration is achieved by sparse coding. The design method is programmed, and image restoration in a big data environment is realized by the designed system. Experimental results show that the proposed method restores images effectively, with high quality and efficiency.
19

Wang, Zhiyu, Jiayan Zhuang, Sichao Ye, Ningyuan Xu, Jiangjian Xiao, and Chengbin Peng. "Image Restoration Quality Assessment Based on Regional Differential Information Entropy." Entropy 25, no. 1 (January 10, 2023): 144. http://dx.doi.org/10.3390/e25010144.

Abstract:
With the development of image recovery models, especially those based on adversarial and perceptual losses, the detailed texture portions of images are being recovered more naturally. However, these restored images are similar but not identical in detail texture to their reference images. With traditional image quality assessment methods, results with better subjective perceived quality often score lower in objective scoring. Assessment methods suffer from subjective and objective inconsistencies. This paper proposes a regional differential information entropy (RDIE) method for image quality assessment to address this problem. This approach allows better assessment of similar but not identical textural details and achieves good agreement with perceived quality. Neural networks are used to reshape the process of calculating information entropy, improving the speed and efficiency of the operation. Experiments conducted with this study’s image quality assessment dataset and the PIPAL dataset show that the proposed RDIE method yields a high degree of agreement with people’s average opinion scores compared with other image quality assessment metrics, proving that RDIE can better quantify the perceived quality of images.
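The building block of RDIE is the Shannon entropy of an intensity histogram, which the paper computes over local regions of difference images; a minimal global-histogram sketch, with made-up inputs:

```python
import math

def shannon_entropy(values, bins=8, lo=0, hi=256):
    # entropy (bits) of the intensity histogram of `values`
    hist = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

flat   = [128] * 64              # all mass in one bin -> zero entropy
varied = list(range(0, 256, 4))  # spread evenly over all 8 bins -> 3 bits
```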
20

Wang, Shuai, Huiqin Rong, Chunyuan He, Libo Zhong, and Changhui Rao. "Multiframe Correction Blind Deconvolution for Solar Image Restoration." Publications of the Astronomical Society of the Pacific 134, no. 1036 (June 1, 2022): 064502. http://dx.doi.org/10.1088/1538-3873/ac6445.

Abstract:
A series of short-exposure images is often used for high-quality, high-resolution astronomical observations rich in small-scale structure. Post-processing of closed-loop adaptive optics (AO) images from ground-based astronomical telescopes plays an important role in astronomical observation because it further improves image quality after AO correction. These images show several main characteristics: a randomly spatially varying blur kernel, an unclear model after AO correction, unclear physical characteristics of the observed objects, etc. Our goal is to propose a multiframe correction blind deconvolution (MFCBD) algorithm to restore AO closed-loop solar images. MFCBD introduces a denoiser and a corrector to help estimate the intermediate latent image, and proposes using an Lq norm of the kernel as the sparsity constraint to acquire a compact blur kernel. MFCBD also uses the half-quadratic splitting strategy to optimize the objective function, which makes the algorithm not only simple to solve but also easy to adapt to different fidelity terms and prior terms. In tests on three data sets observed from the photosphere and chromosphere of the Sun, MFCBD not only restored clearer and more detailed images, but also converged smoothly and monotonically in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) after a few iterations. Taking the speckle-reconstructed image as a reference, the image restored by our method performs best in both PSNR and SSIM compared with the state-of-the-art traditional methods OBD and BATUD.
21

Sai, S. V. "A method for assessing photorealistic image quality with high resolution." Computer Optics 46, no. 1 (February 2022): 121–29. http://dx.doi.org/10.18287/2412-6179-co-899.

Abstract:
The article proposes a method for assessing photorealistic image quality based on comparing the detail coefficients of the original and distorted images. An algorithm for identifying fine structures in the original image uses segmentation of active pixels, which include point objects, thin lines, and texture fragments. The number of active pixels is expressed as a fine detail factor (FDF), defined as the ratio of active pixels to the total number of image pixels. The same algorithm is used to compute the FDF of the distorted image, and the deterioration in image quality is then estimated by comparing the two values. A distinctive feature of the method is that the identification of fine structures and the segmentation of active pixels are performed in the normalized N-CIELAB system. The algorithm also accounts for the influence of false microstructures on the estimates for the restored image. Features of SRCNN neural networks for high-quality increases in image resolution with restoration of fine structures are considered. Results of analyzing the quality of the enlarged images with the traditional PSNR and SSIM metrics, as well as with the proposed method, are also presented.
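The fine detail factor (FDF) is simply the share of "active" pixels; the activity rule below (a 4-neighbour difference above a threshold, in plain grayscale rather than N-CIELAB) is a simplifying assumption:

```python
def fine_detail_factor(img, threshold=20):
    # FDF = active pixels / total pixels; a pixel counts as active when
    # its largest difference to a 4-neighbour exceeds the threshold
    h, w = len(img), len(img[0])
    active = 0
    for y in range(h):
        for x in range(w):
            nbrs = [img[yy][xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < h and 0 <= xx < w]
            if max(abs(img[y][x] - n) for n in nbrs) > threshold:
                active += 1
    return active / (h * w)

detailed = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]            # checkerboard
blurred  = [[120, 128, 120], [128, 124, 128], [120, 128, 120]]  # washed out
```

Comparing the two FDF values quantifies how much fine structure an upscaler or restorer lost, which is the paper's quality signal.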
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Jiefei, Yupeng Chen, Tao Li, Jian Lu, and Lixin Shen. "A Residual-Based Kernel Regression Method for Image Denoising." Mathematical Problems in Engineering 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/5245948.

Full text
Abstract:
We propose a residual-based method for denoising images corrupted by Gaussian noise. In the method, by combining bilateral filter and structure adaptive kernel filter together with the use of the image residuals, the noise is suppressed efficiently while the fine features, such as edges, of the images are well preserved. Our experimental results show that, in comparison with several traditional filters and state-of-the-art denoising methods, the proposed method can improve the quality of the restored images significantly.
APA, Harvard, Vancouver, ISO, and other styles
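The bilateral filter that the abstract above combines with image residuals can be sketched as follows. This is a minimal NumPy illustration with assumed parameter values (radius, spatial and range sigmas) and a toy gradient image, not the authors' implementation; the residual is simply the difference between the noisy input and the filtered output.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Edge-preserving smoothing: weights combine spatial and range closeness."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# Residual idea: filter the noisy image, then inspect the residual
# (noisy - filtered) for structure that the filter removed by mistake.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))        # smooth gradient
noisy = clean + rng.normal(0, 15, clean.shape)
denoised = bilateral_filter(noisy)
residual = noisy - denoised
```

On this toy example the mean squared error against the clean image drops after filtering, while the gradient (a "fine feature" in the abstract's sense) is largely preserved.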
23

Jiao, Qingliang, Ming Liu, Pengyu Li, Liquan Dong, Mei Hui, Lingqin Kong, and Yuejin Zhao. "Underwater Image Restoration via Non-Convex Non-Smooth Variation and Thermal Exchange Optimization." Journal of Marine Science and Engineering 9, no. 6 (May 25, 2021): 570. http://dx.doi.org/10.3390/jmse9060570.

Full text
Abstract:
The quality of underwater images is an important problem for resource detection. However, the light scattering and plankton in water can impact the quality of underwater images. In this paper, a novel underwater image restoration based on non-convex, non-smooth variation and thermal exchange optimization is proposed. Firstly, the underwater dark channel prior is used to estimate the rough transmission map. Secondly, the rough transmission map is refined by the proposed adaptive non-convex non-smooth variation. Then, Thermal Exchange Optimization is applied to compensate for the red channel of underwater images. Finally, the restored image can be estimated via the image formation model. The results show that the proposed algorithm can output high-quality images, according to qualitative and quantitative analysis.
APA, Harvard, Vancouver, ISO, and other styles
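The underwater dark channel prior step in the abstract above builds on the classic dark channel transmission estimate. A minimal sketch follows; the window size, omega factor, and t0 floor are conventional assumptions, and the paper's non-convex refinement and red-channel compensation are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Per-pixel minimum over RGB, then a local spatial minimum filter."""
    return minimum_filter(img.min(axis=2), size=window)

def estimate_transmission(img, atmos, window=15, omega=0.95):
    """Rough transmission t(x) = 1 - omega * dark_channel(I / A)."""
    normed = img / np.maximum(atmos, 1e-6)
    return 1.0 - omega * dark_channel(normed, window)

rng = np.random.default_rng(1)
img = rng.uniform(0.2, 1.0, (32, 32, 3))   # toy "hazy" image in [0, 1]
atmos = np.array([0.9, 0.9, 0.9])          # assumed atmospheric light
t = estimate_transmission(img, atmos)

# Restored radiance via the image formation model: J = (I - A) / max(t, t0) + A
t0 = 0.1
J = (img - atmos) / np.maximum(t, t0)[..., None] + atmos
```

The final line is the generic image formation inversion the abstract refers to; the clamping constant t0 prevents division by near-zero transmission.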
24

Bavrina, A. Y., and V. A. Fedoseev. "Semi-fragile watermarking with recovery capabilities for HGI compression method." Computer Optics 46, no. 1 (February 2022): 103–12. http://dx.doi.org/10.18287/2412-6179-co-1021.

Full text
Abstract:
The article proposes a new semi-fragile watermarking system with tamper localization and recovery capabilities after distortions, adapted for the HGI image compression method. The system uses a hierarchical image structure when embedding and replaces the stage of post-interpolation residual quantization with a special quantizer based on quantization index modulation. As a result, the protected image becomes resistant to HGI compression with an adjustable quality parameter. The proposed watermarking system allows an image to be restored after distortions with acceptable quality. In this case, the authentication part and the recovery part operate at different hierarchical levels. The developed watermarking system, compatible with the HGI compression method, may be used to protect remote sensing and medical images from malicious distortion.
APA, Harvard, Vancouver, ISO, and other styles
25

Kinge, Sanjaykumar, B. Sheela Rani, and Mukul Sutaone. "Restored texture segmentation using Markov random fields." Mathematical Biosciences and Engineering 20, no. 6 (2023): 10063–89. http://dx.doi.org/10.3934/mbe.2023442.

Full text
Abstract:
Texture segmentation plays a crucial role in the domain of image analysis and its recognition. Noise is inextricably linked to images, just like it is with every signal received by sensing, which has an impact on how well the segmentation process performs in general. Recent literature reveals that the research community has started recognizing the domain of noisy texture segmentation for its work towards solutions for the automated quality inspection of objects, decision support for biomedical images, facial expressions identification, retrieving image data from a huge dataset and many others. Motivated by the latest work on noisy textures, during our work being presented here, Brodatz and Prague texture images are contaminated with Gaussian and salt-n-pepper noise. A three-phase approach is developed for the segmentation of textures contaminated by noise. In the first phase, these contaminated images are restored using techniques with excellent performance as per the recent literature. In the remaining two phases, segmentation of the restored textures is carried out by a novel technique developed using Markov Random Fields (MRF) and objective customization of the Median Filter based on segmentation performance metrics. When the proposed approach is evaluated on Brodatz textures, an improvement of up to 16% segmentation accuracy for salt-n-pepper noise with 70% noise density and 15.1% accuracy for Gaussian noise (with a variance of 50) has been made in comparison with the benchmark approaches. On Prague textures, accuracy is improved by 4.08% for Gaussian noise (with variance 10) and by 2.47% for salt-n-pepper noise with 20% noise density. The approach in the present study can be applied to a diversified class of image analysis applications spanning a wide spectrum such as satellite images, medical images, industrial inspection, geo-informatics, etc.
APA, Harvard, Vancouver, ISO, and other styles
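The restoration-before-segmentation idea in the abstract above can be illustrated with a standard median filter on salt-and-pepper noise. The paper's objectively customized median filter and MRF segmentation are not reproduced here; this is a generic sketch with an assumed noise density and a flat test image.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean.copy()

# Salt-and-pepper contamination: flip a fraction of pixels to 0 or 255.
density = 0.2
mask = rng.random(clean.shape) < density
salt = rng.random(clean.shape) < 0.5
noisy[mask & salt] = 255.0
noisy[mask & ~salt] = 0.0

# A 3x3 median filter suppresses isolated impulses while keeping edges,
# which is why it is the classic restorer for this noise model.
restored = median_filter(noisy, size=3)
```

At 20% density almost every corrupted pixel is outvoted by its 3x3 neighborhood, so the restored image is nearly identical to the clean one.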
26

Ruikar, Dr Sachin D., and Ms Vrushali N. Raut. "A Comparison of Filtering Techniques for Image Quality Improvement in Computed Tomography." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 7, no. 3 (June 10, 2013): 670–76. http://dx.doi.org/10.24297/ijct.v7i3.3445.

Full text
Abstract:
Computed Tomography (CT) is an important and most common modality in medical imaging. In CT examinations there is a trade-off between radiation dose and image quality. If the radiation dose is decreased, the noise will unavoidably increase, degrading the diagnostic value of the CT image; if the radiation dose is increased, the associated risk of cancer also increases, especially in paediatric applications. Image filtering techniques perform image pre-processing to improve the quality of images. These techniques serve two major purposes: one is to maintain a low radiation dose, and another is to make subsequent phases of image analysis, such as segmentation or recognition, easier or more effective. This paper presents the effect of noise reduction filters on CT images, particularly the anisotropic diffusion filter and the Gaussian filter in combination with the Prewitt operator. Anisotropic diffusion is a selective and nonlinear filtering technique which filters an image within object boundaries and not across edges. Simulation results have shown that the anisotropic diffusion filter can effectively smooth a noisy background, yet well preserve edges and fine details in the restored image. The Gaussian filter smoothens the image while the Prewitt operator detects the edges, so the combination of the two works like a nonlinear filter. Thus these two filtering techniques improve image quality and allow the use of a low-dose CT protocol.
APA, Harvard, Vancouver, ISO, and other styles
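The anisotropic diffusion filter compared in the abstract above is typically the Perona-Malik scheme, sketched below. The iteration count, conductance parameter kappa, step size, and step-edge test image are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik: diffuse strongly in flat areas, weakly across edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences (Neumann boundary via edge padding).
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        # Conduction coefficient g = exp(-(|grad| / kappa)^2) per direction:
        # near 1 for small differences (noise), near 0 across strong edges.
        u = u + lam * sum(np.exp(-(d / kappa) ** 2) * d
                          for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(0)
step = np.hstack([np.zeros((32, 16)), 200 * np.ones((32, 16))])  # sharp edge
noisy = step + rng.normal(0, 10, step.shape)
smoothed = anisotropic_diffusion(noisy)
```

The flat regions are smoothed (noise variance drops) while the 200-level step edge survives almost untouched, which is exactly the selectivity the abstract describes.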
27

Yue, Ronggang, Humei Wang, Ting Jin, Yuting Gao, Xiaofeng Sun, Tingfei Yan, Jie Zang, Ke Yin, and Shitao Wang. "Image Motion Measurement and Image Restoration System Based on an Inertial Reference Laser." Sensors 21, no. 10 (May 11, 2021): 3309. http://dx.doi.org/10.3390/s21103309.

Full text
Abstract:
Satellites have many high-, medium-, and low-frequency micro-vibration sources that lead to optical axis jitter of the optical load and subsequently degrade remote sensing image quality. To address this problem, this paper developed an image motion detection and restoration method based on an inertial reference laser, and describes its principle and key components. To verify the feasibility and performance of this method, this paper also built an image motion measurement and restoration system based on an inertial reference laser, which comprised a camera (including the inertial reference laser unit and a Hartmann wavefront sensor), an integrating sphere, a simulated image target, a parallel light pipe, a vibration isolation platform, a vibration generator, and a 6-degrees-of-freedom platform. The image restoration principle is also described. The background noise in the experiment environment was measured, and an image motion measurement accuracy experiment was performed. Verification experiments of image restoration were also conducted under various working conditions. The experiment results showed that the error of image motion detection based on the inertial reference laser was less than 0.12 pixels (root mean square). By using the image motion data to improve image quality, the modulation transfer function (MTF) of the restored image was increased to 1.61–1.88 times that of the original image MTF. The image motion data could be used as feedback to the fast steering mirror to compensate for the satellite jitter in real time and to directly obtain high-quality images.
APA, Harvard, Vancouver, ISO, and other styles
28

Saleem, Shahid, Shahbaz Ahmad, and Junseok Kim. "Total Fractional-Order Variation-Based Constraint Image Deblurring Problem." Mathematics 11, no. 13 (June 26, 2023): 2869. http://dx.doi.org/10.3390/math11132869.

Full text
Abstract:
When deblurring an image, ensuring that the restored intensities are strictly non-negative is crucial. However, current numerical techniques often fail to consistently produce favorable results, leading to negative intensities that contribute to significant dark regions in the restored images. To address this, our study proposes a mathematical model for non-blind image deblurring based on total fractional-order variational principles. Our proposed model not only guarantees strictly positive intensity values but also imposes limits on the intensities within a specified range. By removing negative intensities or constraining them within the prescribed range, we can significantly enhance the quality of deblurred images. The key concept in this paper involves converting the constrained total fractional-order variational-based image deblurring problem into an unconstrained one through the introduction of the augmented Lagrangian method. To facilitate this conversion and improve convergence, we describe new numerical algorithms and introduce a novel circulant preconditioned matrix. This matrix effectively overcomes the slow convergence typically encountered when using the conjugate gradient method within the augmented Lagrangian framework. Our proposed approach is validated through computational tests, demonstrating its effectiveness and viability in practical applications.
APA, Harvard, Vancouver, ISO, and other styles
29

Ávila, Francisco J., Jorge Ares, María C. Marcellán, María V. Collados, and Laura Remón. "Iterative-Trained Semi-Blind Deconvolution Algorithm to Compensate Straylight in Retinal Images." Journal of Imaging 7, no. 4 (April 16, 2021): 73. http://dx.doi.org/10.3390/jimaging7040073.

Full text
Abstract:
The optical quality of an image depends on both the optical properties of the imaging system and the physical properties of the medium in which the light travels from the object to the final imaging sensor. The analysis of the point spread function of the optical system is an objective way to quantify image degradation. In retinal imaging, the presence of corneal or crystalline lens opacifications spreads the light over wide angular distributions. If the mathematical operator that degrades the image is known, the image can be restored through deconvolution methods. In the particular case of retinal imaging, this operator may be unknown (or only partially known) due to the presence of cataracts, corneal edema, or vitreous opacification. In those cases, blind deconvolution theory provides useful results to restore important spatial information of the image. In this work, a new semi-blind deconvolution method has been developed by training an iterative process with the Glare Spread Function kernel, based on the Richardson-Lucy deconvolution algorithm, to compensate for the veiling glare effect in retinal images due to intraocular straylight. The method was first tested with simulated retinal images generated from a straylight eye model and then applied to a real retinal image dataset composed of healthy subjects and patients with glaucoma and diabetic retinopathy. Results showed the capacity of the algorithm to detect and compensate for the veiling glare degradation, improving the image sharpness by up to 1000% in the case of healthy subjects and up to 700% in the pathological retinal images. This image quality improvement allows image segmentation processing to be performed with restored hidden spatial information after deconvolution.
APA, Harvard, Vancouver, ISO, and other styles
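The Richardson-Lucy iteration underlying the semi-blind method in the abstract above can be sketched for the plain non-blind case with a known Gaussian kernel. The paper's trained Glare Spread Function kernel and iteration schedule are not reproduced; kernel size, sigma, and the test scene are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Multiplicative RL update: u <- u * [(d / (u * psf)) conv psf_flipped]."""
    estimate = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
    return estimate

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Blur a simple scene, then restore it with the known kernel.
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0
psf = gaussian_psf()
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

In the noiseless case with the correct kernel, a few dozen iterations visibly re-sharpen the square's edges; the semi-blind variant in the paper additionally estimates the kernel itself.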
30

Gao, Yakun, Haibin Li, and Shuhuan Wen. "Restoration and Enhancement of Underwater Images Based on Bright Channel Prior." Mathematical Problems in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/3141478.

Full text
Abstract:
This paper proposes a new method of underwater image restoration and enhancement inspired by the dark channel prior from the image dehazing field. Firstly, we propose the bright channel prior of the underwater environment. By estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, underwater images are eventually restored. Secondly, in order to rectify the color distortion, the restored images are equalized using the derived histogram equalization. The experimental results show that the proposed method can effectively enhance the quality of underwater images.
APA, Harvard, Vancouver, ISO, and other styles
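The bright channel prior in the abstract above mirrors the dark channel with a local maximum instead of a minimum. Below is a minimal sketch, together with plain histogram equalization standing in for the paper's derived variant; window size and the toy data are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, window=15):
    """Per-pixel maximum over RGB, then a local spatial maximum filter."""
    return maximum_filter(img.max(axis=2), size=window)

def equalize(channel, levels=256):
    """Plain histogram equalization of one 8-bit channel via the CDF."""
    hist = np.bincount(channel.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / channel.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[channel]

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (32, 32, 3))
bc = bright_channel(img)

# A low-contrast channel (values squeezed into [80, 120]) spreads out after EQ.
low = rng.integers(80, 121, (64, 64)).astype(np.uint8)
eq = equalize(low)
```

Equalization stretches the 40-level input range toward the full 0-255 range, which is the color-rectification effect the abstract relies on.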
31

Kefer, Paul, Fadil Iqbal, Maelle Locatelli, Josh Lawrimore, Mengdi Zhang, Kerry Bloom, Keith Bonin, Pierre-Alexandre Vidi, and Jing Liu. "Performance of deep learning restoration methods for the extraction of particle dynamics in noisy microscopy image sequences." Molecular Biology of the Cell 32, no. 9 (April 19, 2021): 903–14. http://dx.doi.org/10.1091/mbc.e20-11-0689.

Full text
Abstract:
Deep learning offers revolutionary answers to old challenges, in particular for quantitative microscopy. An example is content-aware image restoration that improves the quality of noisy images. A key question is to what extent biological information is restored. Our work addresses this question using particle tracking as an objective metric.
APA, Harvard, Vancouver, ISO, and other styles
32

Kim, Jinah, Dong Huh, Taekyung Kim, Jaeil Kim, Jeseon Yoo, and Jae-Seol Shim. "Raindrop-Aware GAN: Unsupervised Learning for Raindrop-Contaminated Coastal Video Enhancement." Remote Sensing 12, no. 20 (October 21, 2020): 3461. http://dx.doi.org/10.3390/rs12203461.

Full text
Abstract:
We propose an unsupervised network with adversarial learning, the Raindrop-aware GAN, which enhances the quality of coastal video images contaminated by raindrops. Raindrop removal from coastal videos faces two main difficulties: converting the degraded image into a clean one by visually removing the raindrops, and restoring the background coastal wave information in the raindrop regions. The components of the proposed network—a generator and a discriminator for adversarial learning—are trained on unpaired images degraded by raindrops and clean images free from raindrops. By creating raindrop masks and background-restored images, the generator restores the background information in the raindrop regions alone, preserving the input as much as possible. The proposed network was trained and tested on an open-access dataset and directly collected dataset from the coastal area. It was then evaluated by three metrics: the peak signal-to-noise ratio, structural similarity, and a naturalness-quality evaluator. The indices of metrics are 8.2% (+2.012), 0.2% (+0.002), and 1.6% (−0.196) better than the state-of-the-art method, respectively. In the visual assessment of the enhanced video image quality, our method better restored the image patterns of steep wave crests and breaking than the other methods. In both quantitative and qualitative experiments, the proposed method more effectively removed the raindrops in coastal video and recovered the damaged background wave information than state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
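The PSNR metric reported in the abstract above, and a single-window simplification of SSIM, can be computed as follows. Full SSIM averages the same statistic over sliding Gaussian windows; this sketch uses one global window for brevity.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=255.0):
    """SSIM computed over the whole image as one window (no sliding window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

ref = np.zeros((16, 16))
deg = ref + 10.0            # uniform error of 10 grey levels -> MSE = 100
score = psnr(ref, deg)       # 10 * log10(255^2 / 100) ≈ 28.13 dB
sim = ssim_global(ref, deg)
```

Identical images give SSIM exactly 1; any degradation pulls both metrics down, which is how the paper's "+2.012 dB PSNR, +0.002 SSIM" gains should be read.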
33

Pan, I.-Hui, Ping-Sheng Huang, Te-Jen Chang, and Hsiang-Hsiung Chen. "Multilayer Reversible Information Hiding with Prediction-Error Expansion and Dynamic Threshold Analysis." Sensors 22, no. 13 (June 28, 2022): 4872. http://dx.doi.org/10.3390/s22134872.

Full text
Abstract:
The rapid development of the internet and social media has driven a great demand for information sharing and intellectual property protection. Reversible information embedding theory has therefore marked out several approaches to information security. For reversibility, both the original image and the embedded data must be completely recoverable. In this paper, a high-capacity, multilayer reversible information hiding technique for digital images is presented. First, the integer Haar wavelet transform is used to convert the cover image from the spatial domain into the frequency domain. Furthermore, we apply dynamic threshold analysis, the parameters of the prediction model, the location map, and the multilayer embedding method to improve the quality of the stego image and restore the cover image. In comparison with current algorithms, the proposed algorithm often achieves a better embedding capacity versus image quality trade-off.
APA, Harvard, Vancouver, ISO, and other styles
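The integer Haar wavelet transform in the abstract above is reversible by construction. A one-level lifting sketch on a 1-D row shows the exact-reconstruction property; the paper's 2-D multilayer embedding on top of it is not reproduced here.

```python
import numpy as np

def int_haar_forward(x):
    """Integer Haar via lifting on pixel pairs (// is floor division):
    detail d = b - a, approximation s = a + d // 2 (= floor((a + b) / 2))."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a
    s = a + d // 2
    return s, d

def int_haar_inverse(s, d):
    """Exact inverse of the lifting steps: no rounding error accumulates."""
    a = s - d // 2
    b = d + a
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

rng = np.random.default_rng(0)
row = rng.integers(0, 256, 64)          # one image row of 8-bit pixels
s, d = int_haar_forward(row)
restored = int_haar_inverse(s, d)
```

Because every lifting step is inverted exactly in integers, the cover data survives the round trip bit-for-bit, which is the precondition for reversible hiding.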
34

Doriguêtto, Paulo Victor Teixeira, Daniela de Almeida, Carolina Oliveira de Lima, Ricardo Tadeu Lopes, and Karina Lopes Devito. "Assessment of marginal gaps and image quality of crowns made of two different restorative materials: An in vitro study using CBCT images." Journal of Dental Research, Dental Clinics, Dental Prospects 16, no. 4 (December 30, 2022): 243–50. http://dx.doi.org/10.34172/joddd.2022.039.

Full text
Abstract:
Background. The present study assessed the quality of images and the presence of marginal gaps on cone-beam computed tomography (CBCT) images of teeth restored with all-ceramic and metal-ceramic crowns and compared the gap sizes observed on CBCT images with those obtained on micro-CT images. Methods. Thirty teeth restored with metal-ceramic and all-ceramic crowns, properly adapted and with gaps of 0.30 and 0.50 mm, were submitted to micro-CT and CBCT scans. Linear measurements corresponding to the marginal gap (MG) and the absolute marginal discrepancy (AMD) were obtained. The objective assessment of the quality of CBCT images was performed using the contrast-to-noise ratio (CNR), and the subjective assessment was defined by the diagnoses made by five examiners regarding the presence or absence of gaps. Results. The measurements were always higher for CBCT, with a significant difference regarding AMD. No significant difference in image quality was observed using CNR between the crowns tested. Low accuracy and sensitivity values could be observed for both crowns. Conclusion. Marginal mismatch measures were overestimated in CBCT images. No difference in image quality was observed between the crowns. The correct diagnosis of gaps was considered low, irrespective of crown type and gap size.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Yiming, Jianping Guo, Sen Yang, Ting Liu, Hualing Zhou, Mengzi Liang, Xi Li, and Dahong Xu. "Frequency Disentanglement Distillation Image Deblurring Network." Sensors 21, no. 14 (July 9, 2021): 4702. http://dx.doi.org/10.3390/s21144702.

Full text
Abstract:
In the blind deblurring task, blur information and content information are entangled, so it is very challenging to directly recover the sharp latent image from the blurred image. Considering that, in a high-dimensional feature map, blur information mainly exists in the low-frequency region and content information in the high-frequency region, we propose in this paper an encoder–decoder model that realizes disentanglement from the perspective of frequency, which we name the frequency disentanglement distillation image deblurring network (FDDN). First, we modify the traditional distillation block by embedding a frequency split block (FSB) in it to separate the low-frequency and high-frequency regions. Second, the modified distillation block, which we name the frequency distillation block (FDB), can recursively distill the low-frequency features to disentangle the blur information from the content information, so as to improve the restored image quality. Furthermore, to reduce the complexity of the network and preserve the high dimension of the feature map, the FDB is placed at the end of the encoder to edit the feature map in the latent space. Quantitative and qualitative experimental evaluations indicate that the FDDN can remove the blur effect and improve the image quality of real and simulated images.
APA, Harvard, Vancouver, ISO, and other styles
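The low/high frequency separation performed by the frequency split block can be illustrated on a single image with a Gaussian low-pass. The paper operates on learned feature maps inside the network; the sigma value and white-noise test image here are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(img, sigma=3.0):
    """Low-frequency part = Gaussian blur; high-frequency part = the residual.
    By construction low + high reconstructs the input exactly."""
    low = gaussian_filter(img, sigma)
    high = img - low
    return low, high

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (64, 64))
low, high = frequency_split(img)
```

The split is lossless (low + high == img), so the two branches can be processed separately and recombined, which is the premise of the disentanglement design.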
36

Prakash, Anju J., and Ferdinand Christopher. "An Enhanced Image Dehazing Method for the Application in Urban Computing." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 793–98. http://dx.doi.org/10.1166/jctn.2019.7811.

Full text
Abstract:
The appearance of images captured in some weather conditions is often degraded due to the presence of haze or fog. These hazy images reduce visibility in computer vision applications, object recognition systems, intelligent transportation, traffic analysis, etc. Multi-scale gradient enhancement is the most popular method used as part of dehazing, but manipulating the image this way easily turns low-dynamic-range content into high-dynamic-range images; as a result, the restored image becomes dark or overexposed, which degrades image quality. To resolve this problem, a better image enhancement and classification method is employed. The proposed work takes advantage of SVD, since the singular values allow us to characterize the image with a minimal set of values, which shrinks storage space and improves quality. For superior quality and feature improvement, a rate-based alteration is performed with the help of a Gaussian filter. The final dehazed image is then classified using an improved KNN as part of geographical data analysis. Overall, the research focuses on two key areas: image processing and data mining.
APA, Harvard, Vancouver, ISO, and other styles
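The SVD property the abstract above relies on, characterizing an image with a minimal set of singular values, can be sketched with a truncated reconstruction; the rank values below are illustrative.

```python
import numpy as np

def svd_approx(img, rank):
    """Keep only the top-`rank` singular values/vectors (Eckart-Young optimum)."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))

# Reconstruction error (Frobenius norm) shrinks as more singular values are kept.
def err(k):
    return float(np.linalg.norm(img - svd_approx(img, k)))
```

Storing rank-k factors costs k * (64 + 64 + 1) values instead of 64 * 64, which is the storage-reduction argument the abstract makes.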
37

Li, Wenxia, Chi Lin, Ting Luo, Hong Li, Haiyong Xu, and Lihong Wang. "Subjective and Objective Quality Evaluation for Underwater Image Enhancement and Restoration." Symmetry 14, no. 3 (March 10, 2022): 558. http://dx.doi.org/10.3390/sym14030558.

Full text
Abstract:
Since underwater imaging is affected by the complex water environment, it often leads to severe distortion of the underwater image. To improve the quality of underwater images, underwater image enhancement and restoration methods have been proposed. However, many underwater image enhancement and restoration methods produce over-enhancement or under-enhancement, which affects their application. To better design underwater image enhancement and restoration methods, it is necessary to research the underwater image quality evaluation (UIQE) for underwater image enhancement and restoration methods. Therefore, a subjective evaluation dataset for an underwater image enhancement and restoration method is constructed, and on this basis, an objective quality evaluation method of underwater images, based on the relative symmetry of the underwater dark channel prior (UDCP) and the underwater bright channel prior (UBCP), is proposed. Specifically, considering underwater image enhancement in different scenarios, a UIQE dataset is constructed, which contains 405 underwater images, generated from 45 different underwater real images, using 9 representative underwater image enhancement methods. Then, a subjective quality evaluation of the UIQE database is studied. To quantitatively measure the quality of the enhanced and restored underwater images with different characteristics, an objective UIQE index (UIQEI) is used, by extracting and fusing five groups of features, including: (1) the joint statistics of normalized gradient magnitude (GM) and Laplacian of Gaussian (LOG) features, based on the underwater dark channel map; (2) the joint statistics of normalized gradient magnitude (GM) and Laplacian of Gaussian (LOG) features, based on the underwater bright channel map; (3) the saturation and colorfulness features; (4) the fog density feature; (5) the global contrast feature; these features capture key aspects of underwater images.
Finally, the experimental results are analyzed, qualitatively and quantitatively, to illustrate the effectiveness of the proposed UIQEI method.
APA, Harvard, Vancouver, ISO, and other styles
38

Gong, Jun, Senlin Luo, Wenxin Yu, and Liang Nie. "Inpainting with Separable Mask Update Convolution Network." Sensors 23, no. 15 (July 26, 2023): 6689. http://dx.doi.org/10.3390/s23156689.

Full text
Abstract:
Image inpainting is an active area of research in image processing that focuses on reconstructing damaged or missing parts of an image. The advent of deep learning has greatly advanced the field of image restoration in recent years. While there are many existing methods that can produce high-quality restoration results, they often struggle when dealing with images that have large missing areas, resulting in blurry and artifact-filled outcomes. This is primarily because of the presence of invalid information in the inpainting region, which interferes with the inpainting process. To tackle this challenge, the paper proposes a novel approach called separable mask update convolution. This technique automatically learns and updates the mask, which represents the missing area, to better control the influence of invalid information within the mask area on the restoration results. Furthermore, this convolution method reduces the number of network parameters and the size of the model. The paper also introduces a regional normalization technique that collaborates with separable mask update convolution layers for improved feature extraction, thereby enhancing the quality of the restored image. Experimental results demonstrate that the proposed method performs well in restoring images with large missing areas and outperforms state-of-the-art image inpainting methods significantly in terms of image quality.
APA, Harvard, Vancouver, ISO, and other styles
39

Chung, Minkyung, Minyoung Jung, and Yongil Kim. "Enhancing Remote Sensing Image Super-Resolution Guided by Bicubic-Downsampled Low-Resolution Image." Remote Sensing 15, no. 13 (June 28, 2023): 3309. http://dx.doi.org/10.3390/rs15133309.

Full text
Abstract:
Image super-resolution (SR) is a significant technique in image processing as it enhances the spatial resolution of images, enabling various downstream applications. Based on recent achievements in SR studies in computer vision, deep-learning-based SR methods have been widely investigated for remote sensing images. In this study, we proposed a two-stage approach called bicubic-downsampled low-resolution (LR) image-guided generative adversarial network (BLG-GAN) for remote sensing image super-resolution. The proposed BLG-GAN method divides the image super-resolution procedure into two stages: LR image transfer and super-resolution. In the LR image transfer stage, real-world LR images are restored to less blurry and noisy bicubic-like LR images using guidance from synthetic LR images obtained through bicubic downsampling. Subsequently, the generated bicubic-like LR images are used as inputs to the SR network, which learns the mapping between the bicubic-like LR image and the corresponding high-resolution (HR) image. By approaching the SR problem as finding optimal solutions for subproblems, the BLG-GAN achieves superior results compared to state-of-the-art models, even with a smaller overall capacity of the SR network. As the BLG-GAN utilizes a synthetic LR image as a bridge between real-world LR and HR images, the proposed method shows improved image quality compared to the SR models trained to learn the direct mapping from a real-world LR image to an HR image. Experimental results on HR satellite image datasets demonstrate the effectiveness of the proposed method in improving perceptual quality and preserving image fidelity.
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Wei, Jianli Wu, and Haroon Rashid. "Image Enhancement Algorithm of Foggy Sky with Sky based on Sky Segmentation." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012011. http://dx.doi.org/10.1088/1742-6596/2560/1/012011.

Full text
Abstract:
Abstract In recent years, image defogging has become a research hotspot in the field of digital image processing. Through defogging enhancement processing, the visual quality of foggy images can be significantly improved, which is also an important part of subsequent image processing. To overcome the limitations of traditional image dehazing, an image enhancement algorithm for foggy images containing sky, based on sky segmentation, is proposed. Firstly, based on K-means clustering and sky feature analysis, sky region recognition is performed on foggy images containing sky. Secondly, according to the pixels of the sky region, the rough transmittance is corrected, and the dehazed image is obtained by dark channel prior dehazing based on a guided filter. Finally, the dehazed image is equalized by bi-histogram equalization. This algorithm effectively avoids the color distortion and halo problems caused by the traditional dark channel prior dehazing algorithm in the sky area, and gives the restored foggy image better global and local contrast.
APA, Harvard, Vancouver, ISO, and other styles
41

Nallasivam, Manikandaprabu, Femi D, and Raja Paulsingh J. "Histogram Based Optimized Reversible Steganographic Scheme." ECS Transactions 107, no. 1 (April 24, 2022): 19089–97. http://dx.doi.org/10.1149/10701.19089ecst.

Full text
Abstract:
A novel prediction-based reversible steganographic scheme based on image inpainting can be optimized by choosing the reference pixels using an optimization technique. Partial differential equations based on image inpainting are introduced to generate a prediction image from the reference image that has similar structural and geometric information as the cover image. Then the histogram of the prediction error is shifted to embed the secret bits reversibly using two selected sets of peak points and zero points. From the stego image, the cover image can be restored losslessly after extracting the embedded bits correctly. Since the same reference pixels can be exploited in the extraction procedure, the embedded secret bits can be extracted. Through optimization of reference pixel selection and the inpainting predictor, the prediction accuracy is high, and more embeddable pixels are acquired with a greater embedding rate, better visual quality, and reduced time required for stego image generation.
APA, Harvard, Vancouver, ISO, and other styles
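The histogram-shifting step with peak and zero points described in the abstract above can be sketched directly on pixel values. The paper applies this to inpainting prediction errors with optimized peak/zero selection; the peak/zero choice and the test image below are assumptions that simply guarantee an empty zero bin.

```python
import numpy as np

def hs_embed(img, bits, peak, zero):
    """Histogram shifting: empty the bin next to `peak`, then encode bits there.
    Assumes peak < zero, bin `zero` is empty, and enough pixels equal `peak`."""
    stego = img.astype(np.int64)
    stego[(stego > peak) & (stego < zero)] += 1          # shift (peak, zero) up
    flat = stego.ravel()
    idx = np.flatnonzero(flat == peak)[:len(bits)]
    flat[idx] += np.asarray(bits, dtype=np.int64)[:len(idx)]  # peak -> peak+bit
    return stego

def hs_extract(stego, n_bits, peak, zero):
    flat = stego.ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[carriers] == peak + 1).astype(int)
    restored = stego.copy()
    r = restored.ravel()
    r[carriers[bits == 1]] -= 1                          # undo bit embedding
    restored[(restored > peak + 1) & (restored <= zero)] -= 1  # undo the shift
    return bits, restored

rng = np.random.default_rng(0)
img = rng.choice([98, 99, 100, 101, 102], size=(32, 32),
                 p=[0.1, 0.2, 0.4, 0.2, 0.1])            # peak at 100
bits = rng.integers(0, 2, 50)
stego = hs_embed(img, bits, peak=100, zero=105)          # bin 105 is empty
out_bits, restored = hs_extract(stego, len(bits), peak=100, zero=105)
```

Extraction recovers both the exact payload and the exact cover image, which is the losslessness property the abstract claims for the full scheme.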
42

Kim, Gyuho, Jung Gon Kim, Kitaek Kang, and Woo Sik Yoo. "Image-Based Quantitative Analysis of Foxing Stains on Old Printed Paper Documents." Heritage 2, no. 3 (September 18, 2019): 2665–77. http://dx.doi.org/10.3390/heritage2030164.

Full text
Abstract:
We studied the feasibility of image-based quantitative analysis of foxing stains on collections of old (16th–20th century) European books stored in the Rare Book Library of the Seoul National University in Korea. We were able to quantitatively determine the foxing-affected areas on books from their photographs using newly developed image processing software (PicMan) that includes applications specifically for cultural property characterization. Dimensional and color analyses of photographs were successfully performed quantitatively. Histograms of RGB (red, green, blue) pixels of photographs clearly showed the change in color distribution of foxing stains compared to the other areas of the photographs. Several sample images of quantitative measurement of foxing stains and virtually restored images were generated to provide easy visual inspection and comparison between restored images and the original photographs. Image quality, resolution, and digital file format requirements for quantitative analysis are described. Image-based quantitative analysis of foxing stains on paper documents is found to be very promising for automation and for objective characterization of photographs of cultural properties. This technique can be used to create a cultural property digital database. Quantitative and statistical analysis techniques can be introduced to monitor the effect of the storage and conservation environment on cultural properties.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Jin-Jun, Qi-Hang Shi, Jian Zhao, Zhi-Hui Lai, and Lei-Lei Li. "Noisy Low-Illumination Image Enhancement Based on Parallel Duffing Oscillator and IMOGOA." Mathematical Problems in Engineering 2022 (September 20, 2022): 1–14. http://dx.doi.org/10.1155/2022/3903453.

Full text
Abstract:
In complex environments, captured images suffer from several problems, including low illumination and intense noise, which deteriorate image quality and strongly affect downstream processing. In this work, inspired by stochastic resonance theory, we design a model that considers the spatial characteristics of the image and realizes noise reduction and enhancement simultaneously. The 8-neighborhood pixel extraction method and the Duffing oscillator model are used to process the image in parallel, and the image details are then restored by a homomorphic filter. To optimize the parameters of the parallel Duffing oscillator model and the homomorphic filter adaptively, the multiobjective grasshopper optimization algorithm is introduced; a Sobol sequence and differential mutation operators are used to improve the optimization algorithm, and the fitness function is constructed from the peak signal-to-noise ratio and the standard deviation. To verify the effectiveness of the proposed method, low-illumination image data with Gaussian noise is used for subjective and objective evaluation. The experimental results show that the proposed algorithm gives prominence to useful information, with smaller color distortion and better visual quality.
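The 8-neighborhood extraction that feeds the parallel oscillator model can be sketched as below; the Duffing dynamics and the IMOGOA parameter search themselves are beyond a few lines and are omitted here.

```python
import numpy as np

def neighborhoods8(img):
    """Return the 8 neighbours of every interior pixel as an (H-2, W-2, 8) array.

    This is only the spatial feature-extraction step; the Duffing
    oscillators that consume these vectors are not reproduced.
    """
    img = np.asarray(img, dtype=float)
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    h, w = img.shape
    # one shifted view of the interior per offset, stacked along a new axis
    stacks = [img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx] for dy, dx in offsets]
    return np.stack(stacks, axis=-1)
```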
APA, Harvard, Vancouver, ISO, and other styles
44

Mori, Mio, Tomoyuki Fujioka, Mayumi Hara, Leona Katsuta, Yuka Yashima, Emi Yamaga, Ken Yamagiwa, et al. "Deep Learning-Based Image Quality Improvement in Digital Positron Emission Tomography for Breast Cancer." Diagnostics 13, no. 4 (February 20, 2023): 794. http://dx.doi.org/10.3390/diagnostics13040794.

Full text
Abstract:
We investigated whether 18F-fluorodeoxyglucose positron emission tomography (PET)/computed tomography images restored via deep learning (DL) improved image quality and affected axillary lymph node (ALN) metastasis diagnosis in patients with breast cancer. Using a five-point scale, two readers compared the image quality of DL-PET and conventional PET (cPET) in 53 consecutive patients from September 2020 to October 2021. Visually analyzed ipsilateral ALNs were rated on a three-point scale. The standardized uptake values SUVmax and SUVpeak were calculated for breast cancer regions of interest. For "depiction of primary lesion", reader 2 scored DL-PET significantly higher than cPET. For "noise", "clarity of mammary gland", and "overall image quality", both readers scored DL-PET significantly higher than cPET. The SUVmax and SUVpeak for primary lesions and normal breasts were significantly higher in DL-PET than in cPET (p < 0.001). Considering ALN metastasis scores 1 and 2 as negative and 3 as positive, the McNemar test revealed no significant difference between cPET and DL-PET scores for either reader (p = 0.250, 0.625). DL-PET improved visual image quality for breast cancer compared with cPET, while the two exhibited comparable diagnostic abilities for ALN metastasis.
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Zhixin, Degang Kong, and Yongchun Zheng. "Artificial Intelligence Registration of Image Series Based on Multiple Features." Traitement du Signal 39, no. 1 (February 28, 2022): 221–27. http://dx.doi.org/10.18280/ts.390122.

Full text
Abstract:
Multi-source image series vary in quality. To fuse the feature information of multi-source image series, it is necessary to deeply explore the relevant registration and fusion techniques. Existing techniques for image registration and fusion lack a unified multi-feature algorithm framework and fail to achieve real-time, accurate registration. To address these problems, this paper investigates the artificial intelligence (AI) registration of image series based on multiple features. First, the Harris corner detector was selected to extract the corners of multi-source image series, and the algorithm flow was explained and improved. In addition, the deep convolutional neural network (DCNN) VGG16 was adapted to extract the features of multi-source image series. Finally, a spatial transformer network was adopted to pre-register the image series, and the series was deformed and restored based on region-constrained moving least squares. The proposed registration algorithm was proved effective through experiments.
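The Harris corner response, the first step named above, can be sketched in plain NumPy. This version uses finite-difference gradients and a 3x3 box window rather than the Gaussian window common in practice, and k = 0.04 is the conventional default, not a value from the paper.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)             # central-difference gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box3(a):
        """Sum each value's 3x3 neighbourhood (edge-padded box filter)."""
        p = np.pad(a, 1, mode="edge")
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3))

    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)   # structure tensor sums
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace        # positive at corners, negative on edges
```

Corners are then taken as local maxima of the response above a threshold; that non-maximum-suppression step is omitted here.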
APA, Harvard, Vancouver, ISO, and other styles
46

Ye, G., J. Pan, M. Wang, Y. Zhu, and S. Jin. "ANALYSIS: IMPACT OF IMAGE MATCHING METHODS ON JITTER COMPENSATION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2022 (May 17, 2022): 611–18. http://dx.doi.org/10.5194/isprs-annals-v-3-2022-611-2022.

Full text
Abstract:
Abstract. The degradation of image quality caused by satellite jitter has drawn attention, and many studies have illustrated the importance of jitter compensation. As the essential component of jitter compensation, image matching largely determines the accuracy of jitter processing. Hence, the impact of image matching methods on jitter compensation is explored in this paper. First, a framework based on image matching is built for jitter compensation. Two typical sub-pixel matching methods (correlation coefficient with least squares matching, and phase correlation matching) can then serve within the framework. Experiments were designed using multispectral images from the GF-1 satellite, and quantitative evaluations show that, compared with correlation coefficient and least squares matching, the phase correlation method improves the accuracy of the obtained jitter curve, reducing the amplitude error by more than 0.012 pixel and the phase error by more than 0.0119 rad, and improves the quality of the restored images in both geometry and radiation. This indicates that phase correlation matching performs better within the jitter compensation framework of this paper.
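The phase correlation matching evaluated above can be sketched as follows for integer shifts; the sub-pixel refinement needed for actual jitter curves (e.g. fitting the neighbourhood of the correlation peak) is omitted.

```python
import numpy as np

def phase_correlation(a, b):
    """Integer shift (dy, dx) such that b ≈ np.roll(a, (dy, dx), axis=(0, 1))."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.maximum(np.abs(cross), 1e-12)    # keep phase only
    corr = np.fft.ifft2(cross).real              # a delta at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap indices past the midpoint to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Discarding the magnitude of the cross-power spectrum is what makes the method robust to illumination differences between the two frames.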
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Tzu-Chao, and Pao-Ta Yu. "Adaptive Two-Pass Median Filter Based on Support Vector Machines for Image Restoration." Neural Computation 16, no. 2 (February 1, 2004): 333–54. http://dx.doi.org/10.1162/neco.2004.16.2.333.

Full text
Abstract:
In this letter, a novel adaptive filter, the adaptive two-pass median (ATM) filter based on support vector machines (SVMs), is proposed to preserve more image details while effectively suppressing impulse noise for image restoration. The proposed filter is composed of a noise decision maker and two-pass median filters. Our approach uses an SVM impulse detector to judge whether each input pixel is noise: if a pixel is detected as corrupted, the noise-free reduction median filter is triggered to replace it; otherwise, it remains unchanged. Then, to improve the quality of the restored image, a decision impulse filter is applied in the second-pass filtering procedure. In suppressing both fixed-valued and random-valued impulses without degrading fine details, the results of our extensive experiments demonstrate that the proposed filter outperforms earlier median-based filters in the literature. Our filter also provides excellent robustness at various percentages of impulse noise.
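The switching (detection-then-filtering) idea behind the ATM filter can be sketched as below. The paper's detector is a trained SVM; here a simple deviation-from-median test stands in for it, and the threshold is an assumption for illustration only.

```python
import numpy as np

def impulse_median_filter(img, thresh=40):
    """Detect likely impulse pixels, then median-filter only those pixels.

    Replacing only flagged pixels (instead of filtering everywhere)
    is what preserves fine detail in switching median filters.
    """
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 neighbourhood median for every pixel
    windows = np.stack([p[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)], axis=-1)
    med = np.median(windows, axis=-1)
    noisy = np.abs(img - med) > thresh       # crude stand-in for the SVM detector
    out = np.where(noisy, med, img)          # replace only detected pixels
    return out, noisy
```

A second pass of the same procedure on `out`, as in the two-pass design, cleans up residual impulses missed by the first detection.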
APA, Harvard, Vancouver, ISO, and other styles
48

Lv, Guomian, Hao Xu, Huajun Feng, Zhihai Xu, Hao Zhou, Qi Li, and Yueting Chen. "A Full-Aperture Image Synthesis Method for the Rotating Rectangular Aperture System Using Fourier Spectrum Restoration." Photonics 8, no. 11 (November 22, 2021): 522. http://dx.doi.org/10.3390/photonics8110522.

Full text
Abstract:
The novel rotating rectangular aperture (RRA) system provides a good solution for space-based, large-aperture, high-resolution imaging tasks. Its imaging quality depends largely on the image synthesis algorithm, and the mainstream multi-frame deblurring approach is sophisticated and time-consuming. In this paper, we propose a novel full-aperture image synthesis algorithm for the RRA system based on Fourier spectrum restoration. First, a numerical simulation model is established to analyze the RRA system's characteristics and rapidly obtain the point spread functions (PSFs). Then, each image is used iteratively to calculate the increment size and update the final restored Fourier spectrum. Both simulation and practical experimental results show that our algorithm performs well in terms of objective evaluation and time consumption.
APA, Harvard, Vancouver, ISO, and other styles
49

Xue, Hong Ye, and Wei Li Ma. "Research on Image Restoration Algorithm Base on ACO-BP Neural Network." Key Engineering Materials 460-461 (January 2011): 136–41. http://dx.doi.org/10.4028/www.scientific.net/kem.460-461.136.

Full text
Abstract:
This paper studies the characteristics of the ant colony algorithm and the BP neural network, combines ant colony optimization with the BP network, and applies the hybrid to image restoration. This algorithm addresses several weaknesses of BP, such as its tendency to fall into local minima, its slow convergence, and its occasional oscillation, so the quality of the restored image can be improved significantly. The article details the theory and steps of the ACO-BP algorithm and applies the improved algorithm to image restoration, which reduces the mean square error (MSE) of the optimization and speeds up the convergence of the BP neural network. The algorithm is validated by simulation.
APA, Harvard, Vancouver, ISO, and other styles
50

Abdulwahab Farajalla Ali, Nawafil, Imad Fakhri Taha Al-Shaikhli, and Raini Hasan. "Detection And Restoration of Cracked Digitized Paintings and Manuscripts Using Image Processing." International Journal of Engineering & Technology 7, no. 2.34 (June 8, 2018): 39. http://dx.doi.org/10.14419/ijet.v7i2.34.13907.

Full text
Abstract:
Ancient paintings are cultural heritage that can be preserved via computer-aided analysis and processing. These paintings deteriorate due to undesired cracks, which are caused by aging, drying of the painting material, and mechanical factors, and they need to be restored to their original or near-original states. Different techniques and methodologies can be used to conserve and restore the overall quality of these images. The main objective of this study is to analyze techniques developed for the detection, classification of small patterns, and restoration of cracks in digitized old paintings and manuscripts. The developed algorithm identifies cracks with a thresholding operation applied to the output of a morphological top-hat transform. Afterwards, brush strokes wrongly identified as cracks are separated using a semi-automatic procedure based on region growing. Finally, both median and weighted-median filters are applied to fill the cracks and enhance image quality.
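The crack-detection step can be sketched with grey-level morphology: a black top-hat (closing minus image) highlights thin dark structures, which are then thresholded. The 3x3 structuring element and the threshold below are assumptions; the region-growing separation of brush strokes and the median-filter filling are not reproduced.

```python
import numpy as np

def _win(img, fn):
    """Apply fn (np.max or np.min) over each 3x3 neighbourhood (grey morphology)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)], axis=-1)
    return fn(stack, axis=-1)

def crack_mask(img, thresh=30):
    """Black top-hat + threshold: flags thin dark structures such as cracks."""
    img = np.asarray(img, dtype=float)
    closing = _win(_win(img, np.max), np.min)    # dilation then erosion
    tophat = closing - img                       # dark details stand out
    return tophat > thresh
```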
APA, Harvard, Vancouver, ISO, and other styles