
Journal articles on the topic 'Image distortion'


Consult the top 50 journal articles for your research on the topic 'Image distortion.'


1

Pollak, C., T. Stubbings, and H. Hutter. "Differential Image Distortion Correction." Microscopy and Microanalysis 7, no. 4 (July 2001): 335–40. http://dx.doi.org/10.1007/s10005-001-0007-1.

Full text
Abstract:
Imaging techniques often suffer from distortion effects. Former methods of reducing these distortions have been based either on improving the imaging technique (i.e., avoiding distortions) or on the use of reference samples (i.e., determining the distortion field by imaging a known structure). We present a novel method of correcting image distortion by evaluating the imaged position changes due to two small sample position shifts. The algorithm allows us to calculate a vector field, which enables us to determine the “undistorted” position of any point of the image. The presented method makes very few assumptions about the sample, requires no reference samples, and is applicable to any type of image distortion. In addition to presenting the method's theoretical basis and describing the computational method, we present corrected secondary ion mass spectroscopy (SIMS) images of a regular structure (a copper grid) as well as a stochastic distribution (sodium impurities) to show the results on empirical data.
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Xiaohong, Qian Sun, and Jun Hu. "Generation of Complete SAR Geometric Distortion Maps Based on DEM and Neighbor Gradient Algorithm." Applied Sciences 8, no. 11 (November 9, 2018): 2206. http://dx.doi.org/10.3390/app8112206.

Full text
Abstract:
Radar-specific imaging geometric distortions (including foreshortening, layover, and shadow) that occur in synthetic aperture radar (SAR) images acquired over mountainous areas have a negative impact on the suitability of the interferometric SAR (InSAR) technique to monitor landslides. To address this issue, many distortion simulation methods have been presented to predict the areas in which distortions will occur before processing the SAR image. However, the layover and shadow regions are constituted by active as well as passive subregions. Since passive distortions are caused by active distortions and can occur in the flat area, it is difficult to distinguish the transition zone between passive distortion and non-distortion areas. In addition, passive distortion could cover part of the foreshortening or active layover/shadow areas but has generally been ignored. Therefore, failure to simulate passive distortion leads to incomplete simulated distortions. In this paper, an algorithm to define complete SAR geometric distortions and correct the boundaries among different distortions is presented based on the neighbor gradient between the passive and active distortions. It is an image-processing routine applied to a digital elevation model (DEM) of the terrain to be imaged by the available SAR data. The performance of the proposed method has been validated by the ascending and descending Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) images acquired over the Chongqing mountainous area of China. Through the investigation of passive distortion, we can have a deeper understanding of the formation and characteristics of these distortions. Moreover, it provides very meaningful information for research on areas such as landslide monitoring.
APA, Harvard, Vancouver, ISO, and other styles
3

Ekpar, Frank, Masaaki Yoneda, and Hiroyuki Hase. "Correcting Distortions in Panoramic Images Using Constructive Neural Networks." International Journal of Neural Systems 13, no. 04 (August 2003): 239–50. http://dx.doi.org/10.1142/s0129065703001601.

Full text
Abstract:
This paper presents a novel approach to the correction of panoramic (wide-angle) image distortions. Unlike traditional methods that separate the distortion of the panoramic image into radial and tangential components and then concentrate on the correction of one type of distortion at a time, the method presented in this paper uses an integrated approach that simultaneously corrects all non-linear distortions of the panoramic image. The system uses data obtained from carefully constructed calibration patterns to automatically build and train a constructive neural network of suitable complexity to approximate the characteristic distortion of the panoramic image. The trained neural network is then used to correct the distortions represented by the sample data. It is demonstrated that by applying the distortion correction method presented in this paper to panoramic images representing real world scenes, perspective-corrected views of the real world scene that are usable in a wide variety of applications can be generated.
APA, Harvard, Vancouver, ISO, and other styles
4

Asatryan, D. G., M. E. Harutyunyan, Y. I. Golub, and V. V. Starovoitov. "Influence of the distortion type on the image quality assessment when reducing its sizes." «System analysis and applied information science», no. 3 (September 25, 2020): 22–27. http://dx.doi.org/10.21122/2309-4923-2020-3-22-27.

Full text
Abstract:
In this paper, the influence of various types of image distortion on image quality during downsizing is investigated. To assess the image quality, it is proposed to use the method of comparison with a reference, using a previously developed measure based on the proximity of the values of the parameters of the Weibull distribution, which describes the gradient field of the image. The well-known TID2013 image database was used as the material; it includes 3000 images distorted by 24 types of distorting algorithms at five levels. Each image of the database was reduced by 2, 4 and 8 times by the two most common methods and compared with the original image. The calculations were performed for five types of distortions implemented in the database. To make a decision on the acceptability of the applied quality measure, the calculated measure values were compared with the subjective quality ratings provided along with the documentation of the TID2013 database. The comparison was carried out using Spearman's correlation coefficient. It is shown that the average values of the correlations for all images are very high for three types of distortion, while for the other two they are unacceptably low. An attempt has been made to explain this situation by the properties of distorting algorithms that change the structural properties of the image to varying degrees. The possibility of comparing images of the same scene, but with different resolutions, is also demonstrated.
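The acceptability criterion used here, rank agreement between an objective quality measure and subjective ratings, is easy to reproduce. A minimal sketch using SciPy's Spearman correlation; the measure values and MOS below are placeholders, not TID2013 data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical objective measure values and mean opinion scores (MOS)
# for the same set of distorted images; real values would come from TID2013.
measure = np.array([0.91, 0.84, 0.77, 0.62, 0.48, 0.35])
mos = np.array([6.1, 5.8, 5.2, 4.4, 3.6, 2.9])

rho, p_value = spearmanr(measure, mos)  # rank-order correlation
print(f"SROCC = {rho:.3f} (p = {p_value:.3g})")
```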
APA, Harvard, Vancouver, ISO, and other styles
5

Jung, Young-Hwa, Gyuho Kim, and Woo Sik Yoo. "Study on Distortion Compensation of Underwater Archaeological Images Acquired through a Fisheye Lens and Practical Suggestions for Underwater Photography - A Case of Taean Mado Shipwreck No. 1 and No. 2 -." Journal of Conservation Science 37, no. 4 (August 31, 2021): 312–21. http://dx.doi.org/10.12654/jcs.2021.37.4.01.

Full text
Abstract:
Underwater archaeology relies heavily on photography and video image recording during surveillances and excavations like ordinary archaeological studies on land. All underwater images suffer poor image quality and distortions due to poor visibility, low contrast and blur, caused by differences in refractive indices of water and air, properties of selected lenses and shapes of viewports. In the Yellow Sea (between mainland China and the Korean peninsula), the visibility underwater is far less than 1 m, typically in the range of 30 cm to 50 cm, on even a clear day, due to very high turbidity. For photographing 1 m × 1 m grids underwater, a very wide view angle (180°) fisheye lens with an 8 mm focal length is intentionally used despite unwanted severe barrel-shaped image distortion, even with a dome port camera housing. It is very difficult to map wide underwater archaeological excavation sites by combining severely distorted images. Development of practical compensation methods for distorted underwater images acquired through the fisheye lens is strongly desired. In this study, the source of image distortion in underwater photography is investigated. We have identified the source of image distortion as the mismatching, in optical axis and focal points, between dome port housing and fisheye lens. A practical image distortion compensation method, using customized image processing software, was explored and verified using archived underwater excavation images for effectiveness in underwater archaeological applications. To minimize unusable area due to severe distortion after distortion compensation, practical underwater photography guidelines are suggested.
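Barrel distortion from a fisheye lens of this kind is commonly removed with a calibrated fisheye camera model. The paper's own compensation software is not public, so the sketch below uses OpenCV's fisheye module with placeholder intrinsics K and distortion coefficients D; both are assumptions that would normally come from lens/dome-port calibration, and the file name is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("underwater_grid.jpg")  # hypothetical fisheye frame
h, w = img.shape[:2]

# Placeholder calibration results; real K and D come from cv2.fisheye.calibrate.
K = np.array([[420.0, 0.0, w / 2],
              [0.0, 420.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0]).reshape(4, 1)  # k1..k4 fisheye coefficients

# Scale the new camera matrix to keep a reasonable field of view after correction.
K_new = K.copy()
K_new[0, 0] *= 0.6
K_new[1, 1] *= 0.6

undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K_new)
cv2.imwrite("underwater_grid_corrected.jpg", undistorted)
```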
APA, Harvard, Vancouver, ISO, and other styles
6

Peng, Fu Qiang, Qiang Chen, and Jun Wei Bao. "Distortion Correction for the Gun Barrel Bore Panoramic Image." Applied Mechanics and Materials 427-429 (September 2013): 680–85. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.680.

Full text
Abstract:
The Single Reflector Panoramic Imaging System (SRPIS) has been widely used because of its advantages such as simple structure, fast imaging, integration and miniaturization. It can observe objects around the reflector mirror, which makes it suitable for the quality inspection of the gun barrel bore. However, its images often suffer from serious distortions in the radial and tangential directions. Therefore, to ensure the accuracy of captured images, the distortion must be eliminated. In this paper, a distortion correction method is proposed based on the imaging characteristics of SRPIS. First, the relationship between the height of a point on the gun barrel bore and the radius of its image point is derived. Then the correction model is built based on this relationship. For the captured annular image, a new chessboard corner detection algorithm is proposed, and the correction parameters are obtained by applying the algorithm to the labeled image. Experimental results demonstrate that the correction of radial and tangential distortions is satisfactory, with the error controlled at the sub-pixel level.
APA, Harvard, Vancouver, ISO, and other styles
7

Belov, A. M., and A. Y. Denisova. "Scene distortion detection algorithm using multitemporal remote sensing images." Computer Optics 43, no. 5 (October 2019): 869–85. http://dx.doi.org/10.18287/2412-6179-2019-43-5-869-885.

Full text
Abstract:
Multitemporal remote sensing images of a particular territory might include accidental scene distortions. Scene distortion is a significant local brightness change caused by the scene overlap with some opaque object or a natural phenomenon coincident with the moment of image capture, for example, clouds and shadows. The fact that different images of the scene are obtained at different instants of time makes the appearance, location and shape of scene distortions accidental. In this article we propose an algorithm for detecting accidental scene distortions using a dataset of multitemporal remote sensing images. The algorithm applies superpixel segmentation and anomaly detection methods to get binary images of scene distortion location for each image in the dataset. The algorithm is adapted to handle images with different spectral and spatial sampling parameters, which makes it more multipurpose than the existing solutions. The algorithm's quality was assessed using model images with scene distortions for two remote sensing systems. The experiments showed that the proposed algorithm with the optimal settings can reach a detection accuracy of about 90% and a false detection error of about 10%.
APA, Harvard, Vancouver, ISO, and other styles
8

Saifeldeen, Abdalmajeed, Shu Hong Jiao, and Wei Liu. "Entirely Blind Image Quality Assessment Estimator." Applied Mechanics and Materials 543-547 (March 2014): 2496–99. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2496.

Full text
Abstract:
Most general-purpose no-reference image quality assessment algorithms require prior knowledge about anticipated distortions and their corresponding human opinion scores. However, not all distortion types may exist when the model is created. Practical no-reference image quality assessment algorithms should therefore predict the quality of distorted images without prior knowledge about the images or their distortions. In this study, a blind/no-reference, opinion- and distortion-unaware image quality assessment algorithm based on natural scenes is developed. The proposed approach uses a set of novel features to measure image quality in the spatial domain. The extracted features, derived from the scene gist, are formed using Weibull distribution statistics. When the proposed algorithm is tested on the LIVE database, experiments show that it correlates well with subjective opinion scores. They also show that the proposed algorithm significantly outperforms the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods. The results not only compete well with the recently developed natural image quality evaluator (NIQE) model but also outperform it.
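Weibull statistics computed in the spatial domain can be illustrated with a gradient-magnitude fit. The sketch below is my own rough reconstruction of this family of features, not the authors' exact feature set:

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_gradient_features(gray):
    """Fit a Weibull distribution to the gradient magnitudes of a grayscale image."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 0]                       # Weibull support is (0, inf)
    shape, _, scale = weibull_min.fit(mag, floc=0)
    return shape, scale                      # candidate quality-aware features

# Example on a synthetic image; distortions typically shift both parameters.
rng = np.random.default_rng(0)
img = rng.normal(128, 20, (64, 64))
print(weibull_gradient_features(img))
```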
APA, Harvard, Vancouver, ISO, and other styles
9

Yahanda, Alexander T., Timothy J. Goble, Peter T. Sylvester, Gretchen Lessman, Stanley Goddard, Bridget McCollough, Amar Shah, Trevor Andrews, Tammie L. S. Benzinger, and Michael R. Chicoine. "Impact of 3-Dimensional Versus 2-Dimensional Image Distortion Correction on Stereotactic Neurosurgical Navigation Image Fusion Reliability for Images Acquired With Intraoperative Magnetic Resonance Imaging." Operative Neurosurgery 19, no. 5 (June 10, 2020): 599–607. http://dx.doi.org/10.1093/ons/opaa152.

Full text
Abstract:
Abstract BACKGROUND Fusion of preoperative and intraoperative magnetic resonance imaging (iMRI) studies during stereotactic navigation may be very useful for procedures such as tumor resections but can be subject to error because of image distortion. OBJECTIVE To assess the impact of 3-dimensional (3D) vs 2-dimensional (2D) image distortion correction on the accuracy of auto-merge image fusion for stereotactic neurosurgical images acquired with iMRI using a head phantom in different surgical positions. METHODS T1-weighted intraoperative images of the head phantom were obtained using 1.5T iMRI. Images were postprocessed with 2D and 3D image distortion correction. These studies were fused to T1-weighted preoperative MRI studies performed on a 1.5T diagnostic MRI. The reliability of the auto-merge fusion of these images for 2D and 3D correction techniques was assessed both manually using the stereotactic navigation system and via image analysis software. RESULTS Eight surgical positions of the head phantom were imaged with iMRI. Greater image distortion occurred with increased distance from isocenter in all 3 axes, reducing accuracy of image fusion to preoperative images. Visually reliable image fusions were accomplished in 2/8 surgical positions using 2D distortion correction and 5/8 using 3D correction. Three-dimensional correction yielded superior image registration quality as defined by higher maximum mutual information values, with improvements ranging between 2.3% and 14.3% over 2D correction. CONCLUSION Using 3D distortion correction enhanced the reliability of surgical navigation auto-merge fusion of phantom images acquired with iMRI across a wider range of head positions and may improve the accuracy of stereotactic navigation using iMRI images.
APA, Harvard, Vancouver, ISO, and other styles
10

Archip, Neculai, Olivier Clatz, Stephen Whalen, Simon P. DiMaio, Peter M. Black, Ferenc A. Jolesz, Alexandra Golby, and Simon K. Warfield. "Compensation of Geometric Distortion Effects on Intraoperative Magnetic Resonance Imaging for Enhanced Visualization in Image-guided Neurosurgery." Operative Neurosurgery 62, suppl_1 (March 1, 2008): ONS209—ONS216. http://dx.doi.org/10.1227/01.neu.0000317395.08466.e6.

Full text
Abstract:
Abstract Objective: Preoperative magnetic resonance imaging (MRI), functional MRI, diffusion tensor MRI, magnetic resonance spectroscopy, and positron-emission tomographic scans may be aligned to intraoperative MRI to enhance visualization and navigation during image-guided neurosurgery. However, several effects (both machine- and patient-induced distortions) lead to significant geometric distortion of intraoperative MRI. Therefore, a precise alignment of these image modalities requires correction of the geometric distortion. We propose and evaluate a novel method to compensate for the geometric distortion of intraoperative 0.5-T MRI in image-guided neurosurgery. Methods: In this initial pilot study, 11 neurosurgical procedures were prospectively enrolled. The scheme used to correct the geometric distortion is based on a nonrigid registration algorithm introduced by our group. This registration scheme uses image features to establish correspondence between images. It estimates a smooth geometric distortion compensation field by regularizing the displacements estimated at the correspondences. A patient-specific linear elastic material model is used to achieve the regularization. The geometry of intraoperative images (0.5 T) is changed so that the images match the preoperative MRI scans (3 T). Results: We compared the alignment between preoperative and intraoperative imaging using 1) only rigid registration without correction of the geometric distortion, and 2) rigid registration and compensation for the geometric distortion. We evaluated the success of the geometric distortion correction algorithm by measuring the Hausdorff distance between boundaries in the 3-T and 0.5-T MRIs after rigid registration alone and with the addition of geometric distortion correction of the 0.5-T MRI. Overall, the mean magnitude of the geometric distortion measured on the intraoperative images is 10.3 mm with a minimum of 2.91 mm and a maximum of 21.5 mm. The measured accuracy of the geometric distortion compensation algorithm is 1.93 mm. There is a statistically significant difference between the accuracy of the alignment of preoperative and intraoperative images, both with and without the correction of geometric distortion (P < 0.001). Conclusion: The major contributions of this study are 1) identification of geometric distortion of intraoperative images relative to preoperative images, 2) measurement of the geometric distortion, 3) application of nonrigid registration to compensate for geometric distortion during neurosurgery, 4) measurement of residual distortion after geometric distortion correction, and 5) phantom study to quantify geometric distortion.
APA, Harvard, Vancouver, ISO, and other styles
11

Feng, Liang Shan, Ying Ping Huang, Zhu Kai Xu, and Yong Zhang. "Image Calibration for Machine Vision Inspection System." Applied Mechanics and Materials 556-562 (May 2014): 2841–45. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2841.

Full text
Abstract:
Machine vision systems have been widely used for a variety of applications in industrial testing and measurement. However, image distortion caused by the optical system and by system re-positioning introduces errors into a machine vision detection system. This paper addresses three types of image distortion: optical distortion and perspective deformation, image translation and rotation, and image scale change. To rectify optical distortion and perspective deformation, a black-and-white grid pattern is used as a standard template for finding multiple matching points between distorted image points and ideal image points, and then a polynomial mathematical model simulating the geometric distortion is established. The distortion coefficients are calculated by the least-squares method. Image translation and rotation are compensated by using a floating fixture origin as the reference point. Image scale change is remedied by using a standard scale factor to shrink or enlarge an actual image to its standard size. The experimental results demonstrate the effectiveness of the proposed approach.
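The optical-distortion step, fitting a polynomial mapping from distorted grid points to their ideal positions by least squares, can be sketched as follows. The second-order basis and the toy point sets are assumptions; the paper does not state the polynomial degree here.

```python
import numpy as np

def poly_basis(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly_warp(distorted_pts, ideal_pts):
    """Least-squares fit of a 2nd-order polynomial mapping distorted -> ideal points."""
    A = poly_basis(distorted_pts)
    coeff_x, *_ = np.linalg.lstsq(A, ideal_pts[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, ideal_pts[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_poly_warp(pts, coeff_x, coeff_y):
    A = poly_basis(pts)
    return np.column_stack([A @ coeff_x, A @ coeff_y])

# Toy example: detected grid points vs. their ideal layout on the template.
ideal = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float) * 10
distorted = ideal + 0.002 * (ideal - 20) ** 2     # mild synthetic distortion
cx, cy = fit_poly_warp(distorted, ideal)
print("max residual:", np.abs(apply_poly_warp(distorted, cx, cy) - ideal).max())
```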
APA, Harvard, Vancouver, ISO, and other styles
12

BROWN, MICHAEL S., and YAU-CHAT TSOI. "DISTORTION REMOVAL FOR CAMERA-IMAGED PRINT MATERIALS USING BOUNDARY INTERPOLATION." International Journal of Image and Graphics 05, no. 02 (April 2005): 311–28. http://dx.doi.org/10.1142/s021946780500177x.

Full text
Abstract:
Camera-based imaging is used to digitize printed materials whose shape and size are not suitable for flatbed and Xerox scanning. Because these materials are not physically pressed flat before imaging, the resulting imaged content often appears distorted due to the material's underlying shape. We present a novel approach to correct common image distortions that arise in camera-imaged printed materials. Our approach uses the boundary information of the imaged material to compute a corrective warp to undo the distortion. Our algorithm is unique in that it simultaneously corrects for a variety of distortions including skew, binder curl, and distortions from folds. In addition, 2.5D information about the boundary can be incorporated to compensate for depth distortion.
APA, Harvard, Vancouver, ISO, and other styles
13

Hwang, Alex D., and Eli Peli. "Stereoscopic Three-dimensional Optic Flow Distortions Caused by Mismatches Between Image Acquisition and Display Parameters." Journal of Imaging Science and Technology 63, no. 6 (November 1, 2019): 60412–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2019.63.6.060412.

Full text
Abstract:
Abstract We analyzed the impact of common stereoscopic three-dimensional (S3D) depth distortion on S3D optic flow in virtual reality environments. The depth distortion is introduced by mismatches between the image acquisition and display parameters. The results show that such S3D distortions induce large S3D optic flow distortions and may even induce partial/full optic flow reversal within a certain depth range, depending on the viewer’s moving speed and the magnitude of S3D distortion. We hypothesize that the S3D optic flow distortion may be a source of intra-sensory conflict that could be a source of visually induced motion sickness.
APA, Harvard, Vancouver, ISO, and other styles
14

Sazzad, Z. M. Parvez, Roushain Akhter, J. Baltes, and Y. Horita. "Objective No-Reference Stereoscopic Image Quality Prediction Based on 2D Image Features and Relative Disparity." Advances in Multimedia 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/256130.

Full text
Abstract:
Stereoscopic images are widely used to enhance the viewing experience of three-dimensional (3D) imaging and communication systems. In this paper, we propose an image-feature- and disparity-dependent quality evaluation metric, which incorporates human visual system characteristics. We believe perceived distortions and disparity of any stereoscopic image are strongly dependent on local features, such as edge (i.e., non-plane areas of an image) and non-edge (i.e., plane areas of an image) areas within the image. Therefore, a no-reference perceptual quality assessment method is developed for JPEG-coded stereoscopic images based on segmented local features of distortions and disparity. Local feature information, such as edge and non-edge area based relative disparity estimation, as well as the blockiness and the edge distortion within the blocks of images, is evaluated in this method. A subjective stereo image database is used for evaluation of the metric. The subjective experiment results indicate that our metric has sufficient prediction performance.
APA, Harvard, Vancouver, ISO, and other styles
15

Del Gallego, Neil Patrick, Joel Ilao, and Macario Cordel. "Blind First-Order Perspective Distortion Correction Using Parallel Convolutional Neural Networks." Sensors 20, no. 17 (August 30, 2020): 4898. http://dx.doi.org/10.3390/s20174898.

Full text
Abstract:
In this work, we present a network architecture with parallel convolutional neural networks (CNN) for removing perspective distortion in images. While other works generate corrected images through the use of generative adversarial networks or encoder-decoder networks, we propose a method wherein three CNNs are trained in parallel, to predict a certain element pair in the 3×3 transformation matrix M̂. The corrected image is produced by transforming the distorted input image using M̂⁻¹. The networks are trained from our generated distorted image dataset using KITTI images. Experimental results show promise in this approach, as our method is capable of correcting perspective distortions on images and outperforms other state-of-the-art methods. Our method also recovers the intended scale and proportion of the image, which is not observed in other works.
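The correction step itself, warping the distorted input with the inverse of the predicted 3×3 matrix, is a one-liner once M̂ is known. A sketch with an assumed matrix; the parallel CNNs that predict M̂ are not reproduced here, and the file names are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("distorted.png")                # hypothetical distorted input
h, w = img.shape[:2]

# Assume the parallel CNNs predicted this 3x3 perspective matrix M_hat.
M_hat = np.array([[1.00, 0.08, 0.0],
                  [0.02, 1.00, 0.0],
                  [1e-4, 2e-4, 1.0]])

# Apply the inverse transform to undo the estimated perspective distortion.
corrected = cv2.warpPerspective(img, np.linalg.inv(M_hat), (w, h))
cv2.imwrite("corrected.png", corrected)
```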
APA, Harvard, Vancouver, ISO, and other styles
16

De, Kanjar, and Masilamani V. "NO-REFERENCE IMAGE QUALITY MEASURE FOR IMAGES WITH MULTIPLE DISTORTIONS USING RANDOM FORESTS FOR MULTI METHOD FUSION." Image Analysis & Stereology 37, no. 2 (July 9, 2018): 105. http://dx.doi.org/10.5566/ias.1534.

Full text
Abstract:
Over the years, image quality assessment has been one of the active areas of research in image processing. Distortion in images can be caused by various sources such as noise, blur, transmission channel errors, and compression artifacts. Image distortions can occur during the image acquisition process (blur/noise), image compression (ringing and blocking artifacts) or during the transmission process. A single image can be distorted by multiple sources, and assessing the quality of such images is an extremely challenging task. The human visual system can easily judge image quality in such cases, but for a computer algorithm the task of quality assessment is very difficult. In this paper, we propose a new no-reference image quality assessment method for images corrupted by more than one type of distortion. The proposed technique is compared with the best-known framework for image quality assessment of multiply distorted images and with standard state-of-the-art full-reference and no-reference image quality assessment techniques.
APA, Harvard, Vancouver, ISO, and other styles
17

Reino, Anthony J., William Lawson, Baxter J. Garcia, and Robert J. Greenstein. "Three Dimensional Video Imaging for Endoscopic Sinus Surgery and Diagnosis." American Journal of Rhinology 9, no. 4 (July 1995): 197–202. http://dx.doi.org/10.2500/105065895781873746.

Full text
Abstract:
Technological advances in video imaging over the last decade have resulted in remarkable additions to the armamentarium of instrumentation for the otolaryngologist. The use of video cameras and computer generated imaging in the operating room and office is invaluable for documentation and teaching purposes. Despite the obvious advantages of these systems, problems are evident, the most serious of which include image distortion and inability to judge depth of field. For more than 6 decades 3D imaging has been neither technically nor commercially successful. Reasons include alignment difficulties and image distortion. The result is “visual fatigue,” usually in about 15 minutes. At its extreme, this may be characterized by headache, nausea, and even vomiting. In this study, we employed the first 3D video imager to electronically manipulate a single video source to produce 3D images; therefore, neither alignment nor image distortions were produced. Of interest to the clinical surgeon, “visual fatigue” does not seem to occur; however, with prolonged procedures (greater than 2 hours) there exists the potential for physical intolerance for some individuals. This is the first unit that is compatible with any rigid or flexible videoendoscopic system and the small diameter endoscopes available for endoscopic sinus surgery. Moreover, prerecorded 2D tapes may be viewed in 3D on an existing VCR. The 3D image seems to provide enhanced anatomic awareness with less image distortion. We have found this system to be optically superior to the 2D video imagers currently available.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Yin, Xuehan Bai, Junhua Yan, Yongqi Xiao, C. R. Chatwin, R. C. D. Young, and P. Birch. "No-Reference Image Quality Assessment Based on Multi-Order Gradients Statistics." Journal of Imaging Science and Technology 64, no. 1 (January 1, 2020): 10505–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.1.010505.

Full text
Abstract:
Abstract A new blind image quality assessment method called No-Reference Image Quality Assessment Based on Multi-Order Gradients Statistics is proposed, which is aimed at solving the problem that the existing no-reference image quality assessment methods cannot determine the type of image distortion and that the quality evaluation has poor robustness for different types of distortion. In this article, an 18-dimensional image feature vector is constructed from gradient magnitude features, relative gradient orientation features, and relative gradient magnitude features over two scales and three orders on the basis of the relationship between multi-order gradient statistics and the type and degree of image distortion. The feature matrix and distortion types of known distorted images are used to train an AdaBoost_BP neural network to determine the image distortion type; the feature matrix and subjective scores of known distorted images are used to train an AdaBoost_BP neural network to determine the image distortion degree. A series of comparative experiments were carried out using Laboratory of Image and Video Engineering (LIVE), LIVE Multiply Distorted Image Quality, Tampere Image, and Optics Remote Sensing Image databases. Experimental results show that the proposed method has high distortion type judgment accuracy and that the quality score shows good subjective consistency and robustness for all types of distortion. The performance of the proposed method is not constricted to a particular database, and the proposed method has high operational efficiency.
APA, Harvard, Vancouver, ISO, and other styles
19

Dadras Javan, F., F. Samadzadegan, S. Mehravar, and A. Toosi. "A REVIEW ON SPATIAL QUALITY ASSESSMENT METHODS FOR EVALUATION OF PAN-SHARPENED SATELLITE IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 255–61. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-255-2019.

Full text
Abstract:
Abstract. Nowadays, high-resolution fused satellite imagery is widely used in multiple remote sensing applications. Although the spectral quality of pan-sharpened images plays an important role in many applications, spatial quality becomes more important in numerous cases. High spatial quality of the fused image is essential for the extraction, identification and reconstruction of significant image objects, and results in high-quality large-scale maps, especially in urban areas. This paper identifies the most sensitive and effective methods for detecting the spatial distortion of fused images by implementing a number of spatial quality assessment indices that are utilized in the fields of remote sensing and image processing. In this regard, in order to assess the ability of the quality assessment indices to detect the amount of spatial distortion in fused images, the input images of the fusion process are affected by intentional spatial distortions based on non-registration error. The capabilities of the investigated metrics are evaluated on four different fused images derived from Ikonos and WorldView-2 initial images. The results clearly show that two methods, namely Edge Variance Distortion and the spatial component of the QNR metric called Ds, are the most sensitive and responsive to the introduced errors.
APA, Harvard, Vancouver, ISO, and other styles
20

Pollak, C., T. Stubbings, and H. Hutter. "Differential Image Distortion Correction." Microscopy and Microanalysis 7, no. 04 (July 2001): 335–40. http://dx.doi.org/10.1017/s1431927601010327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chakraborty, Dev P. "Image intensifier distortion correction." Medical Physics 14, no. 2 (March 1987): 249–52. http://dx.doi.org/10.1118/1.596078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ponomarenko, Mykola, Oleg Ieremeiev, Vladimir Lukin, and Karen Egiazarian. "An expandable image database for evaluation of full-reference image visual quality metrics." Electronic Imaging 2020, no. 10 (January 26, 2020): 137–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.10.ipas-137.

Full text
Abstract:
The traditional approach to collecting mean opinion score (MOS) values for the evaluation of full-reference image quality metrics has two serious drawbacks. The first drawback is the nonlinearity of MOS, only partially compensated by the use of rank-order correlation coefficients in further analysis. The second drawback is the limitation on the number of distortion types and distortion levels in an image database imposed by the maximum allowed time to carry out an experiment. One of the largest databases used for this purpose, TID2013, has almost reached these limitations, which makes an extension of TID2013 within the boundaries of this approach practically unfeasible. In this paper, a novel methodology to collect MOS values, with the possibility of increasing the size of a database indefinitely by adding new types of distortions, is proposed. For the proposed methodology, MOS values are collected for pairs of distortions, one of them being a signal-dependent Gaussian noise. A technique for effective linearization and normalization of MOS is described. Extensive experiments on linearization of MOS values to extend the TID2013 database are carried out.
APA, Harvard, Vancouver, ISO, and other styles
23

Mahmoudi, Arshiya, Mehdi Sabzehparvar, and Mahdi Mortazavi. "A virtual environment for evaluation of computer vision algorithms under general airborne camera imperfections." Journal of Navigation 74, no. 4 (March 23, 2021): 801–21. http://dx.doi.org/10.1017/s0373463321000060.

Full text
Abstract:
This paper describes a camera simulation framework for validating machine vision algorithms under general airborne camera imperfections. Lens distortion, image delay, rolling shutter, motion blur, interlacing, vignetting, image noise, and light level are modelled. This is the first simulation that considers all temporal distortions jointly, along with static lens distortions, in an online manner. Several innovations are proposed, including a motion tracking system allowing the camera to follow the flight log with eligible derivatives. A reverse pipeline, relating each pixel in the output image to pixels in the ideal input image, is developed. It is shown that the inverse lens distortion model and the inverse temporal distortion models are decoupled in this way. A short-time pixel displacement model is proposed to solve for temporal distortions (i.e. delay, rolling shutter, motion blur, and interlacing). Evaluation is done by several means, including regenerating an airborne dataset, regenerating the camera path on a calibration pattern, and evaluating the ability of the time displacement model to predict other frames. Qualitative evaluations are also made.
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Yuan Jin, Hua Zhong Shu, Tao Wang, Yang Wang, and You Yong Kong. "An Integrated Method for XRII Image Distortion Correction." Applied Mechanics and Materials 742 (March 2015): 252–56. http://dx.doi.org/10.4028/www.scientific.net/amm.742.252.

Full text
Abstract:
A distorted X-Ray Image Intensifier (XRII) image can have a negative effect on subsequent work in a C-arm CT imaging system. In this paper, we propose an integrated approach based on least squares and biharmonic spline interpolation to correct geometric distortions of XRII images. The method first uses morphological operations to extract the coordinate values of the control points. Then the least-squares method fits the extracted coordinate values in every row and computes additional coordinate values by fixing the length in every row. Finally, biharmonic spline interpolation is used to interpolate all the coordinate values and correct the distorted XRII image. The experiments show that the integrated method can effectively correct the distorted XRII image.
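The interpolation step, a smooth surface fitted through the extracted control-point displacements, can be sketched with SciPy. Here `Rbf` with a thin-plate kernel stands in for the biharmonic spline of the paper, and the control-point coordinates and displacements are placeholders, not values from the paper.

```python
import numpy as np
from scipy.interpolate import Rbf

# Placeholder control points: detected (distorted) positions and the displacement
# needed to move each one to its known ideal position on the calibration grid.
xd = np.array([10.2, 40.5, 70.9, 101.1, 12.0, 41.8, 71.5, 101.9])
yd = np.array([10.1, 10.4, 10.8, 11.2, 40.3, 40.6, 41.0, 41.5])
dx = np.array([-0.2, -0.5, -0.9, -1.1, -2.0, -1.8, -1.5, -1.9])   # x-displacement
dy = np.array([-0.1, -0.4, -0.8, -1.2, -0.3, -0.6, -1.0, -1.5])   # y-displacement

# Thin-plate RBF as a stand-in for the biharmonic spline.
fx = Rbf(xd, yd, dx, function="thin_plate")
fy = Rbf(xd, yd, dy, function="thin_plate")

# Dense displacement field that would be used to resample the distorted image.
gx, gy = np.meshgrid(np.arange(0, 110, 10.0), np.arange(0, 50, 10.0))
print(fx(gx, gy).shape, fy(gx, gy).shape)
```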
APA, Harvard, Vancouver, ISO, and other styles
25

Gupta, Praful, Christos Bampis, Jack Glover, Nicholas Paulter, and Alan Bovik. "Multivariate Statistical Approach to Image Quality Tasks." Journal of Imaging 4, no. 10 (October 12, 2018): 117. http://dx.doi.org/10.3390/jimaging4100117.

Full text
Abstract:
Many existing natural scene statistics-based no reference image quality assessment (NR IQA) algorithms employ univariate parametric distributions to capture the statistical inconsistencies of bandpass distorted image coefficients. Here, we propose a multivariate model of natural image coefficients expressed in the bandpass spatial domain that has the potential to capture higher order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information. We also demonstrate the violation of Gaussianity assumptions that occur when locally estimating the energies of distorted image coefficients. Thus, we propose a generalized Gaussian-based local contrast estimator as a way to implement non-linear local gain control, which facilitates the accurate modeling of both pristine and distorted images. We integrate the novel approach of generalized contrast normalization with multivariate modeling of bandpass image coefficients into a holistic NR IQA model, which we refer to as multivariate generalized contrast normalization (MVGCN). We demonstrate the improved performance of MVGCN on quality-relevant tasks on multiple imaging modalities, including visible light image quality prediction and task success prediction on distorted X-ray images.
APA, Harvard, Vancouver, ISO, and other styles
26

Xing, Ji Sheng, Gang Wang, Shi Gang Wang, Wen Hai Zhuang, and Jin Zhong Wu. "CCD Image Geometrical Distortion Correct and Large Touch Screen." Advanced Materials Research 488-489 (March 2012): 1323–28. http://dx.doi.org/10.4028/www.scientific.net/amr.488-489.1323.

Full text
Abstract:
There are geometric distortions in all images produced by CCD camera lenses, and these distortions are nonlinear. Coordinate conversion is an effective method for geometric correction, but the amount of computation grows as the degree of the polynomial increases, which makes it impossible to apply directly in a real-time correction system. This article introduces a correction by divided areas using linear polynomials on the basis of a higher-degree polynomial coordinate conversion. In this method the image is first divided into uneven ring-like areas. In each area, a linear polynomial is used to approximate the higher-degree polynomial. Because CCD camera lens distortions are radial, high precision can be achieved with the linear polynomials derived from the higher-degree polynomial, while the amount of computation is reduced to meet the needs of real-time correction. Applied to a large touch screen, this method can correct the distortion effectively and is useful in engineering.
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Shuang, Ming Cai Shan, and Xiang Jie Kong. "Distortion Analysis in the Axial Offset Parallel Video System." Applied Mechanics and Materials 644-650 (September 2014): 1498–501. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.1498.

Full text
Abstract:
This paper discusses the origins of image distortions in the axial offset parallel video system and how to improve viewing comfort. The types and characteristics of image distortions in some stereoscopic video systems are presented first. The geometry equations of stereoscopic camera and display systems are then deduced. The typical distortions in the axial offset parallel video system can be obtained, and the key factors behind the image distortions can be identified, by analysing these equations. These distortions include depth non-linearity, the puppet-theater effect, the cardboard effect, and shearing distortion. Simulation software is programmed for verification and analysis of the above distortions, which are plotted against the variation of the system parameters.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, J. Y., and J. S. Yoon. "Image Distortion Compensation by Using a Polynomial Model in an X-Ray Digital Tomosynthesis System." Key Engineering Materials 297-300 (November 2005): 2034–39. http://dx.doi.org/10.4028/www.scientific.net/kem.297-300.2034.

Full text
Abstract:
X-ray technology has been widely used in a number of industrial applications for monitoring and inspecting inner defects that can hardly be found by normal vision systems, such as in a ball grid array (BGA) or a flip chip array (FCA). Digital tomosynthesis (DT) is one of the most useful X-ray cross-sectional imaging methods for PCB inspection, and it usually uses an X-ray image intensifier. However, the image intensifier distorts X-ray images severely in both shape and intensity. This distortion breaks the correspondences between those images and prevents us from acquiring accurate cross-section images. Therefore, image distortion compensation is one of the most important issues in realizing a DT system. In this paper, an image distortion compensation method for an X-ray DT system is presented. It uses a general polynomial distortion model on the two-dimensional plane that can cope with arbitrary, complex and various forms of distortion. Experimental results show a great improvement in compensation speed and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Jing, and Guang Xue Chen. "Research on the Geometric Distortion Auto-Correction Algorithm for Image Scanned." Applied Mechanics and Materials 644-650 (September 2014): 4477–81. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4477.

Full text
Abstract:
In the process of acquiring digital images, whether by scanning or photographing, a rugged manuscript or the nonlinear characteristics of the camera lens may lead to severe geometric distortions in the captured images. Therefore, geometric rectification of the distorted images is necessary before they can be further processed. In this paper, a careful analysis of the cases of geometric distortion is carried out, and an auto-correction method is then derived. In the correction process for scanned images, global thresholding is used to extract the required feature region, the Sobel operator is used to detect edges, and the Hough transform is then used to automatically extract the boundary. After the Hough transform, parameters associated with the control points are obtained for correcting the images. Extensive experiments proved that the method can recognize the outline quickly and correct geometric distortions effectively. The results show that the method is applicable and has high accuracy.
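The boundary-extraction pipeline described (global thresholding, Sobel edge detection, Hough transform) maps directly onto OpenCV primitives. A minimal sketch; the threshold value, the Hough vote count, and the file name are chosen arbitrarily.

```python
import cv2
import numpy as np

gray = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical scan

# 1. Global thresholding to isolate the page region.
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# 2. Sobel gradients, combined into an 8-bit edge magnitude map.
sx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
sy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.hypot(sx, sy))

# 3. Hough transform to recover candidate page-boundary lines.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
print(0 if lines is None else len(lines), "candidate boundary lines")
```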
APA, Harvard, Vancouver, ISO, and other styles
30

Tsukamoto, Naoko, Yoshihiro Sugaya, and Shinichiro Omachi. "Spectrum Correction Using Modeled Panchromatic Image for Pansharpening." Journal of Imaging 6, no. 4 (April 6, 2020): 20. http://dx.doi.org/10.3390/jimaging6040020.

Full text
Abstract:
Pansharpening is a method applied for the generation of high-spatial-resolution multi-spectral (MS) images using panchromatic (PAN) and multi-spectral images. A common challenge in pansharpening is to reduce the spectral distortion caused by increasing the resolution. In this paper, we propose a method for reducing the spectral distortion based on the intensity–hue–saturation (IHS) method targeting satellite images. The IHS method improves the resolution of an RGB image by replacing the intensity of the low-resolution RGB image with that of the high-resolution PAN image. The spectral characteristics of the PAN and MS images are different, and this difference may cause spectral distortion in the pansharpened image. Although many solutions for reducing spectral distortion using a modeled spectrum have been proposed, the quality of the outcomes obtained by these approaches depends on the image dataset. In the proposed technique, we model a low-spatial-resolution PAN image according to a relative spectral response graph, and then the corrected intensity is calculated using the model and the observed dataset. Experiments were conducted on three IKONOS datasets, and the results were evaluated using some major quality metrics. This quantitative evaluation demonstrated the stability of the pansharpened images and the effectiveness of the proposed method.
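The baseline built on here, IHS component substitution, is compact enough to sketch: upsample the MS image, compute its intensity, and inject the PAN detail. This is the generic fast-IHS scheme, not the authors' spectral-response-modelled correction, and the image file names are placeholders.

```python
import cv2
import numpy as np

ms = cv2.imread("ms_rgb.tif").astype(np.float32)              # low-res multispectral
pan = cv2.imread("pan.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Upsample MS to the PAN grid, then do fast-IHS component substitution:
# every band receives the difference between PAN and the MS intensity.
ms_up = cv2.resize(ms, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)
intensity = ms_up.mean(axis=2)
sharpened = ms_up + (pan - intensity)[:, :, None]

cv2.imwrite("pansharpened.tif", np.clip(sharpened, 0, 255).astype(np.uint8))
```

Spectral distortion in this baseline comes precisely from the mismatch between the PAN response and the simple band average used as intensity, which is what the paper's modelled PAN image is meant to correct.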
APA, Harvard, Vancouver, ISO, and other styles
31

Svystun, Olesya, Ann Wenzel, Lars Schropp, and Rubens Spin-Neto. "Image-stitching artefacts and distortion in CCD-based cephalograms and their association with sensor type and head movement: ex vivo study." Dentomaxillofacial Radiology 49, no. 3 (March 2020): 20190315. http://dx.doi.org/10.1259/dmfr.20190315.

Full text
Abstract:
Objectives: To assess presence and severity of image-stitching artefacts and distortion in lateral cephalograms acquired by CCD-based sensors and their association with movement. Methods: A human skull was mounted on a robot simulating five head movement types (anteroposterior translation/lifting/nodding/lateral rotation/tremor), at three distances (0.75/1.5/3 mm), based on two patterns (skull returning/not returning to the initial position, except for tremor). Three cephalometric units, two ProMax-2D (Planmeca Oy, Finland), one with Dimax-3 (D-3) and one with Dimax-4 (D-4) sensor, and one Orthophos-SL (ORT, Dentsply-Sirona, Germany), acquired cephalograms during the predetermined movements, in duplicate (54 with movement and 28 controls with no movement per unit). One observer assessed the presence of an image-stitching line (none/thin/thin with vertical stripes or thick), misalignment between the anatomical structure display (none/<1/1–3/>3 mm), and distortion in each image quadrant (present/absent), in duplicate. Severe image-stitching artefacts were defined for images scored with a thin line with vertical stripes or thick line and/or misalignment between anatomical structure display ≥1 mm. Severe distortion was defined for images scored with distortion in both anterior quadrants of the skull. κ-statistics provided intraobserver agreement. Results: Intraobserver reproducibility was >0.8 (all assessed parameters). Severe image-stitching artefacts were scored in 70.4 and 18.5% of D-3 and D-4 movement images, respectively. Severe distortion was scored in 64.8% of D-3, 5.6% of D-4 and 37% of ORT movement images. Neither severe image-stitching artefacts nor severe distortion were observed in control images. Conclusion: Sensor type, movement type, distance and pattern affected presence and severity of image-stitching artefacts and distortion in CCD-based cephalograms.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Jing, and Xiuqin Su. "Method of Image Quality Improvement for Atmospheric Turbulence Degradation Sequence Based on Graph Laplacian Filter and Nonrigid Registration." Mathematical Problems in Engineering 2018 (July 19, 2018): 1–15. http://dx.doi.org/10.1155/2018/4970907.

Full text
Abstract:
It is challenging to restore a clear image from an atmospheric degraded sequence. The main reason for the image degradation is geometric distortion and blurring caused by turbulence. In this paper, we present a method to eliminate geometric distortion and blur and to recover a single high-quality image from the degraded sequence images. First, we use optical flow technology to register the sequence images, thereby suppressing the geometric deformation of each frame. Next, sequence images are summed by a temporal filter to obtain a single blurred image. Then, the graph Laplacian matrix is used as the cost function to construct the regularization term. The final clear image and point spread function are obtained by iteratively solving the problem. Experiments show that the method can effectively eliminate the distortion and blur, restore the image details, and significantly improve the image quality.
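The first two stages, optical-flow registration of each frame to a reference and temporal averaging of the registered frames, can be sketched with OpenCV's Farnebäck flow. The deblurring stage with the graph Laplacian prior is omitted, and the frame file names are placeholders.

```python
import cv2
import numpy as np

def register_and_average(frames):
    """Warp each frame to the first one with dense optical flow, then average."""
    ref = frames[0]
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    stack = [ref.astype(np.float32)]
    for frame in frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Pull pixels of `frame` back along the estimated flow field.
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
        stack.append(warped.astype(np.float32))
    return np.mean(stack, axis=0).astype(np.uint8)

# Hypothetical turbulence-degraded sequence (grayscale frames of equal size).
frames = [cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE) for i in range(10)]
cv2.imwrite("temporally_fused.png", register_and_average(frames))
```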
APA, Harvard, Vancouver, ISO, and other styles
33

Brinded, Philip M. J., John A. Bushnell, Jan M. McKenzie, and J. Elisabeth Wells. "Body-image distortion revisited: Temporal instability of body image distortion in anorexia nervosa." International Journal of Eating Disorders 9, no. 6 (November 1990): 695–701. http://dx.doi.org/10.1002/1098-108x(199011)9:6<695::aid-eat2260090612>3.0.co;2-e.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zhao, Yu, Fan Feng Meng, and Jiang Feng. "Unmanned Aerial Vehicle Based Agricultural Remote Sensing Multispectral Image Processing Methods." Advanced Materials Research 905 (April 2014): 585–88. http://dx.doi.org/10.4028/www.scientific.net/amr.905.585.

Full text
Abstract:
In order to provide more flexibility in remote sensing image collection, unmanned aerial vehicles have been applied to many kinds of agricultural production. Images acquired from a UAV-based RS system are very useful as a result of their high spatial resolution and low turn-around time. This paper discusses general methods for processing the multispectral RS data at the image-processing level. The correction of distortion caused by the sensor is introduced. The geometric distortion comprises sensor distortion and external distortion caused by external parameters. Finally, general image mosaicking methods are discussed.
APA, Harvard, Vancouver, ISO, and other styles
35

Guo, Yingchun, Gang Yan, Cuihong Xue, and Yang Yu. "Blind Assessment of Wavelet-Compressed Images Based On Subband Statistics of Natural Scenes." International Journal of Advanced Pervasive and Ubiquitous Computing 6, no. 1 (January 2014): 26–43. http://dx.doi.org/10.4018/ijapuc.2014010103.

Full text
Abstract:
This paper presents a no-reference image quality assessment metric that makes use of the wavelet subband statistics to evaluate the levels of distortions of wavelet-compressed images. The work is based on the fact that for distorted images the correlation coefficients of the adjacent scale subbands change proportionally with respect to the distortion of a compressed image. Subband similarity is used in this work to measure the correlations of the adjacent scale subbands of the same wavelet orientations. The higher the image quality is (i.e., less distortion), the greater the cosine similarity coefficient will be. Statistical analysis is applied to analyze the performance of the metric by evaluating the relationship between the human subjective assessment scores and the subband cosine similarities. Experimental results show that the proposed blind method for the quality assessment of wavelet-compressed images has sufficient prediction accuracy (high Pearson Correlation Coefficient, PCCs), sufficient prediction monotonicity (high Spearman Correlation Coefficient SCCs) and sufficient prediction consistency (low outlier ratios) and less running time. It is simple to calculate, has a clear physical meaning, and has a stable performance for the four image databases on which the method was tested.
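The core quantity, cosine similarity between same-orientation subbands at adjacent wavelet scales, can be sketched with PyWavelets. The crude 2× downsampling used to align subband sizes is my own simplification of the paper's procedure, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

def adjacent_subband_similarity(gray, wavelet="db2", levels=3):
    """Mean cosine similarity between same-orientation subbands of adjacent scales."""
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=levels)
    sims = []
    # coeffs[1] is the coarsest detail triple, coeffs[-1] the finest.
    for coarse, fine in zip(coeffs[1:-1], coeffs[2:]):
        for c, f in zip(coarse, fine):              # orientations H, V, D
            f_small = f[::2, ::2]                   # crude 2x downsample of finer band
            rows = min(c.shape[0], f_small.shape[0])
            cols = min(c.shape[1], f_small.shape[1])
            a = np.abs(c[:rows, :cols]).ravel()
            b = np.abs(f_small[:rows, :cols]).ravel()
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean(sims))

rng = np.random.default_rng(1)
print(adjacent_subband_similarity(rng.normal(size=(128, 128))))
```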
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, ZunYue, Zhen Luo, SanSan Ao, and YangChuan Cai. "Effect of Laser Welding Parameters on Weld Bowing Distortion of Thin Plates." High Temperature Materials and Processes 37, no. 4 (March 26, 2018): 299–311. http://dx.doi.org/10.1515/htmp-2016-0153.

Full text
Abstract:
Weld bowing distortions are detrimental to the assembly process, where laser process parameters such as laser power, welding speed, defocusing distance and gas flow rate play a significant role in determining the weld bowing distortion. Herein, weld bowing distortions in 1-mm-thick AA5052 aluminum were measured by the digital image correlation technique following laser welding. Two mathematical response models were developed to predict the laser weld bowing distortion according to the central composite rotatable design method. The optimized process parameters for minimum bowing distortion were obtained, and the influence of the laser process parameters on the weld bowing distortions was found.
APA, Harvard, Vancouver, ISO, and other styles
37

Moon, Cho-I., and Onseok Lee. "Adaptive Fine Distortion Correction Method for Stereo Images of Skin Acquired with a Mobile Phone." Sensors 20, no. 16 (August 11, 2020): 4492. http://dx.doi.org/10.3390/s20164492.

Full text
Abstract:
With the development of the mobile phone, we can acquire high-resolution images of the skin to observe its detailed features using a mobile camera. We acquire stereo images using a mobile camera to enable a three-dimensional (3D) analysis of the skin surface. However, geometric changes in the observed skin structure caused by the lens distortion of the mobile phone result in a low accuracy of the 3D information extracted through stereo matching. Therefore, our study proposes a Distortion Correction Matrix (DCM) to correct the fine distortion of close-up mobile images, pixel by pixel. We verified the correction performance by analyzing the results of correspondence point matching in the stereo image corrected using the DCM. We also confirmed the correction results of the image taken at the five different working distances and derived a linear regression model for the relationship between the angle of the image and the distortion ratio. The proposed DCM considers the distortion degree, which appears to be different in the left and right regions of the image. Finally, we performed a fine distortion correction, which is difficult to check with the naked eye. The results of this study can enable the accurate and precise 3D analysis of the skin surface using corrected mobile images.
APA, Harvard, Vancouver, ISO, and other styles
38

Inan, Yucel. "Assesment of the Image Distortion in Using Various Bit Lengths of Steganographic LSB." ITM Web of Conferences 22 (2018): 01026. http://dx.doi.org/10.1051/itmconf/20182201026.

Full text
Abstract:
Several methods have been developed and applied for protecting information. One of these is steganography. Steganographic techniques are used to transmit the information in an image to the receiver in a secure manner. There are two main principles in the steganographic process. The first is to hide the message in the image, and the second is to reduce the distortion of the image caused by the information hiding. By making changes to digital images, a lot of information can be placed in the image. Nevertheless, the changes to the image should not be noticeable. In this paper, the effect of using various bit lengths in the steganographic LSB method on image distortion is studied. The PSNR, SNR and MSE were used to assess the distortion rates of the images. Histograms were drawn to visualize the differences between the original and the encoded “stego-images”.
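A minimal reconstruction of this kind of experiment: embed a payload in the k least significant bits of a grayscale cover image and measure MSE/PSNR against the original. The payload and cover below are synthetic; the paper's own stego images are not reproduced.

```python
import numpy as np

def embed_lsb(cover, payload_bits, k):
    """Overwrite the k least significant bits of each pixel with payload bits."""
    flat = cover.astype(np.uint8).ravel().copy()
    n = min(flat.size, payload_bits.size // k)
    chunks = payload_bits[: n * k].reshape(n, k)
    values = chunks @ (1 << np.arange(k - 1, -1, -1))        # bits -> integers
    flat[:n] = (flat[:n] & ~np.uint8((1 << k) - 1)) | values.astype(np.uint8)
    return flat.reshape(cover.shape)

def psnr_mse(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return psnr, mse

rng = np.random.default_rng(42)
cover = rng.integers(0, 256, (256, 256), dtype=np.uint8)
bits = rng.integers(0, 2, 100_000)
for k in (1, 2, 4):                       # distortion grows with the bit length used
    stego = embed_lsb(cover, bits, k)
    print(k, psnr_mse(cover, stego))
```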
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, M., Y. Zhao, T. Fang, Q. Zhu, S. Yan, and F. Gao. "GEOMETRIC AND NON-LINEAR RADIOMETRIC DISTORTION ROBUST MULTIMODAL IMAGE MATCHING VIA EXPLOITING DEEP FEATURE MAPS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 233–40. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-233-2020.

Full text
Abstract:
Abstract. Image matching is a fundamental issue in multimodal image fusion. Most recent research focuses only on the non-linear radiometric distortion in coarsely registered multimodal images. The global geometric distortion between images must be eliminated based on prior information (e.g. direct geo-referencing information and ground sample distance) before these methods can be used to find correspondences. However, the prior information is not always available or accurate enough. In this case, users have to select some ground control points manually to perform image registration and make the methods work; otherwise, these methods will fail. To overcome this problem, we propose a robust deep-learning-based multimodal image matching method that can deal with geometric and non-linear radiometric distortion simultaneously by exploiting deep feature maps. It is observed in our study that some of the deep feature maps have similar grayscale distributions, and correspondences can be found in these maps using traditional geometric-distortion-robust matching methods even when significant non-linear radiometric differences exist between the original images. Therefore, we need only address the geometric distortion when dealing with deep feature maps, and then only the non-linear radiometric distortion in patch similarity measurement. The experimental results demonstrate that the proposed method performs better than state-of-the-art matching methods on multimodal images with both geometric and non-linear radiometric distortion.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Qiu Yun. "Depth Estimation Based Underwater Image Enhancement." Advanced Materials Research 926-930 (May 2014): 1704–7. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1704.

Full text
Abstract:
According to the image formation model and the nature of underwater images, we find that haze and color distortion seriously degrade underwater image data, lowering both the visibility and the quality of the data. Hence, aiming to reduce the noise and haze present in underwater images and to compensate for the color distortion, the dark channel prior model is used to enhance the underwater image. We compare the dark-channel-prior-based enhancement method with a contrast-stretching-based method. The experimental results show that the dark channel prior model is well suited to processing underwater images and demonstrate the superior performance of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
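For reference, here is a minimal sketch of standard dark-channel-prior dehazing of the kind applied in this entry; it follows the usual formulation t = 1 - omega * dark(I / A) and J = (I - A) / max(t, t0) + A, with parameter values chosen for illustration rather than taken from the paper.

```python
# Minimal sketch of dark-channel-prior dehazing (not the paper's exact code).
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over channels, then a local minimum filter."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(bgr, omega=0.95, t0=0.1, patch=15):
    img = bgr.astype(np.float64) / 255.0
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1.0 - omega * dark_channel(img / A, patch)
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```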
41

Gao, Farong, Kai Wang, Zhangyi Yang, Yejian Wang, and Qizhong Zhang. "Underwater Image Enhancement Based on Local Contrast Correction and Multi-Scale Fusion." Journal of Marine Science and Engineering 9, no. 2 (February 19, 2021): 225. http://dx.doi.org/10.3390/jmse9020225.

Full text
Abstract:
In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the original image is compensated using the red channel, and the compensated image is white-balanced. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the locally contrast-corrected image is fused with the sharpened image by the multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model. It can effectively resolve the color distortion, low contrast, and indistinct details of underwater images.
APA, Harvard, Vancouver, ISO, and other styles
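A minimal sketch of the pipeline shape described above, under stated assumptions: red-channel compensation and gray-world white balance, followed by a naive average of a locally contrast-corrected version (CLAHE is used here only as a stand-in for the paper's LCC) and a sharpened version; the paper itself uses multi-scale Laplacian-pyramid fusion rather than this simple blend.

```python
# Minimal sketch, not the authors' implementation.
import cv2
import numpy as np

def enhance_underwater(bgr):
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    # Compensate the attenuated red channel using the green channel.
    r = r + (g.mean() - r.mean()) * (1.0 - r) * g
    img = cv2.merge([b, g, r])
    # Gray-world white balance.
    img *= img.mean() / img.mean(axis=(0, 1))
    u8 = (np.clip(img, 0, 1) * 255).astype(np.uint8)
    # Version 1: local contrast correction (approximated with CLAHE on L).
    lab = cv2.cvtColor(u8, cv2.COLOR_BGR2LAB)
    l, a, b2 = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)
    v1 = cv2.cvtColor(cv2.merge([l, a, b2]), cv2.COLOR_LAB2BGR)
    # Version 2: unsharp-mask sharpening.
    v2 = cv2.addWeighted(u8, 1.5, cv2.GaussianBlur(u8, (0, 0), 3), -0.5, 0)
    # Naive fusion; the paper fuses the two versions at multiple scales.
    return cv2.addWeighted(v1, 0.5, v2, 0.5, 0)
```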
42

Shen, Peng Fei, Jie Yang, and Yuan Yi Xiong. "Image Defogging Based on Improved Guided Image Filtering." Applied Mechanics and Materials 536-537 (April 2014): 121–26. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.121.

Full text
Abstract:
In this paper, we analyze the principles of the dark-channel-prior defogging algorithm based on guided image filtering, point out its shortcomings, and derive an improved method. The dark channel prior assumes the absence of bright areas; in bright regions that do not satisfy the prior, the transmittance is estimated incorrectly, which causes color distortion in the defogged image. By introducing a tolerance mechanism that refines the transmittance, the algorithm can effectively handle this problem and overcome the color distortion that the dark channel prior produces in bright areas. Experimental results show that this modification is highly practical for image restoration: it eliminates the color distortion and significantly improves the visual effect.
APA, Harvard, Vancouver, ISO, and other styles
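A minimal sketch of a tolerance mechanism of the kind described above; the exact formula is an assumption, not the authors' derivation. In bright regions whose colour is within a tolerance K of the atmospheric light A, the estimated transmittance is scaled up so the recovery does not over-amplify those pixels.

```python
# Minimal sketch (assumed form of the tolerance mechanism).
import numpy as np

def refine_transmission(img, A, t, K=80.0 / 255.0):
    """img: HxWx3 float in [0, 1]; A: (3,) atmospheric light; t: HxW transmission."""
    # Distance of each pixel from the atmospheric light (max over channels).
    diff = np.max(np.abs(img - A), axis=2)
    # Scale factor > 1 only where the pixel lies within the tolerance K of A,
    # i.e. in bright sky-like regions that violate the dark channel prior.
    scale = np.maximum(K / np.maximum(diff, 1e-6), 1.0)
    return np.clip(t * scale, 0.0, 1.0)
```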
43

Bao, Jun Wei, Qiang Chen, and Fu Qiang Peng. "Nonlinear Image Mosaic of Pipe Inner Surface." Applied Mechanics and Materials 427-429 (September 2013): 1620–24. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1620.

Full text
Abstract:
The present study is concerned with image mosaicking in a single-reflector panoramic imaging system (SRPIS). A nonlinear image mosaic algorithm is proposed to obtain a panoramic image of the pipe inner surface. Because of the nonlinear distortion in the images unwrapped from the original images, it is practically impossible for the traditional image mosaic method, based on a 2D planar projective transformation, to eliminate ghosting and blur at the seam. The nonlinear image mosaic algorithm works by projecting many image pieces, divided from the right image, onto the left image. The position-variant parameters of the transformation model are obtained by quadratic interpolation. The results show that the nonlinear algorithm overcomes the limitations of the traditional method for images with distortion, and the resulting mosaic is clearer than that produced by the traditional method.
APA, Harvard, Vancouver, ISO, and other styles
44

Siew, Ronian. "Relative illumination and image distortion." Optical Engineering 56, no. 4 (March 31, 2017): 049701. http://dx.doi.org/10.1117/1.oe.56.4.049701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xiang, Tianzhu, Gui-Song Xia, and Liangpei Zhang. "IMAGE STITCHING WITH PERSPECTIVE-PRESERVING WARPING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 287–94. http://dx.doi.org/10.5194/isprsannals-iii-3-287-2016.

Full text
Abstract:
Image stitching algorithms often adopt a global transform, such as a homography, and work well for planar scenes or parallax-free camera motions. However, these conditions are easily violated in practice. With casual camera motions, varying viewpoints, large depth changes, or complex structures, stitching such images is a challenging task. The global transform model often produces poor stitching results, such as misalignments or projective distortions, especially perspective distortion. To this end, we propose a perspective-preserving warp for image stitching, which spatially combines local projective transforms with a similarity transform. Through a weighted combination scheme, our approach gradually extrapolates the local projective transforms of the overlapping regions into the non-overlapping regions, so that the final warp changes smoothly from projective to similarity. The proposed method provides satisfactory alignment accuracy while reducing projective distortions and maintaining the multi-perspective view. Experimental analysis on a variety of challenging images confirms the effectiveness of the approach.
APA, Harvard, Vancouver, ISO, and other styles
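The core idea of the weighted combination can be illustrated with a minimal sketch, not the authors' implementation: warped coordinates from a projective transform H and a similarity transform S are blended with a per-point weight that moves toward the similarity as points leave the overlap region (the sigmoid weight and all numeric values are illustrative assumptions).

```python
# Minimal sketch: blending a projective warp with a similarity warp.
import numpy as np

def apply_h(T, pts):
    """Apply a 3x3 homogeneous transform to Nx2 points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return p[:, :2] / p[:, 2:3]

def blended_warp(pts, H, S, overlap_x, falloff=200.0):
    """Weight -> 1 (pure similarity) as points move right of the overlap boundary."""
    w = 1.0 / (1.0 + np.exp(-(pts[:, 0] - overlap_x) / falloff))  # sigmoid in x
    ph, ps = apply_h(H, pts), apply_h(S, pts)
    return (1.0 - w)[:, None] * ph + w[:, None] * ps

# Toy example: a mildly projective H and a pure-translation similarity S.
H = np.array([[1.0, 0.02, 5.0], [0.0, 1.0, 0.0], [1e-4, 0.0, 1.0]])
S = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
pts = np.array([[100.0, 50.0], [800.0, 50.0]])
print(blended_warp(pts, H, S, overlap_x=400.0))
```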
46

Xiang, Tianzhu, Gui-Song Xia, and Liangpei Zhang. "IMAGE STITCHING WITH PERSPECTIVE-PRESERVING WARPING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 287–94. http://dx.doi.org/10.5194/isprs-annals-iii-3-287-2016.

Full text
Abstract:
Image stitching algorithms often adopt a global transform, such as a homography, and work well for planar scenes or parallax-free camera motions. However, these conditions are easily violated in practice. With casual camera motions, varying viewpoints, large depth changes, or complex structures, stitching such images is a challenging task. The global transform model often produces poor stitching results, such as misalignments or projective distortions, especially perspective distortion. To this end, we propose a perspective-preserving warp for image stitching, which spatially combines local projective transforms with a similarity transform. Through a weighted combination scheme, our approach gradually extrapolates the local projective transforms of the overlapping regions into the non-overlapping regions, so that the final warp changes smoothly from projective to similarity. The proposed method provides satisfactory alignment accuracy while reducing projective distortions and maintaining the multi-perspective view. Experimental analysis on a variety of challenging images confirms the effectiveness of the approach.
APA, Harvard, Vancouver, ISO, and other styles
47

Gao, Guangyong, Caixue Zhou, and Zongmin Cui. "Reversible Watermarking Using Prediction-Error Expansion and Extreme Learning Machine." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/670535.

Full text
Abstract:
Current research on reversible watermarking focuses on decreasing image distortion. Addressing this issue, this paper presents an improved method to lower the embedding distortion based on the prediction-error expansion (PE) technique. First, an extreme learning machine (ELM) with good generalization ability is utilized to enhance the prediction accuracy of image pixel values during watermark embedding; the lower prediction error results in reduced image distortion. Moreover, an optimization operation that strengthens the performance of the ELM is applied to further lessen the embedding distortion. With two popular predictors, the median edge detector (MED) predictor and the gradient-adjusted predictor (GAP), experimental results on the classical images and the Kodak image set indicate that the proposed scheme lowers image distortion compared with the classical PE scheme proposed by Thodi et al. and outperforms the improved method presented by Coltuc as well as other existing approaches.
APA, Harvard, Vancouver, ISO, and other styles
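As background for this entry, here is a minimal sketch of classical prediction-error expansion with the MED predictor; it is not the paper's ELM-based predictor, and the location map and overflow bookkeeping needed for full reversibility are omitted for brevity.

```python
# Minimal sketch: prediction-error expansion embedding with the MED predictor.
import numpy as np

def med_predict(a, b, c):
    """MED predictor: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def pe_embed(img, bits):
    out = img.astype(np.int32).copy()
    k = 0
    for y in range(1, out.shape[0]):
        for x in range(1, out.shape[1]):
            if k >= len(bits):
                return out, k
            pred = med_predict(out[y, x - 1], out[y - 1, x], out[y - 1, x - 1])
            err = int(out[y, x]) - int(pred)
            candidate = int(pred) + 2 * err + bits[k]   # expand the prediction error
            if 0 <= candidate <= 255:                   # embed only if no overflow
                out[y, x] = candidate
                k += 1
    return out, k

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
stego, embedded = pe_embed(img, np.random.randint(0, 2, 200).tolist())
print("embedded bits:", embedded)
```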
48

LIN, YUE-DER, HEN-WEI TSAO, and FOK-CHING CHONG. "AN IMAGE PROCESSING ARCHITECTURE TO ENHANCE IMAGE CONTRAST." Biomedical Engineering: Applications, Basis and Communications 14, no. 05 (October 25, 2002): 215–17. http://dx.doi.org/10.4015/s1016237202000310.

Full text
Abstract:
Good image contrast is an important issue in medical imaging. This paper introduces a feedback-type image processing architecture that can enhance image contrast without further digital image processing techniques such as histogram equalization. Compared with a conventional open-loop imaging system, the images derived by the proposed method have a full-range histogram without image distortion, which is difficult to attain with an open-loop imaging system.
APA, Harvard, Vancouver, ISO, and other styles
49

Kim, Won Gyum, Yong Seok Seo, Hye Won Jung, Seon Hwa Lee, and Won Geun Oh. "Wavelet Based Multi-Bit Fingerprinting Against Geometric Distortions." Key Engineering Materials 321-323 (October 2006): 1301–5. http://dx.doi.org/10.4028/www.scientific.net/kem.321-323.1301.

Full text
Abstract:
This paper presents a new image fingerprinting scheme that embeds a multi-bit fingerprinting code and is robust against geometric attacks such as rotation, scaling, and translation. We construct a 64-bit fingerprinting code and embed it repeatedly into a wavelet subband of 512x512 images. In order to restore an image from geometric distortion, a noise reduction filter is applied and a rectilinear tiling pattern is used as a template. Experimental results show that our method is robust against geometric distortions and JPEG compression.
APA, Harvard, Vancouver, ISO, and other styles
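A minimal sketch of repeatedly embedding a 64-bit code into a wavelet detail subband, not the paper's scheme: quantization-index-style embedding with PyWavelets, with the template-based geometric resynchronization omitted; the wavelet choice, quantization step, and function names are assumptions.

```python
# Minimal sketch: repeat a 64-bit code across a DWT detail subband via
# parity quantization of the coefficients.
import numpy as np
import pywt

def embed_code(img, code_bits, delta=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), 'haar')
    flat = cH.flatten()
    for i in range(flat.size):
        bit = code_bits[i % len(code_bits)]           # repeat the 64-bit code
        q = np.floor(flat[i] / delta)
        if int(q) % 2 != bit:                         # force quantizer parity = bit
            q += 1
        flat[i] = q * delta + delta / 2.0
    cH = flat.reshape(cH.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

code = np.random.randint(0, 2, 64).tolist()
img = np.random.randint(0, 256, (512, 512)).astype(np.float64)
marked = embed_code(img, code)
```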
50

Yakno, Marlina, Junita Mohamad-Saleh, Mohd Zamri Ibrahim, and W. N. A. W. Samsudin. "Camera-projector calibration for near infrared imaging system." Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 160–70. http://dx.doi.org/10.11591/eei.v9i1.1697.

Full text
Abstract:
Advanced biomedical engineering technologies are continuously changing medical practice to improve patient care. Needle-insertion navigation during the intravenous catheterization process via near-infrared (NIR) imaging and a camera-projector system is one solution. However, the central problem is that the image captured by the camera misaligns with the image projected back onto the object of interest, so the projected image is not overlaid perfectly in the real world. In this paper, a camera-projector calibration method is presented. A polynomial algorithm was used to remove the barrel distortion in captured images, and scaling and translation transformations were used to correct the geometric distortions introduced in the image acquisition process. Discrepancies between the captured and projected images were assessed; the accuracy between the captured image and the projected image is 90.643%. This indicates the feasibility of the approach for eliminating discrepancies between the projection and navigation images.
APA, Harvard, Vancouver, ISO, and other styles
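A minimal sketch of the two correction steps named in this entry, under stated assumptions: barrel distortion is removed with OpenCV's radial polynomial model, and the corrected camera image is then mapped into projector coordinates with a scale-plus-translation transform. The camera matrix, distortion coefficients, and alignment parameters below are placeholders, not calibration values from the paper.

```python
# Minimal sketch: polynomial undistortion followed by scale/translation alignment.
import cv2
import numpy as np

h, w = 480, 640
K = np.array([[600.0, 0.0, w / 2], [0.0, 600.0, h / 2], [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3 (assumed)

captured = np.random.randint(0, 255, (h, w), dtype=np.uint8)
undistorted = cv2.undistort(captured, K, dist)

# Scale and translate the undistorted image into projector coordinates
# (parameters are placeholders for values found during calibration).
sx, sy, tx, ty = 1.05, 1.05, -12.0, 8.0
M = np.array([[sx, 0.0, tx], [0.0, sy, ty]], dtype=np.float32)
projector_view = cv2.warpAffine(undistorted, M, (w, h))
```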