Academic literature on the topic 'Smoothing image'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Smoothing image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Smoothing image"

1

Basu, Mitra, and Min Su. "Image Smoothing with Exponential Functions." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 4 (June 2001): 735–52. http://dx.doi.org/10.1142/s0218001401001076.

Full text
Abstract:
Noise reduction in images, also known as image smoothing, is an essential first step before further processing of the image. The key to image smoothing is to preserve important features while removing noise from the image. The Gaussian function is widely used in image smoothing. Recently it has been reported that exponential functions with exponent values other than 2 perform substantially better than Gaussian functions in modeling and preserving image features. In this paper we propose a family of exponential functions, which includes the Gaussian when the exponent equals 2, for image smoothing. We experiment with a variety of images, artificial and real, and demonstrate that optimal results are obtained when the value of the exponent lies within a certain range.
APA, Harvard, Vancouver, ISO, and other styles
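
For illustration, the kernel family described in this abstract can be sketched in a few lines of Python. This is not the authors' code; the kernel form exp(-|x/sigma|^p), the separable application, and all parameter values are assumptions made for the example (p = 2 reduces to a Gaussian-shaped kernel).

    import numpy as np
    from scipy.ndimage import convolve

    def exponential_kernel(sigma=1.5, p=2.0, radius=4):
        """1-D kernel of the form exp(-|x/sigma|**p); p = 2 gives a Gaussian shape."""
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-np.abs(x / sigma) ** p)
        return k / k.sum()

    def smooth(image, sigma=1.5, p=1.5):
        """Separable smoothing of a 2-D image with the exponential kernel."""
        k = exponential_kernel(sigma, p)
        out = convolve(image.astype(float), k[None, :], mode='reflect')  # rows
        out = convolve(out, k[:, None], mode='reflect')                  # columns
        return out

    # Example: smooth a noisy synthetic image with a non-Gaussian exponent.
    img = np.random.rand(64, 64)
    smoothed = smooth(img, sigma=2.0, p=1.5)
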
2

Sirur, Kedir Kamu, Ye Peng, and Qinchuan Zhang. "Smoothing Filters for Waveform Image Segmentation." International Journal of Machine Learning and Computing 7, no. 5 (October 2017): 139–43. http://dx.doi.org/10.18178/ijmlc.2017.7.5.636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pizarro, Luis, Pavel Mrázek, Stephan Didas, Sven Grewenig, and Joachim Weickert. "Generalised Nonlocal Image Smoothing." International Journal of Computer Vision 90, no. 1 (April 9, 2010): 62–87. http://dx.doi.org/10.1007/s11263-010-0337-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Meer, P., R. H. Park, and K. J. Cho. "Multiresolution Adaptive Image Smoothing." CVGIP: Graphical Models and Image Processing 56, no. 2 (March 1994): 140–48. http://dx.doi.org/10.1006/cgip.1994.1013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Hui-hong, and Dong-yuan Ge. "A novel image edge smoothing method based on convolutional neural network." International Journal of Advanced Robotic Systems 17, no. 3 (May 1, 2020): 172988142092167. http://dx.doi.org/10.1177/1729881420921676.

Full text
Abstract:
In the field of visual perception, the edges of images tend to be rich in effective visual stimuli, which contribute to a neural network's understanding of various scenes. Image smoothing is an image processing method used to highlight wide-area, low-frequency components and the main parts of the image, or to suppress image noise and high-frequency interference; it makes the image's brightness vary smoothly and gradually, reduces abrupt gradients, and improves image quality. Current smoothing methods still suffer from problems such as blurring of image edges, poor overall smoothing effect, an obvious staircase effect, and a lack of robustness to noise. Based on a convolutional neural network, this article proposes a method that combines edge detection and deep learning for image smoothing. The results show that the proposed method better solves the problems of edge detection and information capture, significantly improves the edge effect, and protects the effectiveness of edge information. At the same time, it reduces the noise in the smoothed image and greatly improves the effect of image smoothing.
APA, Harvard, Vancouver, ISO, and other styles
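
The paper couples edge detection with a convolutional neural network. The following sketch is only a classical stand-in for the underlying idea of protecting edges while smoothing: a Sobel edge mask is used to blend the original pixels back in where gradients are strong. The function name, the Gaussian smoother, and the edge_gain parameter are illustrative assumptions, not the authors' method.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def edge_preserving_blend(image, sigma=2.0, edge_gain=4.0):
        """Smooth everywhere, then restore original pixels where edges are strong."""
        img = image.astype(float)
        smoothed = gaussian_filter(img, sigma=sigma)
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        mag = np.hypot(gx, gy)
        w = np.clip(edge_gain * mag / (mag.max() + 1e-8), 0.0, 1.0)  # ~1 near edges
        return w * img + (1.0 - w) * smoothed

    img = np.random.rand(128, 128)
    result = edge_preserving_blend(img)
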
6

Peng, Anjie, Gao Yu, Yadong Wu, Qiong Zhang, and Xiangui Kang. "A Universal Image Forensics of Smoothing Filtering." International Journal of Digital Crime and Forensics 11, no. 1 (January 2019): 18–28. http://dx.doi.org/10.4018/ijdcf.2019010102.

Full text
Abstract:
Digital image smoothing filtering operations, including average filtering, Gaussian filtering, and median filtering, are commonly used to beautify forged images. The detection of these smoothing operations is important in the image forensics field. In this article, the authors propose a universal detection algorithm which can simultaneously detect average filtering, Gaussian low-pass filtering, and median filtering. First, high-frequency residuals are used as the feature extraction domain, and feature extraction is then based on the local binary pattern (LBP) and the autoregressive (AR) model. For the LBP model, the authors exploit the fact that both the relationships between the central pixel and its neighboring pixels and the relationships among the neighboring pixels differ between original images and smoothing-filtered images. A method is further developed to reduce the high dimensionality of the LBP-based features. Experimental results show that the proposed detector is effective for smoothing forensics and achieves better performance than previous works, especially on JPEG images.
APA, Harvard, Vancouver, ISO, and other styles
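
The detector described above works on high-frequency residuals and extracts local binary pattern (LBP) features. A minimal sketch of that feature-extraction step is shown below using scikit-image; the residual filter, the LBP radius, and the idea of feeding the histogram to a binary classifier are assumptions for illustration, not the authors' exact configuration.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import local_binary_pattern

    def smoothing_forensics_feature(image, sigma=1.0, P=8, R=1.0):
        """High-frequency residual followed by a uniform-LBP histogram feature vector."""
        img = image.astype(float)
        residual = img - gaussian_filter(img, sigma=sigma)   # high-frequency residual
        codes = local_binary_pattern(residual, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist  # feed this vector to any binary classifier (e.g., a linear SVM)

    img = np.random.rand(256, 256)
    feat = smoothing_forensics_feature(img)
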
7

Kavya, Ch, et al. "Performance Analysis of Different Filters for Digital Image Processing." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 2572–76. http://dx.doi.org/10.17762/turcomat.v12i2.2220.

Full text
Abstract:
Digital image processing is one of the fastest-growing areas, used in various real-time fields such as medical imaging, satellite imaging, remote sensing, and pattern recognition. The output of image processing depends on the quality of the input image. Filters are used to modify images, for example to remove noise and smooth the image. It is essential to suppress the high-frequency values in the image for smoothing and to improve the low-frequency values for image enhancement; otherwise good output is not obtained. This paper discusses various filters and their functionalities with respect to digital image processing. Both linear and non-linear filters are presented, and the discussion makes it easy to decide which filter best improves the image processing output.
APA, Harvard, Vancouver, ISO, and other styles
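
A minimal way to compare the linear and non-linear filters discussed in this paper (mean, Gaussian, and median) is to measure PSNR against a clean reference; the synthetic test image, noise level, and kernel sizes below are arbitrary choices for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter, median_filter

    def psnr(ref, test, peak=1.0):
        """Peak signal-to-noise ratio in dB."""
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    clean = np.tile(np.linspace(0, 1, 128), (128, 1))        # simple reference image
    noisy = np.clip(clean + 0.05 * np.random.randn(*clean.shape), 0, 1)

    results = {
        'mean (3x3)':     uniform_filter(noisy, size=3),
        'Gaussian (s=1)': gaussian_filter(noisy, sigma=1.0),
        'median (3x3)':   median_filter(noisy, size=3),
    }
    for name, out in results.items():
        print(f'{name}: PSNR = {psnr(clean, out):.2f} dB')
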
8

He, Qinbin, and Fangyue Chen. "Designing CNN Genes for Binary Image Edge Smoothing and Noise Removing." International Journal of Bifurcation and Chaos 16, no. 10 (October 2006): 3007–13. http://dx.doi.org/10.1142/s0218127406016604.

Full text
Abstract:
Edge smoothing and noise removal are common image processing operations. By designing CNN genes, edges can be smoothed and noise particles can be removed from a binary image. However, a satisfying result cannot be obtained by choosing only one CNN gene. In this paper, a group of edge-smoothing and noise-removing CNN genes is proposed as a combined treatment for a binary image. Processed by this group of CNN genes, the characteristics of the original image are preserved as much as possible. Two examples of edge smoothing and noise removal for a binary image illustrate the method.
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Xiang, Xuemei Li, Yuanfeng Zhou, and Caiming Zhang. "Image smoothing based on global sparsity decomposition and a variable parameter." Computational Visual Media 7, no. 4 (May 17, 2021): 483–97. http://dx.doi.org/10.1007/s41095-021-0220-1.

Full text
Abstract:
Smoothing images, especially those with rich texture, is an important problem in computer vision. Obtaining an ideal result is difficult due to the complexity, irregularity, and anisotropy of texture. Moreover, some properties are shared by the texture and the structure in an image, so it is a hard compromise to retain structure while removing texture. To create an ideal algorithm for image smoothing, we face three problems. For images with rich textures, the smoothing effect should be enhanced. We should overcome inconsistency of smoothing results in different parts of the image. It is necessary to create a method to evaluate the smoothing effect. We apply texture pre-removal based on global sparse decomposition with a variable smoothing parameter to solve the first two problems. A parametric surface constructed by an improved Bessel method is used to determine the smoothing parameter. Three evaluation measures are proposed to cope with the third problem: edge integrity rate, texture removal rate, and gradient value distribution. We use the alternating direction method of multipliers to complete the whole algorithm and obtain the results. Experiments show that our algorithm is better than existing algorithms both visually and quantitatively. We also demonstrate our method's ability in other applications such as clip-art compression artifact removal and content-aware image manipulation.
APA, Harvard, Vancouver, ISO, and other styles
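
One of the proposed evaluation measures is the gradient value distribution of the smoothed result. The sketch below shows one plausible way to compare gradient-magnitude distributions before and after smoothing; the Sobel operator, the binning, and the histogram-intersection score are assumptions, not the paper's exact definition.

    import numpy as np
    from scipy.ndimage import sobel, gaussian_filter

    def gradient_magnitude(image):
        """Gradient magnitude via Sobel derivatives."""
        img = image.astype(float)
        return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

    img = np.random.rand(128, 128)
    smoothed = gaussian_filter(img, sigma=2.0)
    m0, m1 = gradient_magnitude(img), gradient_magnitude(smoothed)
    top = max(m0.max(), m1.max()) + 1e-8
    h0, _ = np.histogram(m0, bins=64, range=(0, top), density=True)
    h1, _ = np.histogram(m1, bins=64, range=(0, top), density=True)
    overlap = np.minimum(h0, h1).sum() * (top / 64)  # histogram intersection, in [0, 1]
    print(f'gradient-distribution overlap: {overlap:.3f}')
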
10

Zhang, Xiaohua, Yuelan Xin, and Ning Xie. "Anisotropic Joint Trilateral Rolling Filter for Image Smoothing." Journal of the Institute of Industrial Applications Engineers 7, no. 3 (July 25, 2019): 91–98. http://dx.doi.org/10.12792/jiiae.7.91.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Smoothing image"

1

Storve, Sigurd. "Kalman Smoothing Techniques in Medical Image Segmentation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18823.

Full text
Abstract:
An existing C++ library for efficient segmentation of ultrasound recordings by means of Kalman filtering, the real-time contour tracking library (RCTL), is used as a building block to implement and assess the performance of different Kalman smoothing techniques: fixed-point, fixed-lag, and fixed-interval smoothing. An experimental smoothing technique based on fusion of tracking results and learned mean state estimates at different positions in the heart-cycle is proposed. A set of 29 recordings with ground-truth left ventricle segmentations provided by a trained medical doctor is used for the performance evaluation. The clinical motivation is to improve the accuracy of automatic left-ventricle tracking, which can be applied to improve the automatic measurement of clinically important parameters such as the ejection fraction. The evaluation shows that none of the smoothing techniques offer significant improvements over regular Kalman filtering. For the Kalman smoothing algorithms, it is argued to be a consequence of the way edge-detection measurements are performed internally in the library. The statistical smoother's lack of improvement is explained by too large interpersonal variations; the mean left-ventricular deformation pattern does not generalize well to individual cases.
APA, Harvard, Vancouver, ISO, and other styles
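
Fixed-interval Kalman smoothing, one of the techniques evaluated in this thesis, can be illustrated with a generic scalar Rauch-Tung-Striebel (RTS) smoother. The random-walk state model and the noise variances below are assumptions for the example; this is not the RCTL code.

    import numpy as np

    def rts_smooth(z, q=1e-3, r=1e-2):
        """Fixed-interval smoothing of a 1-D measurement sequence z.
        Forward Kalman filter with a random-walk state model, then RTS backward pass."""
        n = len(z)
        xf = np.zeros(n); pf = np.zeros(n)   # filtered mean / variance
        xp = np.zeros(n); pp = np.zeros(n)   # predicted mean / variance
        x, p = z[0], 1.0
        for k in range(n):
            xp[k], pp[k] = x, p + q          # predict (identity state transition)
            g = pp[k] / (pp[k] + r)          # Kalman gain
            x = xp[k] + g * (z[k] - xp[k])
            p = (1 - g) * pp[k]
            xf[k], pf[k] = x, p
        xs = xf.copy()
        for k in range(n - 2, -1, -1):       # backward RTS recursion
            c = pf[k] / pp[k + 1]
            xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
        return xs

    z = np.cumsum(0.05 * np.random.randn(200)) + 0.1 * np.random.randn(200)
    smoothed = rts_smooth(z)
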
2

Jarrett, David Ward 1963. "Digital image noise smoothing using high frequency information." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276599.

Full text
Abstract:
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high-frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high-frequency information to augment existing noise smoothing methods are investigated: two-component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high-frequency residual, extracted from the noisy image using a two-component source model. The lower variance and increased stationarity of the residual compared to the original image increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
APA, Harvard, Vancouver, ISO, and other styles
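
Second derivative enhancement (SDE) smoothing, as described above, enhances the edges of a low-pass filtered image with second-derivative information from the noisy image. Below is a hedged interpretation of that idea using a Laplacian and an arbitrary gain; it is not the thesis implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def sde_smooth(noisy, sigma=1.5, gain=0.5):
        """Low-pass filter the noisy image, then add back scaled second-derivative detail."""
        img = noisy.astype(float)
        lowpass = gaussian_filter(img, sigma=sigma)
        second_derivative = laplace(gaussian_filter(img, sigma=0.8))  # lightly pre-smoothed
        return lowpass - gain * second_derivative  # subtracting the Laplacian sharpens edges

    noisy = np.random.rand(128, 128)
    result = sde_smooth(noisy)
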
3

Hillebrand, Martin. "On robust corner preserving smoothing in image processing." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967514444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ozmen, Neslihan. "Image Segmentation And Smoothing Via Partial Differential Equations." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610395/index.pdf.

Full text
Abstract:
In image processing, partial differential equation (PDE) based approaches have been extensively used in segmentation and smoothing applications. The Perona-Malik nonlinear diffusion model was the first PDE-based method used in image smoothing tasks. Afterwards the classical Mumford-Shah model was developed to solve both image segmentation and smoothing problems; it is based on the minimization of an energy functional. It has numerous application areas such as edge detection, motion analysis, medical imagery, and object tracking. The model finds a partition of an image by using a piecewise smooth representation of the image. Unfortunately, numerical procedures for minimizing the Mumford-Shah functional face difficulties because the problem is non-convex and has numerous local minima, so approximate approaches have been proposed. Two such methods are the Ambrosio-Tortorelli approximation and the Chan-Vese active contour method. Ambrosio and Tortorelli developed a practical numerical implementation of the Mumford-Shah model based on an elliptic approximation of the original functional. The Chan-Vese model is a piecewise constant generalization of the Mumford-Shah functional and is based on a level set formulation. Another widely used image segmentation technique is the "Active Contours (Snakes)" model, which is related to the Chan-Vese model. In this study, all these approaches have been examined in detail. Mathematical and numerical analyses of these models are studied and some experiments are performed to compare their performance.
APA, Harvard, Vancouver, ISO, and other styles
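
The Perona-Malik nonlinear diffusion model mentioned in this abstract can be written as an explicit iteration in which each directional difference is damped by an edge-stopping function. A minimal sketch follows (explicit scheme, exponential diffusivity, arbitrary kappa and step size); it is illustrative only and not the thesis code.

    import numpy as np

    def perona_malik(image, iterations=20, kappa=0.1, dt=0.2):
        """Explicit Perona-Malik diffusion: flux in each direction is damped near edges."""
        u = image.astype(float).copy()

        def g(d):
            return np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity

        for _ in range(iterations):
            # differences toward the four neighbours, with zero flux at the border
            dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0
            ds = np.roll(u,  1, axis=0) - u; ds[0, :] = 0
            de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
            dw = np.roll(u,  1, axis=1) - u; dw[:, 0] = 0
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    img = np.random.rand(128, 128)
    diffused = perona_malik(img)
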
5

Athreya, Jayantha Krishna V. "An analog VLSI architecture for image smoothing and segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0028/MQ39633.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Morgan, Keith Patrick. "Improved Methods of Image Smoothing and Restoration (Nonstationary Models)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187959.

Full text
Abstract:
The problems of noise removal, and simultaneous noise removal and deblurring of imagery are common to many areas of science. An approach which allows for the unified treatment of both problems involves modeling imagery as a sample of a random process. Various nonstationary image models are explored in this context. Attention is directed to identifying the model parameters from imagery which has been corrupted by noise and possibly blur, and the use of the model to form an optimal reconstruction of the image. Throughout the work, emphasis is placed on both theoretical development and practical considerations involved in achieving this reconstruction. The results indicate that the use of nonstationary image models offers considerable improvement over traditional techniques.
APA, Harvard, Vancouver, ISO, and other styles
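
The dissertation treats noise removal as optimal reconstruction under a stochastic image model. As a loosely related illustration of that viewpoint (not the dissertation's nonstationary estimator), SciPy's locally adaptive Wiener filter estimates local mean and variance and attenuates noise accordingly; the window size and noise power below are assumptions.

    import numpy as np
    from scipy.signal import wiener

    clean = np.tile(np.linspace(0, 1, 128), (128, 1))
    noisy = clean + 0.05 * np.random.randn(*clean.shape)

    # Locally adaptive Wiener filter; window size and noise power are illustrative choices.
    restored = wiener(noisy, mysize=5, noise=0.05 ** 2)
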
7

Ramsay, Tim. "A bivariate finite element smoothing spline applied to image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ54429.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Crespo, José. "Morphological connected filters and intra-region smoothing for image segmentation." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pérez, Benito Cristina. "Color Image Processing based on Graph Theory." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/123955.

Full text
Abstract:
Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. Image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing is aimed at reducing noise, and sharpening at improving or recovering imprecise or damaged information of image details and edges with insufficient sharpness or blurred content that prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Besides, there are also many sharpening techniques, which try to combat the loss of information due to blurring of image texture and need to contemplate the existence of noise in the image they process. When dealing with a noisy image, any sharpening technique may amplify the noise. Although the intuitive idea to solve this last case would be filtering first and sharpening afterwards, this approach has proved not to be optimal: the filtering could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the model we propose is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs from a filter with high smoothing capability and a softer one to smooth edge/detail regions. Further, another application of our model allows the pixel characterization to be used to successfully perform simultaneous smoothing and sharpening of color images. In this way, we address one of the classical challenges within the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive both from an objective (numerical) and a visual evaluation point of view.
Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral dissertation]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
APA, Harvard, Vancouver, ISO, and other styles
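
The thesis classifies each pixel via a per-pixel graph and soft-switches between a strong and a gentle filter. The sketch below keeps only the soft-switching idea and replaces the graph-based classifier with a simple local-variance weight; the threshold tau, the filter choices, and the function name are assumptions, not the thesis method.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def soft_switching_smooth(image, strong_sigma=2.0, soft_sigma=0.7, tau=0.002):
        """Blend a strong and a gentle Gaussian according to local activity."""
        img = image.astype(float)
        local_mean = uniform_filter(img, size=5)
        local_var = uniform_filter(img ** 2, size=5) - local_mean ** 2
        w = np.clip(local_var / tau, 0.0, 1.0)   # ~1 in detail regions, ~0 in flat ones
        strong = gaussian_filter(img, sigma=strong_sigma)
        gentle = gaussian_filter(img, sigma=soft_sigma)
        return w * gentle + (1.0 - w) * strong

    img = np.random.rand(128, 128)
    out = soft_switching_smooth(img)
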
10

Howell, John R. "Analysis Using Smoothing Via Penalized Splines as Implemented in LME() in R." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1702.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Smoothing image"

1

Geological Survey (U.S.), ed. Combining edge-gradient information to improve adaptive discontinuity-preserving smoothing of multispectral images. Reston, VA: U.S. Geological Survey, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weinert, Howard L. Fast Compact Algorithms and Software for Spline Smoothing. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Butz, Martin V., and Esther F. Kutter. Primary Visual Perception from the Bottom Up. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.003.0008.

Full text
Abstract:
This chapter addresses primary visual perception, detailing how visual information comes about and, as a consequence, which visual properties provide particularly useful information about the environment. The brain extracts this information systematically, and also separates redundant and complementary visual information aspects to improve the effectiveness of visual processing. Computationally, image smoothing, edge detectors, and motion detectors must be at work. These need to be applied in a convolutional manner over the fixated area, which are computations that are predestined to be solved by means of cortical columnar structures in the brain. On the next level, the extracted information needs to be integrated to be able to segment and detect object structures. The brain solves this highly challenging problem by incorporating top-down expectations and by integrating complementary visual information aspects, such as light reflections, texture information, line convergence information, shadows, and depth information. In conclusion, the need for integrating top-down visual expectations to form complete and stable perceptions is made explicit.
APA, Harvard, Vancouver, ISO, and other styles
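
The chapter notes that smoothing and edge detection are applied in a convolutional manner. As a tiny illustration, both operations can be expressed as convolutions with standard kernels; the box and Sobel kernels below are textbook choices, not taken from the book.

    import numpy as np
    from scipy.ndimage import convolve

    img = np.random.rand(64, 64)

    box = np.ones((3, 3)) / 9.0                                       # smoothing kernel
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

    smoothed = convolve(img, box, mode='reflect')
    edges_x = convolve(smoothed, sobel_x, mode='reflect')             # horizontal intensity changes
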
4

Rose, Molly. Notebook: Negative Image Convenient Composition Book for Kiwi Fruit Smoothie Fans. Independently Published, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Smoothies, Juices and Blended Drinks: 75 Fabulous, Energizing Drinks, with over 200 Images. Anness Publishing, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Smoothing image"

1

Abidi, Mongi A., Andrei V. Gribok, and Joonki Paik. "Regularized 3D Image Smoothing." In Advances in Computer Vision and Pattern Recognition, 197–218. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46364-3_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Burger, Wilhelm, and Mark J. Burge. "Edge-Preserving Smoothing Filters." In Principles of Digital Image Processing, 119–67. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-84882-919-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Noel, Guillaume, Karim Djouani, and Yskandar Hamam. "Grid Smoothing for Image Enhancement." In Future Generation Information Technology, 125–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17569-5_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Yuying, Yonggang Huang, Jun Zhang, Xu Liu, and Hualei Shen. "Noisy Smoothing Image Source Identification." In Cyberspace Safety and Security, 135–47. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69471-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Karlsson, Anders, Jon Bjärkefur, Joakim Rydell, and Christina Grönwall. "Smoothing-Based Submap Merging in Large Area SLAM." In Image Analysis, 134–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21227-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Paulus, Dietrich W. R., and Joachim Hornegger. "Filtering and Smoothing Signals." In Pattern Recognition and Image Processing in C++, 263–77. Wiesbaden: Vieweg+Teubner Verlag, 1995. http://dx.doi.org/10.1007/978-3-322-87867-0_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mahmoodi, Sasan, and Steve Gunn. "Scale Space Smoothing, Image Feature Extraction and Bessel Filters." In Image Analysis, 625–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21227-7_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Xin, Hui Peng, Joseph Kesker, Xiang Cai, William G. Wee, and Jing-Huei Lee. "An Improved Adaptive Smoothing Method." In Image Analysis and Processing – ICIAP 2009, 757–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04146-4_81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Averbuch, Amir Z., Pekka Neittaanmaki, and Valery A. Zheludev. "Polynomial Smoothing Splines." In Spline and Spline Wavelet Methods with Applications to Signal and Image Processing, 59–68. Dordrecht: Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-017-8926-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Noel, Guillaume, Karim Djouani, and Yskandar Hamam. "Optimisation-Based Image Grid Smoothing for SST Images." In Advanced Concepts for Intelligent Vision Systems, 227–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17691-3_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Smoothing image"

1

Xu, Li, Cewu Lu, Yi Xu, and Jiaya Jia. "Image Smoothing via L0 Gradient Minimization." In the 2011 SIGGRAPH Asia Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2024156.2024208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alsam, Ali, and Hans Jakob Rivertz. "Fast Edge Preserving Smoothing Algorithm." In Signal and Image Processing. Calgary, AB, Canada: ACTAPRESS, 2011. http://dx.doi.org/10.2316/p.2011.759-025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alsam, Ali, and Hans Jakob Rivertz. "Fast Edge Preserving Smoothing Algorithm." In Signal and Image Processing. Calgary, AB, Canada: ACTAPRESS, 2012. http://dx.doi.org/10.2316/p.2012.759-025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang and Mitra. "Image smoothing based on local image models." In IEEE International Conference on Systems Engineering. IEEE, 1989. http://dx.doi.org/10.1109/icsyse.1989.48625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Xin, Rui-guang Wang, and Xi-feng Zheng. "Application of image smoothing algorithm." In 2010 3rd International Congress on Image and Signal Processing (CISP). IEEE, 2010. http://dx.doi.org/10.1109/cisp.2010.5647444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Letham, Jonathan, Neil M. Robertson, and Barry Connor. "Contextual smoothing of image segmentation." In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2010. http://dx.doi.org/10.1109/cvprw.2010.5543910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gustafson, Steven C., Vasiliki E. Nikolaou, and Farid Ahmed. "Image smoothing with minimal distortion." In SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics, edited by Friedrich O. Huck and Richard D. Juday. SPIE, 1995. http://dx.doi.org/10.1117/12.211975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Liang, Xiaojie Guo, Wei Feng, and Jiawan Zhang. "Soft Clustering Guided Image Smoothing." In 2018 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2018. http://dx.doi.org/10.1109/icme.2018.8486448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Taguchi, Akira, Hironori Takashima, and Yutaka Murata. "Fuzzy filters for image smoothing." In IS&T/SPIE 1994 International Symposium on Electronic Imaging: Science and Technology, edited by Edward R. Dougherty, Jaakko Astola, and Harold G. Longbotham. SPIE, 1994. http://dx.doi.org/10.1117/12.172570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alvarez, Luis, and Julio Esclarin. "Image quantization by nonlinear smoothing." In SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation, edited by Leonid I. Rudin and Simon K. Bramble. SPIE, 1995. http://dx.doi.org/10.1117/12.218473.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Smoothing image"

1

Weiss, Isaac. Image Smoothing and Differentiation with Minimal-Curvature Filters. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada215184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jingyue, and Bradley J. Lucier. Error Bounds for Finite-Difference Methods for Rudin-Osher-Fatemi Image Smoothing. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada513262.

Full text
APA, Harvard, Vancouver, ISO, and other styles