Dissertations / Theses on the topic 'Smoothing image'
Consult the top 50 dissertations / theses for your research on the topic 'Smoothing image.'
Storve, Sigurd. "Kalman Smoothing Techniques in Medical Image Segmentation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18823.
Jarrett, David Ward, 1963. "Digital image noise smoothing using high frequency information." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276599.
Hillebrand, Martin. "On robust corner preserving smoothing in image processing." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967514444.
Ozmen, Neslihan. "Image Segmentation And Smoothing Via Partial Differential Equations." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610395/index.pdf.
…"Active Contours (Snakes)" model, and it is correlated with the Chan-Vese model. In this study, all these approaches have been examined in detail. Mathematical and numerical analyses of these models are studied, and some experiments are performed to compare their performance.
Athreya, Jayantha Krishna V. "An analog VLSI architecture for image smoothing and segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0028/MQ39633.pdf.
Morgan, Keith Patrick. "Improved Methods of Image Smoothing and Restoration (Nonstationary Models)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187959.
Ramsay, Tim. "A bivariate finite element smoothing spline applied to image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ54429.pdf.
Crespo, José. "Morphological connected filters and intra-region smoothing for image segmentation." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15771.
Pérez Benito, Cristina. "Color Image Processing based on Graph Theory." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/123955.
[EN] Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing is aimed at reducing noise, while sharpening aims at improving or recovering imprecise or damaged information of image details and edges with insufficient sharpness or blurred content that prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not take into account the noise present in the image they process: when dealing with a noisy image, any sharpening technique may also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering step could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one that smooths edge/detail regions. Further, another application of our model uses the pixel characterization to perform a simultaneous smoothing and sharpening of color images, thereby addressing one of the classical challenges in the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive from both an objective (numerical) and a visual evaluation point of view.
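As an illustration only (not the graph-based model of this dissertation), a soft-switching smoother of the kind described above can be sketched by blending a strong and a mild filter according to a per-pixel flatness weight; the gradient-based weight below is a hypothetical stand-in for the pixel classification, and all parameter names are ours.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def soft_switching_smooth(img, sigma_strong=2.0, sigma_mild=0.5, k=0.1):
    """Blend a strongly and a mildly smoothed version of a single-channel image.

    The weight w is close to 1 in flat regions (use the strong filter) and falls
    towards 0 near edges (use the mild filter). The gradient-based weight is a
    hypothetical substitute for the graph-based pixel classification of the thesis.
    """
    img = np.asarray(img, dtype=np.float64)
    strong = gaussian_filter(img, sigma_strong)   # high smoothing capability
    mild = gaussian_filter(img, sigma_mild)       # gentler filter for edge/detail areas
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    grad = np.hypot(gx, gy)                       # local gradient magnitude
    w = np.exp(-(grad / (k * grad.max() + 1e-12)) ** 2)
    return w * strong + (1.0 - w) * mild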
Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
Howell, John R. "Analysis Using Smoothing Via Penalized Splines as Implemented in LME() in R." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd1702.pdf.
Lapointe, Marc R. "Substitution of the statistical range for the variance in two local noise smoothing algorithms." Online version of thesis, 1991. http://ritdml.rit.edu/handle/1850/11068.
Schaefer, Charles Robert. "Magnification of bit map images with intelligent smoothing of edges." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9950.
Hu, Xin. "An Improved 2D Adaptive Smoothing Algorithm in Image Noise Removal and Feature Preservation." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1235722460.
Teuber, Tanja [author], and Gabriele [academic supervisor] Steidl. "Anisotropic Smoothing and Image Restoration Facing Non-Gaussian Noise / Tanja Teuber. Betreuer: Gabriele Steidl." Kaiserslautern : Technische Universität Kaiserslautern, 2012. http://d-nb.info/102759428X/34.
Ungan, Cahit Ugur. "Nonlinear Image Restoration." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606796/index.pdf.
…a modified version of the Optimum Decoding Based Smoothing Algorithm and the Bootstrap Filter Algorithm, which is a version of particle filtering methods. The MATLAB software is used to perform the image estimation simulations, and the results of simulations for various observation and image models are presented.
Altinoklu, Metin Burak. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610415/index.pdf.
…Mumford-Shah variational approach have been studied. By obtaining an optimum point of the Mumford-Shah functional, which consists of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. This piecewise smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford-Shah functional, it cannot be used directly for image segmentation. On the other hand, there are several approaches to approximate the Mumford-Shah functional. In the first approach, suggested by Ambrosio and Tortorelli, it is regularized in a special way; the regularized (Ambrosio-Tortorelli) functional is supposed to be gamma-convergent to the Mumford-Shah functional. In the second approach, the Mumford-Shah functional is minimized in two steps: in the first minimization step the edge set is held constant and the resulting functional is minimized, while the second minimization step updates the edge set using level set methods. This second approximation to the Mumford-Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (the Euler-Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment. The overall performance of the algorithms has been investigated through computer simulations over a series of images ranging from simple to complicated.
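For reference, a standard form of the functionals named in this abstract (notation may differ from the thesis): the Mumford-Shah energy for an observed image g on a domain Omega, a piecewise smooth approximation u and an edge set K is

\[
E_{\mathrm{MS}}(u,K)=\int_{\Omega\setminus K}\lvert\nabla u\rvert^{2}\,dx+\mu\int_{\Omega}(u-g)^{2}\,dx+\nu\,\mathcal{H}^{1}(K),
\]

and the Ambrosio-Tortorelli regularization replaces the edge set by a phase-field function v that is close to 0 on edges and close to 1 elsewhere:

\[
E_{\varepsilon}(u,v)=\int_{\Omega}v^{2}\lvert\nabla u\rvert^{2}\,dx+\mu\int_{\Omega}(u-g)^{2}\,dx+\nu\int_{\Omega}\Big(\varepsilon\lvert\nabla v\rvert^{2}+\frac{(1-v)^{2}}{4\varepsilon}\Big)\,dx.
\]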
Parekh, Siddharth Avinash. "A comparison of image processing algorithms for edge detection, corner detection and thinning." University of Western Australia. Centre for Intelligent Information Processing Systems, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0073.
Howlett, John David. "Size Function Based Mesh Relaxation." Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd761.pdf.
Létourneau, Étienne. "Impact of algorithm, iterations, post-smoothing, count level and tracer distribution on single-frame positron emission tomography quantification using a generalized image space reconstruction algorithm." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110750.
Positron Emission Tomography is a medical imaging technique that traces the functional processes taking place in the patient. One of the common applications of this device is to perform a subjective diagnosis from the images obtained. However, quantitative imaging (QI) allows an objective analysis and provides additional information such as the time-activity curve (TAC) and visual details that escape the eye. The aim of this work was, by comparing several reconstruction algorithms such as ML-EM PSF, ISRA PSF and the algorithms derived from it, as well as filtered back-projection, on a static two-dimensional image, to develop a robust analysis of quantitative performance as a function of the location of the regions of interest (ROIs), their size, the noise level in the image, the activity distribution and the post-smoothing parameters. By simulating acquisitions from an axial slice of a digital brain in Matlab, a quantitative comparison, supported by qualitative figures as explanatory tools, was carried out for all the reconstruction techniques using the Mean Absolute Error (MAE) and the bias-variance relationship. The results show that the performance of each algorithm depends mainly on the number of recorded events originating from the ROI and on the iteration/post-smoothing combination used, which, when chosen appropriately, allows most of the studied algorithms to give similar quantities in most cases. Among the 10 techniques analysed, 3 stood out: ML-EM PSF, ISRA PSF using the smoothed expected values as weighting factor, and filtered back-projection with an adequate post-smoothing are the main contenders for reaching the minimal MAE. Keywords: Positron emission tomography, Maximum-Likelihood Expectation-Maximization, Image Space Reconstruction Algorithm, Filtered Back-Projection, Mean Absolute Error, Quantitative imaging.
Rára, Michael. "Numerické metody registrace obrazů s využitím nelineární geometrické transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401571.
Unnikrishnan, Harikrishnan. "Analysis of Vocal Fold Kinematics Using High Speed Video." UKnowledge, 2016. http://uknowledge.uky.edu/ece_etds/82.
Joginipelly, Arjun. "Implementation of Separable & Steerable Gaussian Smoothers on an FPGA." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/98.
Gulo, Carlos Alex Sander Juvêncio [UNESP]. "Técnicas de paralelização em GPGPU aplicadas em algoritmo para remoção de ruído multiplicativo." Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/89336.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Supported by the evolution of processors, high performance computing has contributed to development in several scientific research areas which require advanced computations, such as image processing, augmented reality, and others. To fully exploit the high performance computing available in these resources and to decrease processing time, it is necessary to apply parallel computing. However, those resources are expensive, which implies the search for alternative ways to use them. Multicore processor architectures and General Purpose Computing on Graphics Processing Units (GPGPU) have become low-cost options, as they were designed to provide infrastructure for high performance computing and to serve real-time applications. With the improvements gained in technologies related to multicomputers, multiprocessors and, more recently, GPGPUs, the parallelization of computational image processing techniques has gained extraordinary prominence. This parallelization is crucial for the use of such techniques in applications that have strong demands in terms of processing time, so that even more complex computational algorithms can be used, as well as their use on images of higher resolution. In this research, the parallelization in GPGPU of a recent image smoothing method based on a variational model is described and discussed. This method was proposed by Jin and Yang (2011) and is in demand due to its computation time and its use with high resolution images. The results obtained are very promising, revealing a speedup of about fifteen times in terms of computational speed.
Gulo, Carlos Alex Sander Juvêncio. "Técnicas de paralelização em GPGPU aplicadas em algoritmo para remoção de ruído multiplicativo /." São José do Rio Preto : [s.n.], 2012. http://hdl.handle.net/11449/89336.
Committee member: José Remo Ferreira Brega
Committee member: Edgard A. Lamounier Junior
Master's degree
Fiala, Martin. "Hardwarová akcelerace filtrace obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412783.
Hessel, Charles. "La décomposition automatique d'une image en base et détail : Application au rehaussement de contraste." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLN017/document.
In this CIFRE thesis, a collaboration between the Center of Mathematics and their Applications, École Normale Supérieure de Cachan, and the company DxO, we tackle the problem of the additive decomposition of an image into base and detail. Such a decomposition is a fundamental tool in image processing. For applications to professional photo editing in DxO PhotoLab, a core requirement is the absence of artifacts. For instance, in the context of contrast enhancement, in which the base is reduced and the detail increased, minor artifacts become highly visible. The distortions thus introduced are unacceptable from the point of view of a photographer. The objective of this thesis is to single out and study the filters most suitable for this task, to improve the best ones, and to define new ones. This requires a rigorous measure of the quality of the base plus detail decomposition. We examine two classic artifacts (halo and staircasing) and discover three more that are equally crucial: contrast halo, compartmentalization, and the dark halo. This leads us to construct five adapted patterns to measure these artifacts. We end up ranking the optimal filters based on these measurements and arrive at a clear decision about the best filters. Two filters stand out, including one we propose.
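In the usual notation for this setting (standard, not specific to the thesis), the decomposition and the contrast-enhancement step it supports are

\[
I = B + D, \qquad I_{\text{enh}} = \alpha B + \beta D, \qquad 0 < \alpha \le 1 \le \beta,
\]

where B is the base layer produced by an edge-preserving smoothing filter and D = I - B is the detail layer; shrinking the base (alpha < 1) while boosting the detail (beta > 1) is exactly the regime in which the artifacts listed above become most visible.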
Silvestre, André Calheiros. "Estabilização digital em tempo real de imagens em seqüência de vídeos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-25072007-102338/.
Undesirable shakes or jiggles, object motion within the image, or intentional motion caused by the camera operator cause differences between consecutive frames of video sequences. Image stabilization is the process of removing the inevitable and undesirable fluctuations, shakes and jiggles; for this purpose, digital processing techniques are nowadays commonly applied in the electronic industry. Digital image stabilization requires computational methods for motion estimation, motion smoothing and motion correction. Various digital processing techniques for image stabilization are described in the literature, and the most suitable technique should be chosen according to the kind of application. Techniques such as block matching for motion estimation and low-pass filters for motion smoothing are found in a great number of papers. This work presents a real-time digital image stabilization system capable of stabilizing video sequences with undesirable translational and rotational displacements between frames.
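The two generic building blocks the abstract mentions (block-matching motion estimation and low-pass motion smoothing) can be sketched as follows; this is a minimal illustration assuming purely translational global motion on grayscale frames, not the thesis's real-time system.

import numpy as np

def estimate_translation(prev, curr, block=32, search=8):
    """Estimate a global (dy, dx) shift between two grayscale frames by SAD
    block matching on a central block (assumes the frames are large enough
    for the search window to stay inside the image)."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.float64)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + block,
                        x0 + dx:x0 + dx + block].astype(np.float64)
            sad = np.abs(ref - cand).sum()       # sum of absolute differences
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

def stabilizing_corrections(shifts, win=15):
    """Low-pass (moving-average) smoothing of the accumulated camera path;
    the returned values are the per-frame corrections to apply."""
    path = np.cumsum(np.asarray(shifts, dtype=np.float64), axis=0)
    kernel = np.ones(win) / win
    smoothed = np.column_stack([np.convolve(path[:, i], kernel, mode='same')
                                for i in range(path.shape[1])])
    return smoothed - path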
Šmirg, Ondřej. "Tvorba 3D modelu čelistního kloubu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-233691.
Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.
Rau, Christian. "Curve Estimation and Signal Discrimination in Spatial Problems." The Australian National University, School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.
Agathangelou, Marios Christaki. "Contour smoothing in segmented images for object-based compression." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286288.
Li, Jian-Cheng. "Generation of simulated ultrasound images using a Gaussian smoothing function." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179261418.
Jibai, Nassim. "Multiscale Feature-Preserving Smoothing of Images and Volumes on the GPU." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00748064.
Jibai, Nassim. "Multi-scale Feature-Preserving Smoothing of Images and Volumes on GPU." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM025/document.
Two-dimensional images and three-dimensional volumes have become a staple ingredient of our artistic, cultural, and scientific appetite. Images capture and immortalize an instant, such as natural scenes, through a photographic camera. Moreover, they can capture details inside biological subjects through the use of CT (computed tomography) scans, X-rays, ultrasound, etc. Three-dimensional volumes of objects are also of high interest in medical imaging, engineering, and analyzing cultural heritage. They are produced using tomographic reconstruction, a technique that combines a large series of 2D scans captured from multiple views. Typically, penetrative radiation is used to obtain each 2D scan: X-rays for CT scans, radio-frequency waves for MRI (magnetic resonance imaging), electron-positron annihilation for PET scans, etc. Unfortunately, their acquisition is influenced by noise caused by different factors. Noise in two-dimensional images can be caused by low-light illumination, electronic defects, a low dose of radiation, and a mispositioned tool or object. Noise in three-dimensional volumes also comes from a variety of sources: the limited number of views, lack of captor sensitivity, high contrasts, the reconstruction algorithms, etc. The constraint that data acquisition be noiseless is unrealistic. It is desirable to reduce, or eliminate, noise at the earliest stage in the application. However, removing noise while preserving the sharp features of an image or volume object remains a challenging task. We propose a multi-scale method to smooth 2D images and 3D tomographic data while preserving features at a specified scale. Our algorithm is controlled using a single user parameter, the minimum scale of features to be preserved. Any variation that is smaller than the specified scale is treated as noise and smoothed, while discontinuities such as corners, edges and detail at a larger scale are preserved. We demonstrate that our smoothed data produce clean images and clean contour surfaces of volumes using standard surface-extraction algorithms. In addition, we compare our results with those of previous approaches. Our method is inspired by anisotropic diffusion; we compute our diffusion tensors from the local continuous histograms of gradients around each pixel in the image.
Jiang, Long yu. "Séparation et détection des trajets dans un guide d'onde en eau peu profonde." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00765238.
Pizarro, Luis [author], and Joachim [academic supervisor] Weickert. "Nonlocal smoothing and adaptive morphology for scalar- and matrix-valued images / Luis Pizarro. Betreuer: Joachim Weickert." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051279585/34.
Full textLindeberg, Tony. "Discrete Scale-Space Theory and the Scale-Space Primal Sketch." Doctoral thesis, KTH, Numerisk analys och datalogi, NADA, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-58570.
Full textQC 20120119
Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.
"Edge-enhancing image smoothing." 2011. http://library.cuhk.edu.hk/record=b5894822.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 62-69).
Abstracts in English and Chinese.
Table of contents: Chapter 1, Introduction (1.1 Organization); Chapter 2, Background and Motivation (2.1 1D Mondrian Smoothing; 2.2 2D Formulation); Chapter 3, Solver (3.1 More Analysis); Chapter 4, Edge Extraction (4.1 Related Work; 4.2 Method and Results; 4.3 Summary); Chapter 5, Image Abstraction and Pencil Sketching (5.1 Related Work; 5.2 Method and Results; 5.3 Summary); Chapter 6, Clip-Art Compression Artifact Removal (6.1 Related Work; 6.2 Method and Results; 6.3 Summary); Chapter 7, Layer-Based Contrast Manipulation (7.1 Related Work; 7.2 Method and Results: 7.2.1 Edge Adjustment, 7.2.2 Detail Magnification, 7.2.3 Tone Mapping; 7.3 Summary); Chapter 8, Conclusion and Discussion; Bibliography.
Lumsdaine, A., J. L. Wyatt Jr., and I. M. Elfadel. "Nonlinear Analog Networks for Image Smoothing and Segmentation." 1991. http://hdl.handle.net/1721.1/5983.
Chia-Yung, Jui. "The Application of Diffusion Equation in Image Smoothing." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0005-1301200723214300.
Lin, Pei-Chien, and 林沛鑑. "A New Edge Smoothing Method for Image Enlargement." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88959066313828085126.
National Chung Hsing University, Department of Computer Science and Engineering, 2010.
Image up-scaling is an important part of digital image processing, and many algorithms have been proposed to enlarge images; nearest-neighbor, bilinear and bicubic interpolation are three important examples. However, these algorithms still suffer from the drawbacks of blurry and/or blocky effects in the edge portions of the enlarged image. In this thesis, a new method for enlarging images is proposed. The method tries to reduce the blurry and blocky effects by first using an existing image up-scaling method, such as nearest-neighbor or bilinear interpolation, to enlarge the image, and then smoothing the edge portions of the enlarged image. The Canny edge detector is used to find the edges of the image. Experimental results show that the proposed method performs better in the edge portions of the enlarged image.
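A rough sketch of the pipeline this abstract describes (enlarge with a standard interpolator, locate edges with the Canny detector, then smooth only around the detected edges); the parameters and the simple Gaussian smoothing rule are hypothetical, not the thesis's actual method.

import cv2
import numpy as np

def enlarge_and_smooth_edges(img, scale=2, canny_lo=50, canny_hi=150):
    """Enlarge an 8-bit image, then smooth only its edge regions (illustrative)."""
    h, w = img.shape[:2]
    big = cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(big, cv2.COLOR_BGR2GRAY) if big.ndim == 3 else big
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    mask = cv2.dilate(edges, np.ones((3, 3), np.uint8)) > 0   # widen the edge band
    blurred = cv2.GaussianBlur(big, (5, 5), 0)
    out = big.copy()
    out[mask] = blurred[mask]                                 # smooth only near edges
    return out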
Jui, Chia-Yung, and 芮嘉勇. "The Application of Diffusion Equation in Image Smoothing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/85134048276396242407.
National Chung Hsing University, Department of Applied Mathematics, 2007.
In this thesis we present the application of the diffusion equation to image smoothing. The nonlinear diffusion equation has been shown to smooth the image while retaining the edges at the same time. Here, we propose the convection-diffusion equation as an extension of the nonlinear equation. A modified finite element scheme is presented to prevent the numerical oscillation caused by discontinuous solutions. Numerical results show the advantage of the image smoothing algorithm that uses the convection-diffusion equation with the modified finite element scheme.
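For orientation (a generic form, not necessarily the exact model of the thesis), Perona-Malik-type nonlinear diffusion and a convection-diffusion extension of it read

\[
\frac{\partial u}{\partial t}=\nabla\cdot\bigl(c(\lvert\nabla u\rvert)\,\nabla u\bigr),\qquad c(s)=\frac{1}{1+s^{2}/K^{2}},
\]

\[
\frac{\partial u}{\partial t}+\mathbf{v}\cdot\nabla u=\nabla\cdot\bigl(c(\lvert\nabla u\rvert)\,\nabla u\bigr),
\]

where the diffusivity c suppresses smoothing across strong gradients (edges) and the convection term transports image content along a velocity field v.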
Shr, Yu-Chin, and 史玉琴. "Color Image Compression by Adaptive Region Growing and False Contours Smoothing." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/20889090488658299883.
National Chiao Tung University, Department of Computer and Information Science, 1997.
In this thesis, we propose a segmentation-based compression method for color images. First, we split the image by bottom-up quadtree decomposition and then merge regions by adaptive region growing. After that, the contours and textures of the segmented regions are encoded: the region texture is encoded by polynomial approximation, and the region contour is encoded using a chain code. Finally, because there is some distortion in the reconstructed images, we detect the false contours and apply our smoothing algorithm to obtain a better image. Experimental results show that most of the complex texture can be reconstructed, and after the postprocessing we usually obtain better image quality.
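As a small illustration of one of the encoding steps mentioned above, the classical 8-directional Freeman chain code maps each step between successive contour pixels to a digit 0-7; this is a minimal sketch, not the thesis's codec.

# 8-directional Freeman chain code for a pixel contour (illustrative only).
# Directions in (row, col) image coordinates: 0=E, 1=NE, 2=N, ..., 7=SE.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(contour):
    """contour: list of (row, col) pixels, each 8-adjacent to the next."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        codes.append(DIRS[(r1 - r0, c1 - c0)])
    return codes

# Example: a tiny square traversed clockwise
print(chain_code([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # [0, 6, 4, 2]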
Hillebrand, Martin [Verfasser]. "On robust corner preserving smoothing in image processing / von Martin Hillebrand." 2003. http://d-nb.info/967514444/34.
Bashala, Jenny Mwilambwe. "Development of a new image compression technique using a grid smoothing technique." 2013. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001112.
This work aims to implement a lossy image compression scheme that uses a graph-based approach. On the one hand, this new method should reach high compression rates with good visual quality, while on the other hand it may lead to the following sub-problems: efficient classification of image data with the use of bilateral mesh filtering; transformation of the image into a graph with grid smoothing; reduction of the graph by means of mesh decimation techniques; reconstruction of the reduced graph into an image; and quality analysis of the reconstructed images.
Lo, Chung-Ming, and 羅崇銘. "Region-based image retrieval system using perceptual smoothing and region adjacency graph." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/70173868280600288134.
National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, 2003.
To improve accuracy in content-based image retrieval, region-based retrieval has been adopted, following the human notion of objects. The proposed region-based image retrieval utilizes not only the dominant features of each region but also the correlations between neighboring regions. An improved drainage watershed transformation is used to provide effective and accurate object segmentation. In order to be more sensitive to color variation, color images are partitioned by extracting features from the HSV color space. Moreover, small details and noise are removed by perceptual smoothing to reduce oversegmentation. Then, through the region adjacency graph (RAG), each region is characterized by its salient lower-level features and the whole image is represented by a semantic, high-level image description. In the graph matching procedure, retrieval is performed by comparing the RAG of the query image with that of each image in the database. In addition, a simple and efficient subgraph isomorphism algorithm is adopted to reduce comparison time. Experimental results exhibit and evaluate the performance of the proposed image retrieval system.
Wu, Wei-Chen, and 吳維宸. "Edge Curve Scaling and Smoothing with Cubic Spline Interpolation for Image Up-Scaling." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/00072826594831767376.
Full text國立清華大學
資訊工程學系
101
Image up-scaling is an important technique for increasing the resolution of an image. Earlier interpolation-based approaches such as the bilinear and bicubic methods cause blurring and ringing artifacts in edge regions of the up-scaled image due to the loss of high-frequency details. Recent approaches such as local self-example super-resolution achieve very promising up-scaling results, but their computational cost is high because they recover the high-frequency components of the whole image. In this thesis, we propose an image up-scaling method based on an up-scaled edge map. By predicting the edge regions of the up-scaled image, we recover the high-frequency components of those edge regions to improve sharpness and reduce ringing artifacts. We propose an edge curve scaling method with cubic spline interpolation to up-scale an edge map. If an edge curve is directly fed to the cubic spline interpolation function for up-scaling, the scaled edge curve exhibits zigzag artifacts; we therefore also propose a simple smoothing function to avoid the zigzag problem while maintaining the contour shape. Our method reduces execution time by 90% because we only perform high-frequency recovery on edge regions, whereas other methods recover the high-frequency components of every point in the up-scaled image. Experimental results show that we achieve performance similar to the local self-example super-resolution method.
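A minimal sketch of the edge-curve scaling idea, assuming an ordered list of edge-pixel coordinates; the thesis's exact parameterization and anti-zigzag smoothing function are not reproduced here, and the moving-average step is only a stand-in.

import numpy as np
from scipy.interpolate import CubicSpline

def upscale_edge_curve(points, scale=2, smooth_win=5):
    """Scale an ordered edge curve with cubic-spline interpolation, then
    lightly smooth it with a moving average to suppress zigzag artifacts."""
    pts = np.asarray(points, dtype=np.float64) * scale      # scale the coordinates
    t = np.arange(len(pts))                                  # parameter along the curve
    sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])
    t_fine = np.linspace(0, len(pts) - 1, scale * len(pts))  # denser samples
    curve = np.stack([sx(t_fine), sy(t_fine)], axis=1)
    kernel = np.ones(smooth_win) / smooth_win                # simple anti-zigzag smoothing
    curve[:, 0] = np.convolve(curve[:, 0], kernel, mode='same')
    curve[:, 1] = np.convolve(curve[:, 1], kernel, mode='same')
    return curve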
Gao, Song. "A new image segmentation and smoothing method based on the Mumford-Shah variational model." Thesis, 2003. http://spectrum.library.concordia.ca/2395/1/MQ91033.pdf.
Pao, Shun-An, and 包順安. "Using Image Smoothing and Feature Extraction to Improve the Classification Accuracy of Hyperspectral Imagery." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/08455857999516805429.
National Cheng Kung University, Department of Surveying Engineering, 1999.
Multispectral sensors have been widely used to observe the Earth's surface since the 1960s. However, owing to inadequate sensor technology, traditional sensors were limited to collecting spectral data in fewer than 20 bands. In recent years, spectral image sensors have been improved to collect spectral data in several hundred bands; these are called hyperspectral image scanners. For example, the AVIRIS scanner developed by JPL of NASA provides 224 contiguous spectral channels. Theoretically, using hyperspectral images should increase our ability to classify land use/cover types. However, the data classification approaches that were successfully applied to multispectral data in the past are not as effective for hyperspectral data. As the dimensionality increases, the number of training samples needed to characterize the classes increases as well. If the number of available training samples fails to catch up with the need, which is the case for hyperspectral data, parameter estimation becomes inaccurate. The classification accuracy first grows and then declines as the number of spectral bands increases, which is often referred to as the Hughes phenomenon. Generally speaking, classification performance depends on four factors: class separability, the training sample size, dimensionality, and classifier type. To improve classification performance, attention is often focused on improving the factors other than class separability, because class separability is usually considered inherent and predetermined. One objective of this thesis is to call attention to the fact that class separability can be increased: a lowpass filter is proposed as a means of increasing class separability when a data set consists of multi-pixel objects. By employing feature extraction, the number of features can be reduced substantially without sacrificing significant information, and this thesis reviews several feature extraction methods that have been developed to speed up the process and increase the precision of classification. For the classifier, the Gaussian maximum likelihood classifier is used. Our experiments show that when the number of training samples is relatively small compared to the dimensionality, maximum likelihood estimates of the parameters have large variances, leading to a large classification error. The lowpass spatial filter can increase class separability and classification accuracy. With feature extraction, good results were obtained when the ratio of the training sample size to dimensionality was 4. Feature extraction combined with the lowpass spatial filter can increase classification accuracy further.
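A compact sketch of the two ingredients discussed above (a spatial lowpass filter applied band by band, followed by a Gaussian maximum-likelihood classifier); the feature-extraction step is omitted, and all function and parameter names are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def lowpass_per_band(cube, size=3):
    """cube: (rows, cols, bands). Mean-filter each band to raise class separability."""
    return np.stack([uniform_filter(cube[..., b], size=size)
                     for b in range(cube.shape[-1])], axis=-1)

def gaussian_ml_classify(cube, train_pixels, train_labels):
    """Gaussian maximum-likelihood classification of every pixel in the cube."""
    x = cube.reshape(-1, cube.shape[-1])
    classes = np.unique(train_labels)
    scores = []
    for c in classes:
        xc = train_pixels[train_labels == c]
        mu, cov = xc.mean(axis=0), np.cov(xc, rowvar=False)
        cov += 1e-6 * np.eye(cov.shape[0])        # regularize small-sample covariances
        diff = x - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        # log-likelihood up to a constant: -0.5 * (Mahalanobis distance + log|cov|)
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff) + logdet))
    return classes[np.argmax(scores, axis=0)].reshape(cube.shape[:2])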