Dissertations on the topic "Smoothing image"

To view other types of publications on this topic, follow the link: Smoothing image.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the top 50 dissertations for research on the topic "Smoothing image".

Next to every entry in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Storve, Sigurd. "Kalman Smoothing Techniques in Medical Image Segmentation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18823.

Full text of the source
Abstract:
An existing C++ library for efficient segmentation of ultrasound recordings by means of Kalman filtering, the real-time contour tracking library (RCTL), is used as a building block to implement and assess the performance of different Kalman smoothing techniques: fixed-point, fixed-lag, and fixed-interval smoothing. An experimental smoothing technique based on fusion of tracking results and learned mean state estimates at different positions in the heart-cycle is proposed. A set of 29 recordings with ground-truth left ventricle segmentations provided by a trained medical doctor is used for the performance evaluation. The clinical motivation is to improve the accuracy of automatic left-ventricle tracking, which can be applied to improve the automatic measurement of clinically important parameters such as the ejection fraction. The evaluation shows that none of the smoothing techniques offer significant improvements over regular Kalman filtering. For the Kalman smoothing algorithms, it is argued to be a consequence of the way edge-detection measurements are performed internally in the library. The statistical smoother's lack of improvement is explained by too large interpersonal variations; the mean left-ventricular deformation pattern does not generalize well to individual cases.
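For orientation, fixed-interval smoothing is commonly realized as a Rauch-Tung-Striebel (RTS) backward pass over the estimates of a forward Kalman filter. The sketch below (Python/NumPy) illustrates only that generic scheme; the function names, matrices and state model are illustrative assumptions and are unrelated to the RCTL library used in the thesis.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Forward Kalman filter. Returns filtered and one-step-predicted means/covariances."""
    n, dx = len(y), x0.size
    xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))
    xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))
    x, P = x0, P0
    for k in range(n):
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        xp[k], Pp[k] = x, P
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (np.atleast_1d(y[k]) - H @ x)     # update with measurement k
        P = (np.eye(dx) - K @ H) @ P
        xf[k], Pf[k] = x, P
    return xf, Pf, xp, Pp

def rts_smooth(xf, Pf, xp, Pp, F):
    """Fixed-interval (Rauch-Tung-Striebel) smoother: a backward pass over the filter output."""
    n, dx = xf.shape
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, -1, -1):
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])    # smoother gain
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs, Ps
```

A typical use would be a constant-velocity model per tracked contour point, with the smoother applied once the whole heart-cycle has been filtered.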
2

Jarrett, David Ward 1963. "Digital image noise smoothing using high frequency information." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276599.

Full text of the source
Abstract:
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high frequency information to augment existing noise smoothing methods are investigated: two component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high frequency residual, extracted from the noisy image using a two component source model. The lower variance and increased stationarity of the residual, compared to the original image, increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
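As a rough illustration of the second idea, SDE-style smoothing combines a low-pass filtered image with a second-derivative (Laplacian) term that re-steepens edges. The snippet below is only a hedged sketch of that principle; the Gaussian low-pass, the Laplacian operator and the weight alpha are assumptions for illustration, not the exact filters from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sde_like_smooth(noisy, sigma=1.5, alpha=0.5):
    """Smooth the noisy image with a low-pass filter, then enhance its edges
    with a second-derivative term (subtracting the Laplacian steepens edges).
    sigma and alpha are illustrative tuning parameters."""
    noisy = noisy.astype(float)
    base = gaussian_filter(noisy, sigma)   # low-pass filtered image
    d2 = laplace(noisy)                    # second-derivative information from the noisy image
    return base - alpha * d2
```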
3

Hillebrand, Martin. "On robust corner preserving smoothing in image processing." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967514444.

Full text of the source
4

Ozmen, Neslihan. "Image Segmentation And Smoothing Via Partial Differential Equations." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610395/index.pdf.

Full text of the source
Abstract:
In image processing, partial differential equation (PDE) based approaches have been extensively used in segmentation and smoothing applications. The Perona-Malik nonlinear diffusion model is the first PDE based method used in image smoothing tasks. Afterwards the classical Mumford-Shah model was developed to solve both image segmentation and smoothing problems, and it is based on the minimization of an energy functional. It has numerous application areas such as edge detection, motion analysis, medical imagery, object tracking, etc. The model is a way of finding a partition of an image by using a piecewise smooth representation of the image. Unfortunately, numerical procedures for minimizing the Mumford-Shah functional have some difficulties because the problem is non-convex and has numerous local minima, so approximate approaches have been proposed. Two such methods are the Ambrosio-Tortorelli approximation and the Chan-Vese active contour method. Ambrosio and Tortorelli have developed a practical numerical implementation of the Mumford-Shah model which is based on an elliptic approximation of the original functional. The Chan-Vese model is a piecewise constant generalization of the Mumford-Shah functional and is based on a level set formulation. Another widely used image segmentation technique is the "Active Contours (Snakes)" model, and it is related to the Chan-Vese model. In this study, all these approaches have been examined in detail. Mathematical and numerical analyses of these models are studied and experiments are performed to compare their performance.
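For reference, the Perona-Malik diffusion named at the start of the abstract can be written as an explicit iteration in a few lines. The sketch below (NumPy, periodic boundaries via np.roll, exponential conductance, assumed parameter values) is a minimal illustration, not the scheme used in the thesis.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit 4-neighbour Perona-Malik nonlinear diffusion.
    kappa: edge-stopping threshold; dt: time step (<= 0.25 for stability)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # conductance g(|grad u|)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u         # differences to the four neighbours
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```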
5

Athreya, Jayantha Krishna V. "An analog VLSI architecture for image smoothing and segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0028/MQ39633.pdf.

Full text of the source
6

MORGAN, KEITH PATRICK. "IMPROVED METHODS OF IMAGE SMOOTHING AND RESTORATION (NONSTATIONARY MODELS)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187959.

Full text of the source
Abstract:
The problems of noise removal, and simultaneous noise removal and deblurring of imagery are common to many areas of science. An approach which allows for the unified treatment of both problems involves modeling imagery as a sample of a random process. Various nonstationary image models are explored in this context. Attention is directed to identifying the model parameters from imagery which has been corrupted by noise and possibly blur, and the use of the model to form an optimal reconstruction of the image. Throughout the work, emphasis is placed on both theoretical development and practical considerations involved in achieving this reconstruction. The results indicate that the use of nonstationary image models offers considerable improvement over traditional techniques.
7

Ramsay, Tim. "A bivariate finite element smoothing spline applied to image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ54429.pdf.

Full text of the source
8

Crespo, José. "Morphological connected filters and intra-region smoothing for image segmentation." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15771.

Full text of the source
9

Pérez, Benito Cristina. "Color Image Processing based on Graph Theory." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/123955.

Full text of the source
Abstract:
Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. Image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing is aimed at reducing noise and sharpening at improving or recovering imprecise or damaged information of image details and edges with insufficient sharpness or blurred content that prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Besides, there are also many sharpening techniques, which try to combat the loss of information due to blurring of image texture and need to contemplate the existence of noise in the image they process. When dealing with a noisy image, any sharpening technique may amplify the noise. Although the intuitive idea to solve this last case would be previous filtering and later sharpening, this approach has proved not to be optimal: the filtering could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the model we propose is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for two fundamental problems in image processing: smoothing and sharpening. To approach high performance image smoothing we use the proposed model to determine if a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs from a filter with high smoothing capability and a softer one to smooth edge/detail regions. Further, another application of our model allows the pixel characterization to be used to successfully perform a simultaneous smoothing and sharpening of color images. In this way, we address one of the classical challenges within the image processing field. We compare all the image processing techniques proposed with other state-of-the-art methods to show that they are competitive both from an objective (numerical) and a visual evaluation point of view.
Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
10

Howell, John R. "Analysis Using Smoothing Via Penalized Splines as Implemented in LME() in R." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1702.pdf.

Full text of the source
11

Lapointe, Marc R. "Substitution of the statistical range for the variance in two local noise smoothing algorithms /." Online version of thesis, 1991. http://ritdml.rit.edu/handle/1850/11068.

Full text of the source
12

Schaefer, Charles Robert. "Magnification of bit map images with intelligent smoothing of edges." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9950.

Full text of the source
13

Hu, Xin. "An Improved 2D Adaptive Smoothing Algorithm in Image Noise Removal and Feature Preservation." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1235722460.

Full text of the source
14

Teuber, Tanja [Verfasser], and Gabriele [Akademischer Betreuer] Steidl. "Anisotropic Smoothing and Image Restoration Facing Non-Gaussian Noise / Tanja Teuber. Betreuer: Gabriele Steidl." Kaiserslautern : Technische Universität Kaiserslautern, 2012. http://d-nb.info/102759428X/34.

Full text of the source
15

Ungan, Cahit Ugur. "Nonlinear Image Restoration." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606796/index.pdf.

Full text of the source
Abstract:
This thesis analyzes the process of deblurring degraded images generated by space-variant nonlinear image systems with Gaussian observation noise. The restoration of blurred images is performed using two methods: a modified version of the Optimum Decoding Based Smoothing Algorithm and the Bootstrap Filter Algorithm, which is a version of particle filtering methods. The MATLAB software is used to perform the simulations of image estimation. The results of simulations for various observation and image models are presented.
16

Altinoklu, Metin Burak. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610415/index.pdf.

Full text of the source
Abstract:
In this thesis, image segmentation methods based on the Mumford-Shah variational approach have been studied. By obtaining an optimum point of the Mumford-Shah functional, which consists of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. This piecewise smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford-Shah functional, it cannot be used directly for image segmentation; there are, however, several approaches that approximate it. In the first approach, suggested by Ambrosio and Tortorelli, the functional is regularized in a special way; the regularized (Ambrosio-Tortorelli) functional is supposed to be gamma-convergent to the Mumford-Shah functional. In the second approach, the Mumford-Shah functional is minimized in two steps: in the first minimization step, the edge set is held constant and the resulting functional is minimized; the second minimization step updates the edge set by using level set methods. This second approximation to the Mumford-Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (the Euler-Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment, and the overall performance of the algorithms is investigated through computer simulations over a series of images from simple to complicated.
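To make the two-step idea concrete: in the piecewise-constant (Chan-Vese) case, if the boundary-length penalty is ignored, the alternation reduces to fitting the two region means and reassigning pixels to the closer mean. The snippet below is a deliberately simplified sketch under that assumption; the thesis itself evolves a level set and keeps the length term.

```python
import numpy as np

def two_phase_means(img, n_iter=20):
    """Alternate the two minimization steps for a piecewise-constant model:
    (1) fit the region means c1, c2 for a fixed partition,
    (2) reassign each pixel to the closer mean.
    The curve-length penalty and level-set update are omitted for clarity;
    both regions are assumed to stay non-empty."""
    mask = img > img.mean()                        # initial partition
    c1 = c2 = 0.0
    for _ in range(n_iter):
        c1, c2 = img[mask].mean(), img[~mask].mean()
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask, c1, c2
```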
17

Parekh, Siddharth Avinash. "A comparison of image processing algorithms for edge detection, corner detection and thinning." University of Western Australia. Centre for Intelligent Information Processing Systems, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0073.

Full text of the source
Abstract:
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, processing of real-time data is constrained by limited resources. Thus, it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is an implementation and comparative study of algorithms related to various image processing techniques like edge detection, corner detection and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Generally, real-time data is corrupted with noise. This thesis also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow.
18

Howlett, John David. "Size Function Based Mesh Relaxation." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd761.pdf.

Full text of the source
19

Létourneau, Étienne. "Impact of algorithm, iterations, post-smoothing, count level and tracer distribution on single-frame positrom emission tomography quantification using a generalized image space reconstruction algorithm." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110750.

Full text of the source
Abstract:
Positron Emission Tomography (PET) is a medical imaging technique that traces functional processes inside a subject. One of the common applications of this device is to perform a subjective diagnosis from the images. However, quantitative imaging (QI) allows one to perform an objective analysis as well as providing extra information such as time activity curves (TAC) and visual details that the eye cannot see. The aim of this work was, by comparing several reconstruction algorithms such as ML-EM PSF, ISRA PSF and its related algorithms, and FBP for single-frame imaging, to develop a robust analysis of the quantitative performance depending on the region of interest (ROI), the size of the ROI, the noise level, the activity distribution and the post-smoothing parameters. By simulating an acquisition using a 2-D digital axial brain phantom in MATLAB, the techniques were compared from a quantitative point of view, supported by visual figures as explanatory tools, using the Mean Absolute Error (MAE) and the bias-variance relation. Results show that the performance of each algorithm depends mainly on the number of counts coming from the ROI and on the iteration/post-smoothing combination which, when adequately chosen, allows nearly every algorithm to give similar quantitative results in most cases. Among the 10 analysed techniques, 3 distinguished themselves: ML-EM PSF, ISRA PSF with the smoothed expected data as weight, and FBP with adequate post-smoothing were the main contenders for achieving the lowest MAE. Keywords: Positron Emission Tomography, Maximum-Likelihood Expectation-Maximization, Image Space Reconstruction Algorithm, Filtered Backprojection, Mean Absolute Error, Quantitative Imaging.
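For orientation, the basic ML-EM update referred to above multiplies the current image estimate by back-projected ratios of measured to expected counts. The following is a hedged, toy-scale sketch (dense system matrix, no PSF modelling, no post-smoothing); variable names and defaults are assumptions.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic ML-EM reconstruction: x <- x * A^T(y / (A x)) / (A^T 1).
    A: (n_bins, n_voxels) system matrix; y: measured counts."""
    x = np.ones(A.shape[1])                       # flat, non-negative start image
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        expected = A @ x                          # forward projection
        ratio = y / np.maximum(expected, eps)     # measured vs expected counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```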
20

Rára, Michael. "Numerické metody registrace obrazů s využitím nelineární geometrické transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401571.

Full text of the source
Abstract:
The goal of the thesis is to create simple software that processes input data degraded by atmospheric seeing and produces an output image as close to reality as possible. A further output is a set of images illustrating the displacement of every input image relative to their average image.
21

Unnikrishnan, Harikrishnan. "ANALYSIS OF VOCAL FOLD KINEMATICS USING HIGH SPEED VIDEO." UKnowledge, 2016. http://uknowledge.uky.edu/ece_etds/82.

Full text of the source
Abstract:
Vocal folds are the twin in-foldings of the mucous membrane stretched horizontally across the larynx. They vibrate, modulating the constant air flow initiated from the lungs; the pulsating pressure wave blowing through the glottis is thus the source for voiced speech production. Study of vocal fold dynamics during voicing is critical for the treatment of voice pathologies. Since the vocal folds move at 100-350 cycles per second, their visual inspection is currently done by stroboscopy, which merges information from multiple cycles to present an apparent motion. High Speed Digital Laryngeal Imaging (HSDLI), with a temporal resolution of up to 10,000 frames per second, has been established as better suited for assessing vocal fold vibratory function through direct recording, but its widespread use is limited by a lack of consensus on modalities such as the features to be examined. Image processing techniques that circumvent the tedious and time-consuming effort of examining large volumes of recordings still have room for improvement, and fundamental questions such as the required frame rate or resolution for the recordings are still not adequately answered. HSDLI also cannot provide absolute physical measurements of the anatomical features and vocal fold displacement. This work addresses these challenges through improved signal processing. A vocal fold edge extraction technique with subpixel accuracy, suited even for the hard-to-record pediatric population, is developed first. The algorithm, which is equally applicable to pediatric and adult subjects, is implemented to facilitate user inspection and intervention. Objective features describing the fold dynamics, extracted from the edge displacement waveform, are proposed and analyzed on a diverse dataset of healthy males, females and children. The sampling and quantization noise present in the recordings is analyzed and methods to mitigate it are investigated. A customized Kalman smoothing and spline interpolation of the displacement waveform is found to improve the stability of feature estimation. The relationship between frame rate, spatial resolution and vibration for efficient capture of information is derived. Finally, to address the inability to make physical measurements, a structured light projection calibrated with respect to the endoscope is prototyped.
22

Joginipelly, Arjun. "Implementation of Separable & Steerable Gaussian Smoothers on an FPGA." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/98.

Full text of the source
Abstract:
Smoothing filters have been extensively used for noise removal and image restoration. Directional filters are widely used in computer vision and image processing tasks such as motion analysis, edge detection, line parameter estimation and texture analysis. It is practically impossible to tune the filters to all possible positions and orientations in real time due to the huge computation requirement. An efficient alternative is to design a few basis filters and express the output of a directional filter as a weighted sum of the basis filter outputs. Directional filters having these properties are called "steerable filters." This thesis emphasizes the implementation of the proposed computationally efficient separable and steerable Gaussian smoothers on a Xilinx Virtex-II Pro FPGA platform. FPGAs (Field Programmable Gate Arrays) consist of a collection of logic blocks including lookup tables, flip-flops and some amount of random access memory, all wired together using an array of interconnects. The proposed technique [2] is implemented on FPGA hardware, taking advantage of parallelism and pipelining.
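The steering property described above can be demonstrated with first-derivative-of-Gaussian filters, which are both separable and steerable: the response at any angle is a cosine/sine-weighted sum of the x and y basis responses. The sketch below is a software illustration of that property only, not the FPGA implementation; sigma is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_derivative_basis(img, sigma=2.0):
    """Separable basis filters: first derivative of a Gaussian along x and along y."""
    img = img.astype(float)
    gx = gaussian_filter1d(gaussian_filter1d(img, sigma, axis=0, order=0),
                           sigma, axis=1, order=1)   # d/dx response
    gy = gaussian_filter1d(gaussian_filter1d(img, sigma, axis=1, order=0),
                           sigma, axis=0, order=1)   # d/dy response
    return gx, gy

def steer(gx, gy, theta):
    """Directional response at angle theta as a weighted sum of the basis outputs."""
    return np.cos(theta) * gx + np.sin(theta) * gy
```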
23

Gulo, Carlos Alex Sander Juvêncio [UNESP]. "Técnicas de paralelização em GPGPU aplicadas em algoritmo para remoção de ruído multiplicativo." Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/89336.

Full text of the source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Supported by the evolution of processors, high performance computing has contributed to development in several scientific research areas that require advanced computation, such as image processing and augmented reality. To fully exploit the high performance computing available in these resources and to decrease processing time, it is necessary to apply parallel computing. However, those resources are expensive, which motivates the search for alternative ways to use them. Multicore processor architectures and General Purpose Computing on Graphics Processing Units (GPGPU) have become low-cost options, as they were designed to provide infrastructure for high performance computing and to serve real-time applications. With the improvements gained in technologies related to multicomputers, multiprocessors and, more recently, GPGPUs, the parallelization of computational image processing techniques has gained extraordinary prominence. This parallelization is crucial for the use of such techniques in applications with strong processing-time demands, so that even more complex computational algorithms can be used and applied to images of higher resolution. In this research, the GPGPU parallelization of a recent image smoothing method based on a variational model is described and discussed. This method was proposed by Jin and Yang (2011) and is demanding in terms of computation time, particularly when applied to high resolution images. The results obtained are very promising, revealing a speedup of about fifteen times in terms of computational speed.
24

Gulo, Carlos Alex Sander Juvêncio. "Técnicas de paralelização em GPGPU aplicadas em algoritmo para remoção de ruído multiplicativo /." São José do Rio Preto : [s.n.], 2012. http://hdl.handle.net/11449/89336.

Full text of the source
Abstract:
Advisor: Antonio Carlos Sementille
Committee: José Remo Ferreira Brega
Committee: Edgard A. Lamounier Junior
Abstract: Supported by the evolution of processors, high performance computing has contributed to development in several scientific research areas that require advanced computation, such as image processing and augmented reality. To fully exploit the high performance computing available in these resources and to decrease processing time, it is necessary to apply parallel computing. However, those resources are expensive, which motivates the search for alternative ways to use them. Multicore processor architectures and General Purpose Computing on Graphics Processing Units (GPGPU) have become low-cost options, as they were designed to provide infrastructure for high performance computing and to serve real-time applications. With the improvements gained in technologies related to multicomputers, multiprocessors and, more recently, GPGPUs, the parallelization of computational image processing techniques has gained extraordinary prominence. This parallelization is crucial for the use of such techniques in applications with strong processing-time demands, so that even more complex computational algorithms can be used and applied to images of higher resolution. In this research, the GPGPU parallelization of a recent image smoothing method based on a variational model is described and discussed. This method was proposed by Jin and Yang (2011) and is demanding in terms of computation time, particularly when applied to high resolution images. The results obtained are very promising, revealing a speedup of about fifteen times in terms of computational speed.
Master's thesis
25

Fiala, Martin. "Hardwarová akcelerace filtrace obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412783.

Full text of the source
Abstract:
This master's thesis provides an introduction to image filtering, in particular to its theoretical foundations, which originate in linear systems theory and mathematical analysis. Several approaches and methods used for image smoothing and for edge detection are described, mainly the Sobel operator, the Laplace operator and the median filter. The main content of the project is a discussion of approaches to hardware acceleration of image filtering and the design of time-efficient software and hardware implementations of the filters, in the form of program functions and combinational circuits, using theoretical knowledge about the time complexity of algorithms. Hardware and software implementations of the named filters were also produced; for every filter, the filtering time was measured and the results were compared and analyzed.
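As a plain software reference for two of the filters named above (before any hardware acceleration), the snippet below computes a Sobel gradient magnitude and a median smoothing with scipy.ndimage; kernel sizes are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import sobel, median_filter

def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel responses."""
    img = img.astype(float)
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

def median_smooth(img, size=3):
    """Rank-order (median) smoothing over a size x size neighbourhood."""
    return median_filter(img, size=size)
```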
26

Hessel, Charles. "La décomposition automatique d'une image en base et détail : Application au rehaussement de contraste." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLN017/document.

Full text of the source
Abstract:
In this CIFRE thesis, a collaboration between the Center of Mathematics and their Applications, École Normale Supérieure de Cachan, and the company DxO, we tackle the problem of the additive decomposition of an image into base and detail. Such a decomposition is a fundamental tool in image processing. For applications to professional photo editing in DxO Photolab, a core requirement is the absence of artifacts. For instance, in the context of contrast enhancement, in which the base is reduced and the detail increased, minor artifacts become highly visible. The distortions thus introduced are unacceptable from the point of view of a photographer. The objective of this thesis is to single out and study the most suitable filters to perform this task, to improve the best ones and to define new ones. This requires a rigorous measure of the quality of the base plus detail decomposition. We examine two classic artifacts (halo and staircasing) and discover three more sorts that are equally crucial: contrast halo, compartmentalization, and the dark halo. This leads us to construct five adapted patterns to measure these artifacts. We end up ranking the optimal filters based on these measurements, and arrive at a clear decision about the best filters. Two filters stand out, including one we propose.
27

Silvestre, André Calheiros. "Estabilização digital em tempo real de imagens em seqüência de vídeos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-25072007-102338/.

Full text of the source
Abstract:
Undesirable shakes or jiggles, object motion within the image, or desirable motions caused by the camera operator cause image differences in consecutive frames of video sequences. Image stabilization consists of the process of removing inevitable and undesirable fluctuations, shakes and jiggles; with this purpose, digital processing techniques have nowadays been commonly applied in the electronic industry. Digital image stabilization requires computational methods for motion estimation, motion smoothing and motion correction. In the literature, various digital processing techniques for image stabilization are described, and the most suitable technique should be chosen according to the kind of application. Techniques such as block matching, used in motion estimation, and low-pass filters, used in motion smoothing, are found in a great number of papers. This work presents a real-time digital image stabilization system capable of stabilizing video sequences with undesirable translational and rotational displacements between frames.
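The pipeline sketched in the abstract (estimate inter-frame motion, smooth the trajectory, compensate) can be illustrated for pure translation as follows. This is a hedged toy version with an exhaustive integer-shift search and a moving-average trajectory filter; the thesis additionally handles rotation, and all parameters here are assumptions.

```python
import numpy as np

def estimate_shift(prev, curr, search=8):
    """Exhaustive block-matching search for the global integer translation that
    minimises the mean absolute difference (border wrap-around is ignored for brevity)."""
    prev, curr = prev.astype(float), curr.astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)) - curr).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def jitter_to_remove(shifts, window=15):
    """Accumulate per-frame shifts into a camera path, low-pass it with a moving
    average (the motion-smoothing step), and return the residual jitter that
    each frame should be shifted back by to compensate."""
    path = np.cumsum(np.asarray(shifts, dtype=float), axis=0)
    kernel = np.ones(window) / window
    smooth = np.column_stack([np.convolve(path[:, i], kernel, mode='same')
                              for i in range(path.shape[1])])
    return path - smooth
```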
28

Šmirg, Ondřej. "Tvorba 3D modelu čelistního kloubu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-233691.

Full text of the source
Abstract:
The dissertation thesis deals with 3D reconstruction of the temporomandibular joint from 2D slices of tissue obtained by magnetic resonance imaging. Current practice uses 2D MRI slices in diagnosis, whereas 3D models have many advantages for diagnosis based on the knowledge of spatial information. Contemporary medicine uses 3D models of tissues, but for the temporomandibular joint there is a problem with segmenting the articular disc. This small tissue, which has low contrast and statistical characteristics very similar to its neighborhood, is very complicated to segment. For segmentation of the articular disc, new methods were developed based on knowledge of the anatomy of the joint area around the disc and on genetic-algorithm-based statistics. A set of 2D slices has different resolutions in the x-, y- and z-axes, so an up-sampling algorithm that seeks to preserve the shape properties of the tissue was developed to unify the resolutions across the axes. In the last phase of creating the 3D models, standard methods were used, but the smoothing and decimating methods have different settings (the number of polygons in the model, the number of iterations of the algorithm). As the aim of this thesis is to obtain the most accurate model possible of the real tissue, it was necessary to establish an objective method for setting the algorithms so as to achieve the best compromise between distortion and model credibility.
29

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.

Full text of the source
Abstract:
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual one coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal for arbitrarily small accuracy. Finally, the support vector regression task arises as a particular case of the general optimization problem and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
30

Rau, Christian, and rau@maths anu edu au. "Curve Estimation and Signal Discrimination in Spatial Problems." The Australian National University. School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.

Full text of the source
Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered as our main contribution. Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners. Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
31

Agathangelou, Marios Christaki. "Contour smoothing in segmented images for object-based compression." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286288.

Full text of the source
32

Li, Jian-Cheng. "Generation of simulated ultrasound images using a Gaussian smoothing function." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179261418.

Full text of the source
33

Jibai, Nassim. "Multiscale Feature-Preserving Smoothing of Images and Volumes on the GPU." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00748064.

Full text of the source
Abstract:
Two-dimensional images and three-dimensional volumes have become a staple ingredient of our artistic, cultural, and scientific appetite. Images capture and immortalize an instant, such as a natural scene, through a photographic camera. Moreover, they can capture details inside biological subjects through the use of CT (computed tomography) scans, X-rays, ultrasound, etc. Three-dimensional volumes of objects are also of high interest in medical imaging, engineering, and analyzing cultural heritage. They are produced using tomographic reconstruction, a technique that combines a large series of 2D scans captured from multiple views. Typically, penetrative radiation is used to obtain each 2D scan: X-rays for CT scans, radio-frequency waves for MRI (magnetic resonance imaging), electron-positron annihilation for PET scans, etc. Unfortunately, their acquisition is influenced by noise caused by different factors. Noise in two-dimensional images can be caused by low-light illumination, electronic defects, low doses of radiation, and a mispositioned tool or object. Noise in three-dimensional volumes also comes from a variety of sources: the limited number of views, lack of sensor sensitivity, high contrasts, the reconstruction algorithms, etc. The constraint that data acquisition be noiseless is unrealistic. It is desirable to reduce, or eliminate, noise at the earliest stage in the application. However, removing noise while preserving the sharp features of an image or volume object remains a challenging task. We propose a multi-scale method to smooth 2D images and 3D tomographic data while preserving features at a specified scale. Our algorithm is controlled using a single user parameter: the minimum scale of features to be preserved. Any variation that is smaller than the specified scale is treated as noise and smoothed, while discontinuities such as corners, edges and detail at a larger scale are preserved. We demonstrate that our smoothed data produces clean images and clean contour surfaces of volumes using standard surface-extraction algorithms. In addition, we compare our results with those of previous approaches. Our method is inspired by anisotropic diffusion. We compute our diffusion tensors from the local continuous histograms of gradients around each pixel in image
34

Jibai, Nassim. "Multi-scale Feature-Preserving Smoothing of Images and Volumes on GPU." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM025/document.

Повний текст джерела
Анотація:
Les images et données volumiques sont devenues importantes dans notre vie quotidienne, que ce soit sur le plan artistique, culturel ou scientifique. Les données volumiques ont un intérêt important dans l'imagerie médicale, l'ingénierie et l'analyse du patrimoine culturel. Elles sont créées en utilisant la reconstruction tomographique, une technique qui combine une large série de scans 2D capturés de plusieurs points de vue. Chaque scan 2D est obtenu par des méthodes de rayonnement : rayons X pour les scanners CT, ondes radiofréquences pour les IRM, annihilation électron-positron pour les PET scans, etc. L'acquisition des images et données volumiques est influencée par le bruit provoqué par différents facteurs. Le bruit dans les images peut être causé par un manque d'éclairage, des défauts électroniques, une faible dose de rayonnement, et un mauvais positionnement de l'outil ou de l'objet. Le bruit dans les données volumiques peut aussi provenir d'une variété de sources : le nombre limité de points de vue, le manque de sensibilité des capteurs, des contrastes élevés, les algorithmes de reconstruction employés, etc. L'acquisition de données non bruitées est irréalisable. Il est donc souhaitable de réduire ou d'éliminer le bruit le plus tôt possible dans le pipeline. La suppression du bruit tout en préservant les caractéristiques fortes d'une image ou d'un objet volumique reste une tâche difficile. Nous proposons une méthode multi-échelle pour lisser des images 2D et des données tomographiques 3D tout en préservant les caractéristiques à l'échelle spécifiée. Notre algorithme est contrôlé par un seul paramètre – la taille des caractéristiques qui doivent être préservées. Toute variation plus petite que l'échelle spécifiée est traitée comme du bruit et lissée, tandis que les discontinuités telles que les coins, les bords et les détails à plus grande échelle sont conservées. Nous démontrons que les données lissées produites par notre algorithme permettent d'obtenir des images nettes et des iso-surfaces plus propres. Nous comparons nos résultats avec ceux des méthodes précédentes. Notre méthode est inspirée par la diffusion anisotrope. Nous calculons nos tenseurs de diffusion à partir des histogrammes continus locaux de gradients autour de chaque pixel dans les images et autour de chaque voxel dans les volumes. Comme notre méthode de lissage fonctionne entièrement sur GPU, elle est extrêmement rapide
Two-dimensional images and three-dimensional volumes have become a staple ingredient of our artistic, cultural, and scientific appetite. Images capture and immortalize an instant, such as a natural scene, through a photographic camera. Moreover, they can capture details inside biological subjects through the use of CT (computed tomography) scans, X-rays, ultrasound, etc. Three-dimensional volumes of objects are also of high interest in medical imaging, engineering, and the analysis of cultural heritage. They are produced using tomographic reconstruction, a technique that combines a large series of 2D scans captured from multiple views. Typically, penetrative radiation is used to obtain each 2D scan: X-rays for CT scans, radio-frequency waves for MRI (magnetic resonance imaging), electron-positron annihilation for PET scans, etc. Unfortunately, their acquisition is influenced by noise caused by different factors. Noise in two-dimensional images can be caused by low-light illumination, electronic defects, a low dose of radiation, and a mispositioned tool or object. Noise in three-dimensional volumes also comes from a variety of sources: the limited number of views, lack of sensor sensitivity, high contrasts, the reconstruction algorithms, etc. The constraint that data acquisition be noiseless is unrealistic. It is desirable to reduce, or eliminate, noise at the earliest stage in the application. However, removing noise while preserving the sharp features of an image or volume object remains a challenging task. We propose a multi-scale method to smooth 2D images and 3D tomographic data while preserving features at a specified scale. Our algorithm is controlled using a single user parameter – the minimum scale of features to be preserved. Any variation that is smaller than the specified scale is treated as noise and smoothed, while discontinuities such as corners, edges and detail at a larger scale are preserved. We demonstrate that our smoothed data produce clean images and clean contour surfaces of volumes using standard surface-extraction algorithms. In addition, we compare our results with those of previous approaches. Our method is inspired by anisotropic diffusion. We compute our diffusion tensors from the local continuous histograms of gradients around each pixel in images and around each voxel in volumes. Since our smoothing method runs entirely on the GPU, it is extremely fast
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Jiang, Long yu. "Séparation et détection des trajets dans un guide d'onde en eau peu profonde." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00765238.

Повний текст джерела
Анотація:
In underwater acoustics, the study of shallow-water zones has again become strategic. This thesis addresses path separation and detection in the context of shallow-water ocean acoustic tomography. In a first step of our work, we give a brief overview of existing underwater acoustic processing techniques in order to identify the difficulties that this type of method still faces. We conclude that it is still necessary to improve the separation resolution in order to provide more useful information for the inversion step of ocean acoustic tomography. An investigation of high-resolution methods is therefore carried out. Finally, we propose a high-resolution method called smoothing MUSICAL (MUSIC Active Large band), which combines spatial-frequency smoothing with the MUSICAL algorithm, for effective separation of coherent or fully correlated paths. However, this method relies on a priori knowledge of the number of paths. We therefore introduce an exponential fitting test (EFT) using short sample lengths to determine the number of paths. Both methods are applied to synthetic data and to real data acquired in a small-scale tank, and their performance is compared with the relevant conventional methods.
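Spatial smoothing is the key ingredient that lets a subspace method such as MUSIC separate coherent arrivals. The sketch below shows narrowband MUSIC with forward spatial smoothing on a uniform linear array; it is only meant to illustrate how sub-array averaging restores the rank of the covariance matrix, and is not the wideband smoothing-MUSICAL estimator or the EFT order-selection test of the thesis. The function name, the half-wavelength spacing and the sub-array length are assumptions.

# Narrowband MUSIC with forward spatial smoothing on a uniform linear array (illustrative sketch).
import numpy as np

def smoothed_music_spectrum(x, n_sub, n_src, angles_deg, d=0.5):
    """x: (n_sensors, n_snapshots) complex data; n_sub: sub-array length; d: spacing in wavelengths."""
    n_sensors = x.shape[0]
    n_blocks = n_sensors - n_sub + 1
    R = np.zeros((n_sub, n_sub), dtype=complex)
    for k in range(n_blocks):                      # forward spatial smoothing: average sub-array covariances
        xk = x[k:k + n_sub, :]
        R += xk @ xk.conj().T / x.shape[1]
    R /= n_blocks
    w, v = np.linalg.eigh(R)                       # eigenvalues in ascending order
    En = v[:, :n_sub - n_src]                      # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_sub) * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)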
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Pizarro, Luis [Verfasser], and Joachim [Akademischer Betreuer] Weickert. "Nonlocal smoothing and adaptive morphology for scalar- and matrix-valued images / Luis Pizarro. Betreuer: Joachim Weickert." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051279585/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Lindeberg, Tony. "Discrete Scale-Space Theory and the Scale-Space Primal Sketch." Doctoral thesis, KTH, Numerisk analys och datalogi, NADA, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-58570.

Повний текст джерела
Анотація:
This thesis, within the subfield of computer science known as computer vision, deals with the use of scale-space analysis in early low-level processing of visual information. The main contributions comprise the following five subjects:

The formulation of a scale-space theory for discrete signals. Previously, the scale-space concept has been expressed for continuous signals only. We propose that the canonical way to construct a scale-space for discrete signals is by convolution with a kernel called the discrete analogue of the Gaussian kernel, or equivalently by solving a semi-discretized version of the diffusion equation. Both the one-dimensional and two-dimensional cases are covered. An extensive analysis of discrete smoothing kernels is carried out for one-dimensional signals, and the discrete scale-space properties of the most common discretizations of the continuous theory are analysed.

A representation, called the scale-space primal sketch, which gives a formal description of the hierarchical relations between structures at different levels of scale. It is aimed at making information in the scale-space representation explicit. We give a theory for its construction and an algorithm for computing it.

A theory for extracting significant image structures and determining the scales of these structures from this representation in a solely bottom-up, data-driven way.

Examples demonstrating how such qualitative information extracted from the scale-space primal sketch can be used for guiding and simplifying other early visual processes. Applications are given to edge detection, histogram analysis and classification based on local features. Among other possible applications one can mention perceptual grouping, texture analysis, stereo matching, model matching and motion.

A detailed theoretical analysis of the evolution properties of critical points and blobs in scale-space, comprising drift velocity estimates under scale-space smoothing, a classification of the possible types of generic events at bifurcation situations, and estimates of how the number of local extrema in a signal can be expected to decrease as a function of the scale parameter. For two-dimensional signals the generic bifurcation events are annihilations and creations of extremum-saddle point pairs. Interpreted in terms of blobs, these transitions correspond to annihilations, merges, splits and creations.

Experiments on different types of real imagery demonstrate that the proposed theory gives perceptually intuitive results.
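The first contribution above states that discrete scale-space smoothing amounts to convolution with the discrete analogue of the Gaussian kernel, T(n; t) = exp(-t) I_n(t), with I_n the modified Bessel function of integer order. A minimal 1-D sketch of that operation follows; the kernel radius, the renormalisation of the truncated kernel and the use of scipy.special.ive are implementation choices made here for illustration, not prescriptions from the thesis.

# 1-D smoothing with the discrete analogue of the Gaussian kernel (illustrative sketch).
import numpy as np
from scipy.special import ive

def discrete_gaussian_smooth(signal, t, radius=None):
    """Smooth a 1-D signal at scale t (variance) with the discrete Gaussian kernel."""
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1   # enough support for the chosen scale
    n = np.arange(-radius, radius + 1)
    kernel = ive(n, t)                              # ive(n, t) = exp(-t) * I_n(t) for t > 0
    kernel /= kernel.sum()                          # renormalise the truncated kernel
    return np.convolve(signal, kernel, mode='same')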


Стилі APA, Harvard, Vancouver, ISO та ін.
38

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Повний текст джерела
Анотація:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
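Fixed-interval smoothing of the kind evaluated above is typically a forward Kalman filter followed by a backward Rauch-Tung-Striebel pass. The sketch below shows that structure for a 1-D constant-velocity model; the matrices, noise levels and state layout are illustrative assumptions and do not reproduce the thesis's multi-sensor tracker.

# Rauch-Tung-Striebel (fixed-interval) smoother for a 1-D constant-velocity model (illustrative sketch).
import numpy as np

def kalman_rts(zs, dt=0.1, q=1.0, r=0.5):
    F = np.array([[1.0, dt], [0.0, 1.0]])               # constant-velocity transition
    H = np.array([[1.0, 0.0]])                           # position-only measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10.0
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:                                          # forward Kalman filter
        xp, Pp = F @ x, F @ P @ F.T + Q
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]
    for k in range(len(zs) - 2, -1, -1):                  # backward RTS pass
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_s), np.array(Ps_s)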
Стилі APA, Harvard, Vancouver, ISO та ін.
39

"Edge-enhancing image smoothing." 2011. http://library.cuhk.edu.hk/record=b5894822.

Повний текст джерела
Анотація:
Xu, Yi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 62-69).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Organization --- p.4
Chapter 2 --- Background and Motivation --- p.7
Chapter 2.1 --- 1D Mondrian Smoothing --- p.9
Chapter 2.2 --- 2D Formulation --- p.13
Chapter 3 --- Solver --- p.16
Chapter 3.1 --- More Analysis --- p.20
Chapter 4 --- Edge Extraction --- p.26
Chapter 4.1 --- Related work --- p.26
Chapter 4.2 --- Method and Results --- p.28
Chapter 4.3 --- Summary --- p.32
Chapter 5 --- Image Abstraction and Pencil Sketching --- p.35
Chapter 5.1 --- Related Work --- p.35
Chapter 5.2 --- Method and Results --- p.36
Chapter 5.3 --- Summary --- p.40
Chapter 6 --- Clip-Art Compression Artifact Removal --- p.41
Chapter 6.1 --- Related work --- p.41
Chapter 6.2 --- Method and Results --- p.43
Chapter 6.3 --- Summary --- p.46
Chapter 7 --- Layer-Based Contrast Manipulation --- p.49
Chapter 7.1 --- Related Work --- p.49
Chapter 7.2 --- Method and Results --- p.50
Chapter 7.2.1 --- Edge Adjustment --- p.51
Chapter 7.2.2 --- Detail Magnification --- p.54
Chapter 7.2.3 --- Tone Mapping --- p.55
Chapter 7.3 --- Summary --- p.56
Chapter 8 --- Conclusion and Discussion --- p.59
Bibliography --- p.61
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Lumsdaine, A., J. L. Jr Wyatt, and I. M. Elfadel. "Nonlinear Analog Networks for Image Smoothing and Segmentation." 1991. http://hdl.handle.net/1721.1/5983.

Повний текст джерела
Анотація:
Image smoothing and segmentation algorithms are frequently formulated as optimization problems. Linear and nonlinear (reciprocal) resistive networks have solutions characterized by an extremum principle. Thus, appropriately designed networks can automatically solve certain smoothing and segmentation problems in robot vision. This paper considers switched linear resistive networks and nonlinear resistive networks for such tasks. The latter network type is derived from the former via an intermediate stochastic formulation, and a new result relating the solution sets of the two is given for the "zero temperature" limit. We then present simulation studies of several continuation methods that can be gracefully implemented in analog VLSI and that seem to give "good" results for these non-convex optimization problems.
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Chia-Yung, Jui. "The Application of Diffusion Equation in Image Smoothing." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0005-1301200723214300.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Lin, Pei-Chien, and 林沛鑑. "A New Edge Smoothing Method for Image Enlargement." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88959066313828085126.

Повний текст джерела
Анотація:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
99
Image up-scaling is an important part of digital image processing. Many algorithms have been proposed to enlarge images; the nearest-neighbor interpolation, the bilinear interpolation, and the bicubic interpolation are three important algorithms among them. However, these algorithms still suffer from the drawbacks of blurring and/or blocking artifacts in edge portions of the enlarged image. In this thesis, a new method for enlarging images is proposed. This method tries to reduce the blurring and blocking effects by first using an existing image up-scaling method, such as nearest-neighbor or bilinear interpolation, to enlarge the image, and then smoothing the edge portions of the enlarged image. The Canny edge detector is used to find the edges of the image. Experimental results show that the proposed method performs better on the edge portions of the enlarged image.
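A rough sketch of the two-step idea described above is given below: enlarge with bilinear interpolation, locate edges with the Canny detector, then smooth only near the detected edges. The Canny thresholds, blur size and dilation radius are guesses for illustration; the thesis's actual edge-smoothing rule is not reproduced here.

# Enlarge-then-smooth-edges sketch with OpenCV; parameter values are illustrative only.
import cv2
import numpy as np

def enlarge_and_smooth_edges(img, scale=2):
    up = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(up, cv2.COLOR_BGR2GRAY) if up.ndim == 3 else up
    edges = cv2.Canny(gray, 50, 150)                          # binary edge map of the enlarged image
    mask = cv2.dilate(edges, np.ones((3, 3), np.uint8)) > 0   # widen edges into a small band
    blurred = cv2.GaussianBlur(up, (5, 5), 0)
    out = up.copy()
    out[mask] = blurred[mask]                                 # smooth only the edge band
    return out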
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Jui, Chia-Yung, and 芮嘉勇. "The Application of Diffusion Equation in Image Smoothing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/85134048276396242407.

Повний текст джерела
Анотація:
Master's thesis
National Chung Hsing University
Department of Applied Mathematics
95
In this thesis we present the application of the diffusion equation to image smoothing. The nonlinear diffusion equation has been shown to smooth the image while retaining edges at the same time. Here, we propose the convection-diffusion equation as an extension of the nonlinear equation. A modified finite element scheme is presented to prevent the numerical oscillation caused by discontinuous solutions. Numerical results show the advantage of the image smoothing algorithm that uses the convection-diffusion equation with the modified finite element scheme.
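For reference, the two model equations involved can be written as follows; this is the generic textbook form, with an unspecified velocity field and edge-stopping function, and not the particular convection term or finite element discretisation developed in the thesis.

% Generic forms only; the thesis's specific convection term and scheme are not reproduced here.
\begin{align}
  \partial_t u &= \nabla \cdot \bigl( g(\lvert \nabla u \rvert)\, \nabla u \bigr)
  && \text{(nonlinear diffusion)},\\
  \partial_t u + \mathbf{v} \cdot \nabla u &= \nabla \cdot \bigl( g(\lvert \nabla u \rvert)\, \nabla u \bigr)
  && \text{(convection--diffusion)},
\end{align}
with $u(\cdot,0) = u_0$ the noisy image and, for example, $g(s) = 1/(1 + s^2/\kappa^2)$ a decreasing edge-stopping function.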
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Shr, Yu-Chin, and 史玉琴. "Color Image Compression by Adaptive Region Growing and False Contours Smoothing." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/20889090488658299883.

Повний текст джерела
Анотація:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
86
In this thesis, we propose a segmentation-based compression method for color images. First, we split the image by bottom-up quadtree decomposition and then merge regions by adaptive region growing. After that, the contours and textures of the segmented regions are encoded. We encode the region texture by polynomial approximation and encode the region contour using a chain code. Finally, because there is some distortion in the reconstructed images, we detect the false contours and apply our smoothing algorithm to obtain a better image. Experimental results show that most of the complex textures can be reconstructed, and after the postprocessing we usually obtain better image quality.
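The contour-coding step mentioned above can be pictured with a toy Freeman chain-code encoder, shown below. The 8-direction numbering and the assumption that the contour is already an ordered list of boundary pixels are illustrative choices; the quadtree split, region growing, polynomial texture model and false-contour smoothing of the thesis are not shown.

# Toy 8-connected Freeman chain-code encoder for an ordered contour of (row, col) points.
# Direction indices 0..7 run counter-clockwise from east (in image coordinates).
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(contour):
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        codes.append(DIRS[(r1 - r0, c1 - c0)])   # step between consecutive boundary pixels
    return codes

# example: a tiny 2x2 square traversed clockwise
print(chain_code([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # [0, 6, 4, 2]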
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Hillebrand, Martin [Verfasser]. "On robust corner preserving smoothing in image processing / von Martin Hillebrand." 2003. http://d-nb.info/967514444/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Bashala, Jenny Mwilambwe. "Development of a new image compression technique using a grid smoothing technique." 2013. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001112.

Повний текст джерела
Анотація:
M. Tech. Electrical Engineering.
Aims to implement a lossy image compression scheme that uses a graph-based approach. On the one hand, this new method should reach high compression rates with good visual quality; on the other hand, it leads to the following sub-problems: efficient classification of image data with the use of bilateral mesh filtering; transformation of the image into a graph by grid smoothing; reduction of the graph by means of mesh decimation techniques; reconstruction of the reduced graph into an image; and quality analysis of the reconstructed images.
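The "image into a graph" step named above can be pictured as turning every pixel into a node and connecting 4-neighbours with similarity weights, as in the toy sketch below. The Gaussian weight on intensity differences and the parameter sigma are assumptions for illustration; the bilateral mesh filtering, grid smoothing and decimation stages of the dissertation are not shown.

# Toy image-to-grid-graph conversion: pixel nodes, 4-neighbour edges weighted by similarity.
import numpy as np

def image_to_grid_graph(img, sigma=10.0):
    """Return lists of nodes (r, c) and weighted edges ((r, c), (r2, c2), w) for a grayscale image."""
    h, w = img.shape
    nodes = [(r, c) for r in range(h) for c in range(w)]
    edges = []
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):              # right and down neighbours only
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    diff = float(img[r, c]) - float(img[r2, c2])
                    edges.append(((r, c), (r2, c2), np.exp(-(diff / sigma) ** 2)))
    return nodes, edges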
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Lo, Chung-Ming, and 羅崇銘. "Region-based image retrieval system using perceptual smoothing and region adjacency graph." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/70173868280600288134.

Повний текст джерела
Анотація:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
91
To improve accuracy in content-based image retrieval, region-based retrieval is adopted, in keeping with the way humans perceive objects. The proposed region-based image retrieval utilizes not only the dominant features of each region but also the correlations between neighboring regions. The improved drainage watershed transformation is used to provide effective and accurate object segmentation. In order to be more sensitive to color variation, color images are partitioned by extracting features from the HSV color space. Moreover, small details and noise are removed by perceptual smoothing to reduce oversegmentation. Then, through the region adjacency graph (RAG), each region is characterized by its salient low-level features and the whole image is represented by a semantic, high-level description. In the graph matching procedure, retrieval is performed by comparing the RAG of the query image with that of each database image. In addition, a simple and efficient subgraph isomorphism algorithm is adopted to reduce comparison time. Experimental results demonstrate and evaluate the performance of the proposed image retrieval system.
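A small sketch of building a region adjacency graph from a label image, the representation the retrieval system above matches between images, is given below. The mean-colour node attribute and the 4-connectivity adjacency rule are illustrative choices; the perceptual smoothing, watershed segmentation and subgraph matching steps are not shown.

# Build a region adjacency graph (nodes with mean colour, undirected edge set) from a label image.
import numpy as np

def build_rag(labels, image):
    """labels: (H, W) int region map; image: (H, W, 3) colours. Returns node attributes and edges."""
    nodes = {}
    for lab in np.unique(labels):
        nodes[int(lab)] = image[labels == lab].mean(axis=0)   # mean colour per region
    edges = set()
    # horizontally and vertically adjacent pixels with different labels define an edge
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            edges.add((int(min(u, v)), int(max(u, v))))
    return nodes, edges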
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Wu, Wei-Chen, and 吳維宸. "Edge Curve Scaling and Smoothing with Cubic Spline Interpolation for Image Up-Scaling." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/00072826594831767376.

Повний текст джерела
Анотація:
Master's thesis
National Tsing Hua University
Department of Computer Science
101
Image up-scaling is an important technique for increasing the resolution of an image. Earlier interpolation-based approaches such as the bilinear and bicubic methods cause blurring and ringing artifacts in edge regions of the up-scaled image due to the loss of high-frequency details. Recent approaches such as local self-example super resolution achieve very promising up-scaling results, but their computation cost is high because they recover the high-frequency components of the whole image. In this thesis, we propose an image up-scaling method based on an up-scaled edge map. By predicting the edge regions of the up-scaled image, we recover the high-frequency components only in those regions, improving sharpness and reducing ringing artifacts. We propose an edge curve scaling method with cubic spline interpolation to up-scale an edge map. If an edge curve is fed directly to the cubic spline interpolation function for up-scaling, the result exhibits zigzag artifacts, so we also propose a simple smoothing function that avoids the zigzag problem while maintaining the contour shape. Our method reduces execution time by 90% because we perform high-frequency recovery only on edge regions, whereas other methods recover the high-frequency components at every point of the up-scaled image. Experimental results show that we achieve performance similar to the local self-example super resolution method.
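A minimal sketch of scaling an edge curve with cubic-spline interpolation and lightly smoothing it, in the spirit of the approach above, is given below. The index parameterisation, the 3-point averaging and all constants are assumptions; the thesis's anti-zigzag rule is not reproduced.

# Up-scale an ordered edge curve with cubic splines, then lightly smooth the interior samples.
import numpy as np
from scipy.interpolate import CubicSpline

def scale_edge_curve(points, scale=2.0):
    """points: (N, 2) ordered edge-pixel coordinates of one curve."""
    pts = np.asarray(points, dtype=float)
    t = np.arange(len(pts))                               # simple index parameterisation
    cs_x = CubicSpline(t, pts[:, 0])
    cs_y = CubicSpline(t, pts[:, 1])
    t_new = np.linspace(0, len(pts) - 1, int(len(pts) * scale))
    curve = np.stack([cs_x(t_new), cs_y(t_new)], axis=1) * scale
    # light 3-point averaging of interior samples to suppress zigzag artifacts
    curve[1:-1] = (curve[:-2] + curve[1:-1] + curve[2:]) / 3.0
    return curve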
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Gao, Song. "A new image segmentation and smoothing method based on the Mumford-Shah variational model." Thesis, 2003. http://spectrum.library.concordia.ca/2395/1/MQ91033.pdf.

Повний текст джерела
Анотація:
Recently, Chan and Vese have developed an active contour model for image segmentation and smoothing. Tsai et al. have also developed a similar approach independently. In this thesis, we develop a new hierarchical method which has many advantages compared to the Chan and Vese multiphase active contour models. First, unlike previous works, the curve evolution partial differential equations (PDEs) for different level set functions are decoupled. Each curve evolution PDE is the equation of motion of just one level set function, and the different level set equations of motion are solved in a hierarchy. This decoupling of the motion equations of the level set functions speeds up the segmentation process significantly. Second, because of the coupling of the curve evolution equations associated with different level set functions, the initialization of the level sets in Chan and Vese's method is difficult to handle. The hierarchical method proposed in this thesis avoids this problem with the choice of initial conditions. Third, we use the diffusion equation for denoising, so the method can deal with very noisy images. In general, our method is fast, flexible, not sensitive to the choice of initial conditions, and produces very good results.
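For reference, the standard two-phase Chan-Vese energy on which such hierarchical formulations build can be written as below, omitting the optional area term; the decoupled multi-level-set equations of the thesis itself are not reproduced here.

% Standard two-phase Chan--Vese energy (area term omitted); not the thesis's hierarchical formulation.
\begin{equation}
  E(c_1, c_2, C) \;=\; \mu \,\mathrm{Length}(C)
  \;+\; \lambda_1 \int_{\mathrm{inside}(C)} \lvert u_0(x) - c_1 \rvert^2 \, dx
  \;+\; \lambda_2 \int_{\mathrm{outside}(C)} \lvert u_0(x) - c_2 \rvert^2 \, dx ,
\end{equation}
where $u_0$ is the observed image, $C$ the evolving contour (the zero level set), and $c_1, c_2$ the mean intensities inside and outside $C$.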
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Pao, Shun-An, and 包順安. "Using Image Smoothing and Feature Extraction to Improve the Classification Accuracy of Hyperspectral Imagery." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/08455857999516805429.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Surveying Engineering
87
Multispectral sensors have been widely used to observe the Earth's surface since the 1960s. However, traditional sensors were limited, due to inadequate sensor technology, to collecting spectral data in fewer than 20 bands. In recent years, spectral image sensors have improved to the point of collecting data in several hundred bands; these are called hyperspectral image scanners. For example, the AVIRIS scanner developed by JPL of NASA provides 224 contiguous spectral channels. Theoretically, using hyperspectral images should increase our ability to classify land use/cover types. However, the classification approach that was successfully applied to multispectral data in the past is not as effective for hyperspectral data. As the dimensionality increases, the number of training samples needed to characterize the classes increases as well. If the number of available training samples fails to keep up with this need, which is the case for hyperspectral data, parameter estimation becomes inaccurate. The classification accuracy first grows and then declines as the number of spectral bands increases, which is often referred to as the Hughes phenomenon. Generally speaking, classification performance depends on four factors: class separability, the training sample size, dimensionality, and classifier type. To improve classification performance, attention is often focused on factors other than class separability, because class separability is usually considered inherent and predetermined. The objective of this thesis is to call attention to the fact that class separability can be increased. The lowpass filter is proposed as a means of increasing class separability when a data set consists of multi-pixel objects. By employing feature extraction, the number of features can be reduced substantially without sacrificing significant information. This thesis reviews several feature extraction methods that have been developed to speed up the process and increase the precision of classification. For the classifier, we use the Gaussian maximum likelihood classifier. Our experiments show that when the number of training samples is relatively small compared to the dimensionality, maximum likelihood estimates of the parameters have large variances, leading to a large classification error. The lowpass spatial filter can increase class separability and classification accuracy. Good results are obtained when feature extraction is employed with a ratio of training sample size to dimensionality of 4. Feature extraction combined with the lowpass spatial filter increases classification accuracy even more.
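A compact sketch of the processing chain discussed above is given below: spatial lowpass filtering of every band, dimensionality reduction, and a Gaussian maximum likelihood classifier (here via scikit-learn's QDA, which fits one Gaussian per class). PCA stands in for the thesis's feature extraction method, and all parameters are illustrative assumptions.

# Lowpass filtering + feature reduction + Gaussian ML classification of a hyperspectral cube (sketch).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def classify_hyperspectral(cube, train_mask, train_labels, n_features=10, win=3):
    """cube: (H, W, B) image; train_mask: (H, W) bool; train_labels: labels of the masked pixels."""
    smoothed = uniform_filter(cube, size=(win, win, 1))        # band-wise spatial lowpass filter
    X = smoothed.reshape(-1, cube.shape[2])
    pca = PCA(n_components=n_features).fit(X[train_mask.ravel()])
    Xr = pca.transform(X)
    clf = QuadraticDiscriminantAnalysis().fit(Xr[train_mask.ravel()], train_labels)
    return clf.predict(Xr).reshape(cube.shape[:2])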
Стилі APA, Harvard, Vancouver, ISO та ін.