
Dissertations / Theses on the topic 'Multiresolution analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Multiresolution analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Brennan, Victor L. "Principal component analysis with multiresolution." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/ank7079/brennan%5Fdissertation.pdf.

Abstract:
Thesis (Ph. D.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also contains graphics. Vita. Includes bibliographical references (p. 120-123).
2

Kim, Yong Ku. "Bayesian multiresolution dynamic models." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180465799.

3

Carter, Duane B. "Analysis of Multiresolution Data Fusion Techniques." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36609.

Abstract:
In recent years, as the availability of remote sensing imagery of varying resolution has increased, merging images of differing spatial resolution has become a significant operation in the field of digital remote sensing. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher resolution panchromatic image of the same geographic area. This study examines properties of fused images and their ability to preserve the spectral integrity of the original image. It analyzes five current data fusion techniques for three complex scenes to assess their performance. The five data fusion models used include one spatial domain model (High-Pass Filter), two algebraic models (Multiplicative and Brovey Transform), and two spectral domain models (Principal Components Transform and Intensity-Hue-Saturation). SPOT data were chosen for both the panchromatic and multispectral data sets. These data sets were chosen for the high spatial resolution of the panchromatic (10 meters) data, the relatively high spectral resolution of the multispectral data, and the low spatial resolution ratio of two to one (2:1). After the application of the data fusion techniques, each merged image was analyzed statistically, graphically, and for increased photointerpretive potential as compared with the original multispectral images. While all of the data fusion models distorted the original multispectral imagery to an extent, both the Intensity-Hue-Saturation Model and the High-Pass Filter model maintained the original qualities of the multispectral imagery to an acceptable level. The High-Pass Filter model, designed to highlight the high frequency spatial information, provided the most noticeable increase in spatial resolution.
Master of Science
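For reference, the Brovey transform named among the algebraic models above has a standard closed form; the version below uses the common three-band convention, with band indices chosen for illustration rather than taken from the thesis:

\[ \mathrm{DN}^{\text{fused}}_i = \frac{\mathrm{DN}_i}{\sum_{k=1}^{3} \mathrm{DN}_k}\, \mathrm{DN}_{\text{pan}}, \qquad i = 1, 2, 3, \]

where DN_i is the digital number of multispectral band i resampled to the panchromatic grid and DN_pan is the co-registered panchromatic value. The multiplicative model replaces the ratio by a plain product of each band with the panchromatic image.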
4

Srinivas, Sushma. "Detecting Vulnerable Plaques with Multiresolution Analysis." Cleveland State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=csu1326932229.

5

Kruger, Stefan A. "Motion analysis and estimation using multiresolution affine models." Thesis, University of Bristol, 1998. http://hdl.handle.net/1983/f1c3201e-cc47-4064-a897-5264498767bf.

6

Romero, Juan R. "Generalized multiresolution analysis: construction and measure theoretic characterization." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2942.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
7

Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0028.

Abstract:
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist’s primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of a urologist and is still difficult and subjective. To overcome the subjective interpretation and facilitate objective diagnosis, computer aided analysis of ultrasound images of the prostate would be very helpful. Computer aided analysis of images may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis is an attempt to address several key elements of computer aided analysis of ultrasound images of the prostate. Specifically, it addresses the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting the prostate contour. Speckle refers to the granular appearance that compromises the image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Due to the existence of speckle, the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which provides not only an insight into the nature of speckle, but also a viable test-bed for any ultrasound speckle reduction method. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed for analysing the statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, which is based on Mallat and Zhong’s dyadic wavelet transform (MZDWT) and on modelling the statistical properties of the wavelet coefficients and exploiting their inter-scale correlation. Specifically, the squared moduli of the component wavelet coefficients are modelled as a two-state Gamma mixture. Inter-scale correlation is exploited by taking the harmonic mean of the posterior probability functions, which are derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory in that the important features of the original noise-corrupted image are preserved while most of the speckle noise is removed successfully. It is also evaluated both qualitatively and quantitatively by comparing it with median, Wiener, and Lee filters, and the results reveal that it surpasses all of these filters. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in a multi-scale framework provided by MZDWT. Specifically, the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and thus are “switchable”. Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate.
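As a rough illustration of the inter-scale fusion step described in this abstract, the sketch below combines per-scale posterior probabilities of 'signal' by a harmonic mean. The two-state Gamma mixture parameters, the prior, and all names are illustrative assumptions, not values from the thesis.

import numpy as np
from scipy.stats import gamma

def posterior_signal(w2, shape_n, scale_n, shape_s, scale_s, prior_s=0.5):
    # Posterior P(signal | squared wavelet modulus) under a two-state Gamma mixture.
    p_noise = gamma.pdf(w2, shape_n, scale=scale_n) * (1.0 - prior_s)
    p_signal = gamma.pdf(w2, shape_s, scale=scale_s) * prior_s
    return p_signal / (p_signal + p_noise + 1e-12)

def fuse_scales(posteriors):
    # Harmonic mean across scales (axis 0): stays small unless all scales agree on 'signal'.
    p = np.asarray(posteriors)
    return p.shape[0] / np.sum(1.0 / (p + 1e-12), axis=0)

# Posteriors at two adjacent scales for the same coefficient positions.
p1 = posterior_signal(np.array([0.1, 4.0, 9.0]), 1.0, 0.5, 3.0, 2.0)
p2 = posterior_signal(np.array([0.2, 3.5, 8.0]), 1.0, 0.5, 3.0, 2.0)
mask = fuse_scales([p1, p2])  # attenuate coefficients where mask is small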
8

Yang, Qingde. "Multiresolution analysis on non-abelian locally compact groups." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0018/NQ43523.pdf.

9

Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia, 2003. http://theses.library.uwa.edu.au/adt-WU2004.0028.

10

Davies, Andrew Richard. "Image feature analysis using the Multiresolution Fourier Transform." Thesis, University of Warwick, 1993. http://wrap.warwick.ac.uk/50490/.

Abstract:
The problem of identifying boundary contours or line structures is widely recognised as an important component in many applications of image analysis and computer vision. Typical solutions to the problem employ some form of edge detection followed by line following or, more commonly in recent years, Hough transforms. Because of the processing requirements of such methods and to try to improve the robustness of the algorithms, a number of authors have explored the use of multiresolution approaches to the problem. Non-parametric, iterative approaches such as relaxation labelling and "Snakes" have also been used. This thesis presents a boundary detection algorithm based on a multiresolution image representation, the Multiresolution Fourier Transform (MFT), which represents an image over a range of spatial/spatial-frequency resolutions. A quadtree-based image model is described in which each leaf is a region which can be modelled using one of a set of feature classes. Consideration is given to using linear and circular arc features for this modelling, and frequency domain models are developed for them. A general model-based decision process is presented and shown to be applicable to detecting local image features, selecting the most appropriate scale for modelling each region of the image and linking the local features into the region boundary structures of the image. The use of a consistent inference process for all of the subtasks used in the boundary detection represents a significant improvement over the ad hoc assemblies of estimation and detection that have been common in previous work. Although the process is applied using a restricted set of local features, the framework presented allows for expansion of the number of boundary feature models and the possible inclusion of models of region properties. Results are presented demonstrating the effective application of these procedures to a number of synthetic and natural images.
11

Lounsbery, John Michael. "Multiresolution analysis for surfaces of arbitrary topological type." Thesis, University of Washington, 1994. http://hdl.handle.net/1773/6998.

12

Carannante, Simona <1976>. "Multiresolution spherical wavelet analysis in global seismic tomography." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/871/1/Tesi_Carannante_Simona.pdf.

Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization, performed by means of wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for representation or characterization of the process; 2) as an integration kernel for analysis, to extract information about the process. These two types of wavelet application in the geophysical field are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
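Schematically, the parameterization question discussed in this abstract is which basis functions appear in the expansion of the velocity perturbation; in generic notation (not the thesis's), the wavelet choice reads

\[ \delta v(\mathbf{x}) = \sum_{j,n} c_{j,n}\, \psi_{j,n}(\mathbf{x}), \]

with the wavelets \( \psi_{j,n} \) taking the role played elsewhere by spherical harmonics or by the characteristic functions of discrete cells.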
13

Carannante, Simona <1976>. "Multiresolution spherical wavelet analysis in global seismic tomography." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/871/.

14

Eastlick, Mark Thomas. "Discrete differential geometry and an application in multiresolution analysis." Thesis, University of Sheffield, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434535.

15

Xu, Daoyi. "Texture analysis and synthesis using the multiresolution Fourier transform." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/107535/.

Abstract:
In this thesis, a new frequency domain approach to the analysis of texture is presented, in which both the statistical and structural aspects of the problem are combined in a unified framework, the Multiresolution Fourier Transform (MFT). The analysis scheme consists of two main components: texture synthesis and texture segmentation. The synthesis method works by identifying, for pairs of texture ‘patches’ of a given size, the affine co-ordinate transformation which gives the best match between them. This allows the analysis to take account of the geometric warping which is typically found in images of natural textures. By variation of scale, using the MFT, it is possible to identify the scale of the texels giving the best results in the matching process. The technique has the potential to deal effectively with textures having varying amounts of structure and can be used both for segmentation and for resynthesis of textures from a single prototype block. The texture segmentation makes use of the localisation properties of the MFT to detect texture boundaries using the MFT coefficient magnitudes, which incorporate both boundary and region information, in order to place texture boundaries accurately. A segmentation algorithm is described, starting with pre-smoothing to reduce texture fluctuations, followed by edge detection based on a Sobel operator to give an initial texture boundary estimate. Both boundary enhancement and region averaging are accomplished by adopting a ‘link probability function’ to introduce dependence on neighbouring data, allowing the boundary to be refined successively to achieve segmentation. The method effectively uses the spatial consistency of boundary estimates at larger scales to propagate more reliable information across scales to improve the accuracy of the boundary estimate. Experimental results are presented for a number of synthetic and natural images having varying degrees of structural regularity and show the efficacy of the methods.
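A brute-force version of the patch-matching idea in this abstract fits in a few lines: search a small grid of affine maps for the one minimizing the squared difference between two patches. The parameter grid and names are illustrative assumptions; the thesis performs the matching in the MFT domain rather than by direct spatial search.

import numpy as np
from scipy.ndimage import affine_transform

def best_affine_match(patch_a, patch_b, scales=(0.9, 1.0, 1.1), angles_deg=range(-20, 21, 5)):
    # Return the (scale, angle) warp of patch_b that best matches patch_a (SSD).
    best_params, best_ssd = None, np.inf
    centre = np.array(patch_b.shape) / 2.0
    for s in scales:
        for ang in angles_deg:
            t = np.deg2rad(ang)
            m = s * np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
            warped = affine_transform(patch_b, m, offset=centre - m @ centre, order=1)
            ssd = float(np.sum((patch_a - warped) ** 2))
            if ssd < best_ssd:
                best_params, best_ssd = (s, ang), ssd
    return best_params, best_ssd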
16

Grieb, Neal Phillip. "Multiresolution analysis for adaptive refinement of multiphase flow computations." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/677.

Abstract:
Flows around immersed boundaries exhibit many complex, well-defined and active dynamical structures. In fact, features such as shock waves, strong vorticity concentrations in shear layers, wakes, or boundary layer regions are critical elements in representing the dynamics of a flow field. In order to capture the correct kinematic and dynamic quantities associated with the fluid flow, one must be able to efficiently refine the computational mesh around areas containing high gradients of pressure, density, velocity, or other suitable flowfield variables that characterize distinct structures. Although there are techniques which utilize simple gradient-based Local Mesh Refinement (LMR) to adapt resolution selectively to capture structures in the flow, such methods lack the ability to refine structures based on the relative strengths and scales of the structures present in the flow. The inability to adequately define the strength and scale of structures typically results in the mesh being over-refined in regions of little consequence to the physical definition of the problem, under-refined in regions where important features are lost, or even in the emergence of false features due to perturbations in the flowfield caused by unnecessary mesh refinement. On the other hand, significant user judgment is required to develop a "good enough" mesh for a given flow problem, so that important structures in the flowfield can be resolved. In order to overcome this problem, multiresolution techniques based on the wavelet transform are explored for feature identification and refinement. Properties and current uses of these functional transforms in fluid flow computations are briefly discussed. A Multiresolution Transform (MRT) scheme is chosen for identifying coherent structures because of its ability to capture the scale and relative intensity of a structure, and its easy application on non-uniform meshes. The procedure used for implementation of the MRT on an octree/quadtree LMR mesh is discussed in detail, and techniques used for the identification and capture of jump discontinuities and scale information are also presented. High-speed compressible flow simulations are presented for a number of cases using the described MRT LMR scheme. MRT-based mesh refinement performance is analyzed, and further suggestions are made for refinement parameters based on the resulting refinement. The key contribution of this thesis is the identification of methods that lead to a robust, general (i.e. not requiring user-defined parameters) methodology to identify structures in compressible flows (shocks, slip lines, vortical patterns) and to direct refinement to adequately resolve these structures. The ENO-MRT LMR scheme is shown to be a robust, automatic, and relatively inexpensive way of gaining accurate refinement of the major features contained in the flowfield.
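The refinement criterion described above (flag a cell when its multiresolution detail coefficient is large) can be imitated in one dimension with an interpolating-wavelet detail: the deviation of each odd sample from the average of its even neighbours. The threshold and flat-array layout are assumptions; the thesis's MRT operates on quadtree/octree LMR meshes.

import numpy as np

def refine_flags(u, threshold=0.1):
    # Detail = deviation of odd samples from the mean of even neighbours; ~0 where u is smooth.
    d = np.abs(u[1:-1:2] - 0.5 * (u[0:-2:2] + u[2::2]))
    flags = np.zeros(u.size, dtype=bool)
    flags[1:-1:2] = d > threshold * (np.max(np.abs(u)) + 1e-12)
    flags[0:-2:2] |= flags[1:-1:2]   # flag the even neighbours too
    flags[2::2] |= flags[1:-1:2]
    return flags

# A smooth ramp with one jump (a 1-d stand-in for a shock) is flagged only at the jump.
u = np.concatenate([np.linspace(0.0, 1.0, 16), np.linspace(2.0, 3.0, 16)])
print(np.nonzero(refine_flags(u))[0])   # -> [14 15 16]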
17

Yi, Ju Y. "Definition and Construction of Entropy Satisfying Multiresolution Analysis (MRA)." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5057.

Abstract:
This paper considers some numerical schemes for the approximate solution of conservation laws, and various wavelet methods are reviewed. This is followed by the construction of wavelet spaces, based on a polynomial framework, for the approximate solution of conservation laws. A representation of the approximate solution in terms of an entropy-satisfying Multiresolution Analysis (MRA) is then defined. Finally, a proof of convergence of the approximate solution of conservation laws, using the characterization provided by the basis functions in the MRA, is given.
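For context, the objects in this abstract are conservation laws and the entropy condition their admissible weak solutions must satisfy; in standard notation (not the thesis's), with \( \eta \) a convex entropy and entropy flux \( q \) satisfying \( q' = \eta' f' \):

\[ u_t + f(u)_x = 0, \qquad \eta(u)_t + q(u)_x \le 0 \quad \text{(in the sense of distributions).} \]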
18

Grecchi, Elisabetta. "Multiresolution image analysis : innovative applications for Positron Emission Tomography in clinical practice." Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/multiresolution-image-analysis(b7d71602-2274-426f-a81d-22327efd3402).html.

Abstract:
Positron Emission Tomography is an excellent tool to image physiological processes in vivo, and it has great potential when it comes to disease staging for targeted therapies. However, the potential of PET imaging is somewhat limited by its low spatial resolution, with a resulting significant partial volume effect (PVE) that degrades the accuracy of quantification of the physiological process under scrutiny. In this context, the use of multimodality imaging is very convenient for resolving this limitation. Using novel techniques based on a multiresolution approach, it is possible to recover PET resolution by a synergistic coupling of the PET images with the anatomical counterpart, either CT or MRI. The multiresolution analysis is performed through a wavelet decomposition of both the functional and anatomical images, an approach that has been used in the past for similar purposes. The aim of this thesis is to present novel multiresolution partial volume correction (PVC) techniques that target two different clinical applications. The first part of the project aims to correct for PVE in order to improve the clinical assessment of [18F]Fluoride PET/CT imaging in the presence of bone metastases from prostate and breast cancer. In the second part of the project we develop a different multiresolution PVC approach aiming to improve the quantification of [11C]PIB PET/MR brain myelin imaging in Multiple Sclerosis (MS) patients. The algorithms were validated using either phantom data or clinical images of human controls. The main result of this work is that application of the PVC methodology produced a very significant gain in image resolution without any detectable increase in image noise. Lesion sharpness and detectability improved as well, with a resulting increase in quantification accuracy. The algorithms developed and presented in this thesis proved to be straightforward tools for improving PET quantification in routine clinical practice.
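A toy version of the multiresolution coupling described above, borrowing fine-scale structure from the anatomical image to sharpen the functional one, can be sketched with a 2-d wavelet decomposition. Everything here (PyWavelets, the 'db2' wavelet, a plain blend of detail subbands) is an illustrative assumption; the thesis's PVC algorithms are more elaborate than a direct subband blend.

import numpy as np
import pywt

def wavelet_sharpen(pet, anat, wavelet="db2", levels=3, gain=0.5):
    # Blend anatomy-derived detail subbands into the PET decomposition;
    # assumes anat has been registered and intensity-normalised to pet.
    cp = pywt.wavedec2(pet, wavelet, level=levels)
    ca = pywt.wavedec2(anat, wavelet, level=levels)
    fused = [cp[0]]  # keep the PET approximation (the quantitative content)
    for dp, da in zip(cp[1:], ca[1:]):
        fused.append(tuple((1 - gain) * p + gain * a for p, a in zip(dp, da)))
    return pywt.waverec2(fused, wavelet)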
19

Stevens, John Davenport. "Detection of short transients in colored noise by multiresolution analysis." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA378873.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2000.
Thesis advisor(s): Cristi, Roberto ; Fargues, Monique P. "March 2000." Includes bibliographical references (p. 171-172). Also available online.
20

Kui, Zhiqing. "On the Construction of Multiresolution Analysis Compatible with General Subdivisions." Thesis, Ecole centrale de Marseille, 2018. http://www.theses.fr/2018ECDM0002/document.

Abstract:
Subdivision schemes are widely used for rapid curve or surface generation. Recent developments have produced various schemes, in particular non-linear, non-interpolatory or non-uniform ones. To be used in compression, analysis or control of data, subdivision schemes should be incorporated into a multiresolution analysis that, mimicking wavelet analyses, provides a multi-scale decomposition of a signal, a curve, or a surface. The ingredients needed to define a multiresolution analysis associated with a subdivision scheme are a decimation scheme and detail operators. Their construction is straightforward when the multiresolution scheme is interpolatory. This thesis is devoted to the construction of decimation schemes and detail operators compatible with general subdivision schemes. We start with a generic construction in the uniform (but not interpolatory) case and then generalize to non-uniform and non-linear situations. Applying these results, we build multiresolution analyses that are compatible with many recently developed schemes. An analysis of the performance of the constructed multiresolution analyses is carried out, and we present numerical applications in image compression.
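For the interpolatory case that the abstract calls straightforward, the decimation and detail operators have a simple form: keep the even samples, and store each odd sample's deviation from the subdivision prediction. The sketch below uses the classical 4-point Dubuc-Deslauriers predictor (standard weights) as a stand-in for a general scheme; periodic boundaries and all names are assumptions.

import numpy as np

def predict_odd(even):
    # 4-point interpolatory subdivision prediction of the odd samples (periodic).
    e = np.asarray(even, dtype=float)
    return (-np.roll(e, 1) + 9 * e + 9 * np.roll(e, -1) - np.roll(e, -2)) / 16.0

def analyze(f):
    # One multiresolution level: decimation (even samples) + detail operator.
    coarse, odd = f[0::2], f[1::2]
    return coarse, odd - predict_odd(coarse)

def synthesize(coarse, detail):
    f = np.empty(2 * coarse.size)
    f[0::2] = coarse
    f[1::2] = predict_odd(coarse) + detail
    return f

f = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
c, d = analyze(f)
assert np.allclose(f, synthesize(c, d))  # the round trip is exact by construction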
21

Lee, Sang-Mook. "Wavelet-Based Multiresolution Surface Approximation from Height Fields." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/26203.

Abstract:
A height field is a set of height distance values sampled at a finite set of sample points in a two-dimensional parameter domain. A height field usually contains a lot of redundant information, much of which can be removed without a substantial degradation of its quality. A common approach to reducing the size of a height field representation is to use a piecewise polygonal surface approximation. This consists of a mesh of polygons that approximates the surfaces of the original data at a desired level of accuracy. Polygonal surface approximation of height fields has numerous applications in the fields of computer graphics and computer vision. Triangular mesh approximations are a popular means of representing three-dimensional surfaces, and multiresolution analysis (MRA) is often used to obtain compact representations of dense input data, as well as to allow surface approximations at varying spatial resolution. Multiresolution approaches, particularly those moving from coarse to fine resolutions, can often improve the computational efficiency of mesh generation as well as can provide easy control of level of details for approximations. This dissertation concerns the use of wavelet-based MRA methods to produce a triangular-mesh surface approximation from a single height field dataset. The goal of this study is to obtain a fast surface approximation for a set of height data, using a small number of approximating elements to satisfy a given error criterion. Typically, surface approximation techniques attempt to balance error of fit, number of approximating elements, and speed of computation. A novel aspect of this approach is the direct evaluation of wavelet coefficients to assess surface shape characteristics within each triangular element at a given scale. Our approach hierarchically subdivides and refines triangles as the resolution level increases.
Ph. D.
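The coarse-to-fine criterion described above (evaluate wavelet coefficients over each element and subdivide where they indicate shape variation) can be imitated on a square block of a height field with one level of 2-d Haar details; quadtree-style block splitting stands in for the thesis's triangle hierarchy, and the tolerance is an assumed parameter.

import numpy as np

def haar_detail_energy(block):
    # Energy of the one-level 2-d Haar detail subbands of a 2^k x 2^k block.
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return float(np.sum(lh**2 + hl**2 + hh**2))

def needs_subdivision(block, tol=1e-3):
    # Subdivide where detail energy per sample exceeds the tolerance.
    return haar_detail_energy(block) > tol * block.size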
22

Volontè, Elena. "Subdivision Schemes for Curve Design and Image Analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199209.

Abstract:
Subdivision schemes are able to produce functions that are smooth up to pixel accuracy in a few steps of an iterative process. They are therefore a powerful tool for displaying functions in computer graphics and signal analysis. This thesis focuses on two of these applications: the study of the regularity of the limit curve, and the analysis of anisotropic phenomena in image processing. Since subdivision is an iterative process, the first question about a scheme is whether it converges, and much of the research on subdivision schemes concerns the smoothness of the limit curve. Linear schemes, in which the new points are a linear combination of the points from the previous iteration, are well investigated in this sense. In non-linear schemes, by contrast, the new points depend in a non-linear way on the points from the previous step. Schemes of this type are not widely exploited, but they are an interesting tool for guaranteeing certain geometric properties of the limit curve. For non-linear schemes, only ad hoc proofs or numerical evidence of regularity had been provided. The first paper to address this gap is by Dyn and Hormann, who give sufficient conditions for the convergence and tangent continuity of an interpolatory scheme. The peculiarity of their approach is to consider geometric quantities such as tangents and curvatures: summability of the sequences of maximum edge lengths and angles ensures that the limit curve is tangent continuous. The aim of this work is to find a geometric condition for the curvature continuity of the limit curve. The curvature is the reciprocal of the radius of the osculating circle, which is the limit of the circles passing through three points q, p, r as q and r converge to p. We therefore consider the curvatures of the circles passing through three consecutive points and the differences between neighbouring discrete curvatures. The core of the work is to prove that summability of this sequence is sufficient to generate a curvature continuous limit curve. The proof is supported by several numerical examples.
Subdivision schemes can also be used in image analysis. Through their relation with multiresolution analysis, interpolatory subdivision schemes allow a signal to be analysed. One crucial issue is the analysis of anisotropic signals, with the aim of catching directional edges. When dealing with anisotropic phenomena, wavelets do not provide optimal representations. For this reason directional transforms were introduced. Among them, the shearlet transform is interesting because it fits the general framework of multiple subdivision schemes, where in each iteration an expanding matrix and a subdivision scheme are chosen from a finite set. The expanding matrices are responsible for the refinement, so we require that they are jointly expanding; in order to define a directional transform, it is also crucial that they satisfy the slope resolution property. In the shearlet case, the expanding matrices are the product of a diagonal matrix and a pseudo-rotation matrix called a shear. The drawback is that the diagonal matrix has a large determinant, which leads to quite substantial complexity in implementations. In this work I overcome this problem by proposing, for any dimension, a family of matrices, each the product of an anisotropic diagonal matrix and a shear matrix, whose determinant is considerably lower than in the shearlet case. We prove that the elements of this family satisfy all the prescribed properties. We are then able to define a directional transform, and we test its performance on some images. For dimension d=3, we also study the possibility of reducing the determinant even further by relaxing the structure of the matrices.
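The discrete curvature behind the stated condition has a closed form: the circle through three points p, q, r has curvature

\[ \kappa(p, q, r) = \frac{4\,A(p, q, r)}{\|p - q\|\, \|q - r\|\, \|r - p\|}, \]

where A is the area of the triangle pqr. The summability condition then asks, roughly (this is a hedged paraphrase, not the thesis's exact statement), that the maximal difference of curvatures of neighbouring point triples decay fast enough to be summable over the subdivision levels.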
23

Bowman, Kevin W. "Application of wavelets to adaptive optics and multiresolution wiener filtering." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/14920.

24

Ganesan, Rajesh. "Process monitoring and feedback control using multiresolution analysis and machine learning." [Tampa, Fla] : University of South Florida, 2005. http://purl.fcla.edu/usf/dc/et/SFE0001248.

25

Sazish, Abdul Naser. "Efficient architectures and power modelling of multiresolution analysis algorithms on FPGA." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6290.

Abstract:
In the past two decades, there has been a huge amount of interest in Multiresolution Analysis Algorithms (MAAs) and their applications. Some of their applications, such as medical imaging, are computationally intensive, power hungry and require large amounts of memory, which creates a high demand for efficient algorithm implementation, low-power architectures and acceleration. Recently, MAAs such as the Finite Ridgelet Transform (FRIT) and the Haar Wavelet Transform (HWT) have become very popular, and they are suitable for a number of image processing applications such as detection of line singularities and contiguous edges, edge detection (useful for compression and feature detection), and medical image denoising and segmentation. Efficient hardware implementation and acceleration of these algorithms, particularly when addressing large problems, are becoming very challenging and consume a lot of power, which leads to a number of issues including mobility and reliability concerns. To overcome the computation problems, Field Programmable Gate Arrays (FPGAs) are the technology of choice for accelerating computationally intensive applications due to their high performance. Addressing the power issue requires optimisation and awareness at all levels of abstraction in the design flow. The most important achievements of the work presented in this thesis are summarised here. Two factorisation methodologies for the HWT, called HWT Factorisation Method 1 (HWTFM1) and HWT Factorisation Method 2 (HWTFM2), have been explored to increase the number of zeros and reduce hardware resources. In addition, two novel efficient and optimised architectures for the proposed methodologies, based on Distributed Arithmetic (DA) principles, have been proposed. The evaluation of the architectural results has shown that the proposed architectures reduce the arithmetic calculations (additions/subtractions) by 33% and 25% respectively compared to a direct implementation of the HWT, and outperform existing results. The proposed HWTFM2 is implemented on advanced low-power FPGA devices using the Handel-C language. The FPGA implementation results have outperformed other existing results in terms of area and maximum frequency. In addition, a novel efficient architecture for the Finite Radon Transform (FRAT) has also been proposed. The proposed architecture is integrated with the developed HWT architecture to build an optimised architecture for the FRIT. Strategies such as parallelism and pipelining have been deployed at the architectural level for efficient implementation on different FPGA devices. The proposed FRIT architecture's performance has been evaluated, and the results outperform other existing architectures. Both the FRAT and FRIT architectures have been implemented on FPGAs using the Handel-C language. The evaluation of both architectures has shown that the obtained results outperform existing results by almost 10% in terms of frequency and area. The proposed architectures are also applied to image data (256 × 256) and their Peak Signal to Noise Ratio (PSNR) is evaluated for quality purposes. Two architectures for cyclic convolution based on systolic arrays, using parallelism and pipelining, which can be used as the main building block for the proposed FRIT architecture, have been proposed. The first proposed architecture is a linear systolic array with a pipelined process, and the second is a systolic array with a parallel process.
The second architecture reduces the number of registers by 42% compared to the first, and both architectures outperform other existing results. The proposed pipelined architecture has been implemented on different FPGA devices with vector sizes (N) of 4, 8, 16 and 32 and word-length W=8. The implementation results have shown a significant improvement and outperform other existing results. Ultimately, an in-depth evaluation of a high-level power macromodelling technique for design space exploration and characterisation of custom IP cores for FPGAs, called the functional level power modelling approach, is presented. The mathematical techniques that form the basis of the proposed power modelling have been validated on a range of custom IP cores. The proposed power modelling is scalable, platform independent and compares favourably with existing approaches. A hybrid, top-down design flow paradigm integrating functional level power modelling with commercially available design tools for systematic optimisation of IP cores has also been developed. The in-depth evaluation of this tool enables us to observe the behaviour of different custom IP cores in terms of power consumption and accuracy, using different design methodologies and arithmetic techniques on various FPGA platforms. Based on the results achieved, the proposed model's accuracy is almost 99% for all IP cores' Dynamic Power (DP) components.
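For orientation, one analysis level of the Haar wavelet transform that these architectures accelerate reduces to paired sums and differences; the DA factorisations in the thesis reorganise exactly this arithmetic, and the 1/sqrt(2) scaling below is one common convention, not necessarily the thesis's.

import numpy as np

def haar_level(x):
    # One Haar analysis level: low-pass (sums) and high-pass (differences).
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

a, d = haar_level(np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]))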
26

Calway, Andrew David. "The multiresolution Fourier transform : a general purpose tool for image analysis." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/49949/.

Abstract:
The extraction of meaningful features from an image forms an important area of image analysis. It enables the task of understanding visual information to be implemented in a coherent and well defined manner. However, although many of the traditional approaches to feature extraction have proved to be successful in specific areas, recent work has suggested that they do not provide sufficient generality when dealing with complex analysis problems such as those presented by natural images. This thesis considers the problem of deriving an image description which could form the basis of a more general approach to feature extraction. It is argued that an essential property of such a description is that it should have locality in both the spatial domain and in some classification space over a range of scales. Using the 2-d Fourier domain as a classification space, a number of image transforms that might provide the required description are investigated. These include combined representations such as a 2-d version of the short-time Fourier transform (STFT), and multiscale or pyramid representations such as the wavelet transform. However, it is shown that these are limited in their ability to provide sufficient locality in both domains and as such do not fulfill the requirement for generality. To overcome this limitation, an alternative approach is proposed in the form of the multiresolution Fourier transform (MFT). This has a hierarchical structure in which the outermost levels are the image and its discrete Fourier transform (DFT), whilst the intermediate levels are combined representations in space and spatial frequency. These levels are defined to be optimal in terms of locality and their resolution is such that within the transform as a whole there is a uniform variation in resolution between the spatial domain and the spatial frequency domain. This ensures that locality is provided in both domains over a range of scales. The MFT is also invertible and amenable to efficient computation via familiar signal processing techniques. Examples and experiments illustrating its properties are presented. The problem of extracting local image features such as lines and edges is then considered. A multiresolution image model based on these features is defined and it is shown that the MFT provides an effective tool for estimating its parameters. The model is also suitable for representing curves and a curve extraction algorithm is described. The results presented for synthetic and natural images compare favourably with existing methods. Furthermore, when coupled with the previous work in this area, they demonstrate that the MFT has the potential to provide a basis for the solution of general image analysis problems.
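Schematically (in generic notation, and hedged: the thesis's definition is discrete and considerably more careful), each intermediate MFT level behaves like a windowed Fourier transform whose window width \( \sigma \) sets the balance between the two domains,

\[ \hat{x}(b, \omega; \sigma) = \int x(t)\, w_{\sigma}(t - b)\, e^{-i\omega t}\, dt, \]

with the levels stepping from small \( \sigma \) (image-like, fine spatial resolution) to large \( \sigma \) (DFT-like, fine frequency resolution).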
27

Pearson, Edward R. S. "The multiresolution Fourier transform and its application to polyphonic audio analysis." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/35769/.

Abstract:
Many people listen to, or at least hear, some form of music almost every day of their lives. However, only some of the processes involved in creating the sensations and emotions evoked by the music are understood in any detail. The problem of unravelling these processes has been much less thoroughly investigated than the comparable topics of speech and image recognition; this has almost certainly been caused by the existence of a greater number of applications awaiting this knowledge. Nevertheless, the area of music perception has attracted some attention over the last few decades and there is an increasing interest in the subject largely arising from the availability of suitably powerful technology. It is becoming feasible to use such technology to construct artificial hearing devices which attempt to reproduce the functionality of the human auditory system. The construction of such devices is both a powerful method of verifying operational theories of the human auditory system and may ultimately provide a means of analysing music in more detail than man. In addition to the analytical benefits, techniques developed in this manner are readily applicable to the creative aspects of music, such as the composition of new music and musical sounds.
28

Adjed, Faouzi. "Skin cancer segmentation and detection using total variation and multiresolution analysis." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLE042/document.

Abstract:
The vast majority of skin cancer deaths are due to malignant melanoma, which is considered one of the most dangerous cancers. In its early stages, malignant melanoma is completely curable with a simple biopsy, so early detection is the best way to improve skin cancer prognosis. Medical imaging such as dermoscopy and standard camera images are the most suitable tools available to diagnose melanoma at early stages. To help radiologists in the diagnosis of melanoma cases, there is a strong need to develop computer aided diagnosis (CAD) systems. Accurate segmentation and classification of pigmented skin lesions still remains a challenging task due to the various colors and structures that develop randomly inside the lesions. The current work focuses on two main tasks. In the first task, a new approach to the segmentation of skin lesions based on the Chan and Vese model is developed. The model is adapted to segment the variations of the pigment inside the lesion, and not only the main border. A subjective evaluation, applied on a database of standard camera images, obtained very encouraging results, with a 97.62% true detection rate. In the second main task, two feature extraction methods were developed for the analysis of standard camera and dermoscopy images. The method developed for standard camera skin cancer images is based on border irregularities, introducing two new concepts, valleys and crevasses, as the first and second levels of border irregularity. The method has been implemented on DermIs and DermQuest, two databases of standard camera images, and achieved an accuracy of 86.54% with a sensitivity of 80% and a specificity of 95.45%. The second method consists of a fusion of structural and textural features: the structural features were extracted from wavelet and curvelet coefficients, while the textural features were obtained from the local binary pattern operator. The method has been implemented on the PH2 database of dermoscopy images with 1000 rounds of random sampling cross-validation. The obtained results achieved an accuracy, sensitivity and specificity of 86.07%, 78.93% and 93.25%. Compared to existing methods, the methods proposed in this work show very good performance.
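The Chan and Vese model that the segmentation adapts minimizes the standard two-phase energy, shown here in its usual form (the thesis's adaptation to intra-lesion pigment variations modifies this baseline):

\[ F(c_1, c_2, C) = \mu\, \mathrm{Length}(C) + \nu\, \mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |I - c_1|^2\, dx + \lambda_2 \int_{\mathrm{outside}(C)} |I - c_2|^2\, dx, \]

where I is the image and c_1, c_2 are the mean intensities inside and outside the contour C.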
29

Souza, Eniuce Menezes de [UNESP]. "Análise de wavelets para detecção e correção do multicaminho no posicionamento relativo GNSS estático e cinemático." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/86803.

Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
GNSS multipath is a phenomenon that occurs when the signal from the Global Navigation Satellite System (GNSS) reflects off objects surrounding the survey environment and reaches the receiver antenna through multiple paths. Usually, the GNSS receiver also collects the reflected signal, which is delayed in relation to the direct one. Consequently, the pseudorange (code) and carrier phase measurements are tracked for a composite signal, and not for the direct signal, causing a multipath error. This effect is a significant error source that still remains a challenge for research, especially for static and kinematic relative positioning in high-precision applications. Differently from other error sources, multipath is not attenuated when the double differences (DD) are formed on a short baseline, because this error is highly dependent upon the surrounding environment. On the contrary, multipath errors can even increase in the double differencing process. In this research, a feasible and economical methodology is proposed, capable of detecting and correcting the multipath effect on L1 and/or L2 carrier phase and pseudorange observations for static and kinematic applications, whether post-processed or in real time. This approach is based on Multiresolution Analysis (MRA) using the Wavelet Transform (WT), which is applied to decompose the time series of DD residuals from the adjustment into low- and high-frequency components... (Complete abstract click electronic access below)
APA, Harvard, Vancouver, ISO, and other styles
30

Sen, Aykut. "Optimizable Multiresolution Quadratic Variation Filter For High-frequency Financial Data." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610433/index.pdf.

Full text
Abstract:
As tick-by-tick data on financial transactions become easier to obtain, processing that much information in an efficient and correct way to estimate the integrated volatility gains importance. However, empirical findings show that this much data may become unusable due to microstructure effects. The most common way to overcome this problem is to sample the data at equidistant intervals on calendar, tick or business time scales. Comparative research on the subject generally asserts that the most successful scheme is calendar time sampling, with the data sampled every 5 to 20 minutes. But this generally means throwing out more than 99 percent of the data, so a more efficient sampling method is clearly needed. Although there is some research on alternative techniques, none of them has been proven to be the best. Our study is concerned with a sampling scheme that uses the information in different frequency scales and is less prone to microstructure effects. We introduce a new concept of business intensity, whose sampler is named the Optimizable Multiresolution Quadratic Variation Filter. Our filter uses multiresolution analysis techniques to decompose the data into different scales, and quadratic variation to build up the new business time scale. Our empirical findings show that our filter is clearly less prone to microstructure effects than any other common sampling method. We use classified tick-by-tick data for the Turkish Interbank FX market. The market is closed for nearly 14 hours a day, so big jumps occur between closing and opening prices. We also propose a new smoothing algorithm to reduce the effects of those jumps.
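For reference, the calendar-time baseline that the abstract argues against can be sketched in a few lines (pandas assumed; tick prices indexed by timestamp; the 5-minute grid is the common choice mentioned above):

    import numpy as np
    import pandas as pd

    def realized_variance(prices: pd.Series, rule: str = "5min") -> float:
        # prices: tick-by-tick series with a DatetimeIndex.
        sampled = prices.resample(rule).last().dropna()   # equidistant grid
        log_returns = np.log(sampled).diff().dropna()
        return float((log_returns ** 2).sum())            # realized variance

The proposed filter replaces this fixed calendar grid with a business time scale built from the quadratic variation of the wavelet-decomposed data.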
APA, Harvard, Vancouver, ISO, and other styles
31

SILVA, AUCYONE A. da. "An integrated approach for plant monitoring and diagnosis using multiresolution wavelet analysis." Repositório Institucional do IPEN, 1997. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11643.

Full text
Abstract:
Thesis (Doctorate)
IPEN/T
The University of Tennessee
APA, Harvard, Vancouver, ISO, and other styles
32

Fujii, Masafumi. "A time-domain Haar-wavelet-based multiresolution technique for electromagnetic field analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/NQ40454.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Al, Zu'bi Shadi Mahmoud. "3D multiresolution statistical approaches for accelerated medical image and volume segmentation." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5300.

Full text
Abstract:
Medical volume segmentation has attracted many researchers, and many techniques have been implemented for medical imaging, including segmentation and other imaging processes. This research focuses on the implementation of a segmentation system which uses several techniques, together or on their own, to segment medical volumes; the system takes a stack of 2D slices or a full 3D volume acquired from medical scanners as input. Two main approaches have been implemented in this research for segmenting medical volumes: multiresolution analysis and statistical modelling. Multiresolution analysis has mainly been employed for extracting features. Higher dimensions of discontinuity (line or curve singularities) have been extracted from medical images using modified multiresolution transforms such as the ridgelet and curvelet transforms. The second approach implemented in this thesis is the use of statistical modelling in medical image segmentation; Hidden Markov models have been enhanced here to segment medical slices automatically, accurately, reliably and with lossless results. The problem with Markov models, however, is the long computational time. This has been addressed by feature reduction and dimensionality reduction techniques, also implemented in this thesis, to accelerate the slowest block in the proposed system; these include Principal Component Analysis, Gaussian pyramids and other methods. The feature reduction techniques have been employed efficiently with the 3D volume segmentation techniques, such as 3D wavelets and 3D Hidden Markov models. The system has been tested and validated using several procedures, starting with a comparison against predefined results, passing through specialists' validations, and ending with a survey filled in by the end users evaluating the techniques and the results. The conclusion is that Markovian model segmentation outperformed all other techniques in most patients' cases. The curvelet transform has also produced promising segmentation results; the end users rate it above Markovian models because of the long run time required by Hidden Markov models.
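A minimal sketch of the feature-reduction step described above, with PCA standing in for the several reduction methods mentioned; dimensions are illustrative:

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_features(feature_matrix: np.ndarray, n_components: int = 16):
        # feature_matrix: (n_samples, n_features), e.g. ridgelet/curvelet responses
        # per voxel, compressed before the (slow) Hidden Markov model stage.
        pca = PCA(n_components=n_components)
        reduced = pca.fit_transform(feature_matrix)
        # The fraction of variance kept indicates how aggressive the reduction is.
        return reduced, float(pca.explained_variance_ratio_.sum())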
APA, Harvard, Vancouver, ISO, and other styles
34

Kocak, Umut, Palmerius Karljohan Lundin, and Matthew Cooper. "An Error Analysis Model for Adaptive Deformation Simulation." Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79904.

Full text
Abstract:
With the widespread use of deformation simulations in medical applications, the realism of the force feedback has become an important issue. In order to reach real-time performance with sufficient realism, the approach of adaptivity, i.e. solving different parts of the system with different resolutions and refresh rates, has been commonly deployed. The change in accuracy resulting from the use of adaptivity, however, has been paid scant attention in the deformation simulation field. Presentation of error metrics is rare, while more focus is given to real-time stability. We propose an abstract pipeline to perform error analysis for different types of deformation techniques, one which can take different simulation parameters into account. A case study is also performed using the pipeline, and the various uses of the error estimation are discussed.
APA, Harvard, Vancouver, ISO, and other styles
35

Jain, Sachin. "Multiresolution strategies for the numerical solution of optimal control problems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22656.

Full text
Abstract:
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Tsiotras, Panagiotis; Committee Member: Calise, Anthony J.; Committee Member: Egerstedt, Magnus; Committee Member: Prasad, J. V. R.; Committee Member: Russell, Ryan P.; Committee Member: Zhou, Hao-Min.
APA, Harvard, Vancouver, ISO, and other styles
36

Lumbreras, Ruiz Felipe. "Segmentation, classification and modelization of textures by means of multiresolution decomposition techniques." Doctoral thesis, Universitat Autònoma de Barcelona, 2001. http://hdl.handle.net/10803/3024.

Full text
Abstract:
An interesting problem in computer vision is the analysis of texture images. In this work, we have developed specific methods to solve important aspects of this problem. The first approach involves segmentation of a specific type of texture, namely microscopy images of thin marble sections. These images comprise a pattern of grains whose sizes and shapes help specialists to identify the origin and quality of marble samples. Identifying and analyzing individual grains in these images is a problem of image segmentation. In essence, this involves identifying boundary lines, represented by valleys, which separate flat areas corresponding to grains. Of several methods tested, we found those based on mathematical morphology particularly successful for the segmentation of petrographical images. This involves a pre-filtering step, for which again several approaches have been explored, including multiresolution algorithms based on wavelets.

In the second approach we have also used multiresolution analyses, here to address the problem of classifying texture images. In contrast to the more global approaches found in the literature, we have explored situations where the visual differences between textures are rather subtle. Since we have imposed relatively few restrictions on these analyses, the strategies we have developed are applicable to a wide range of related texture images, such as images of ceramic tiles, microscopic images of effect pigments, etc.

The approach we have used for the classification of texture images involves several technical steps. We have focused our attention on the initial low-level analyses required to identify the general features that define the texture, whereas the final classification of samples has been performed using generic classification methods. To address the early steps of the analysis, we have developed a strategy whereby the features of an image are fitted to one of several pre-defined models of increasing complexity. These models are associated with specific algorithms, parameters and calculations for the analysis of the image, so that choosing the right model avoids calculations that do not provide useful information for the classification.

Finally, in a third approach we aim at a description of textures that supports both classification and synthesis. To reach this goal we adopt a probabilistic model of the texture. This description allows us to classify textures through the direct comparison of probabilistic models, and also to synthesize new, similar images from the fitted model.

In conclusion, we have developed strategies for the segmentation and classification of textures that provide solutions to practical problems, have been evaluated successfully on real cases, and are applicable with minor modifications to a wide range of situations. Future research will explore (i) the possibility of adapting the segmentation to the analysis of images that do not necessarily involve textures, e.g. the localization of subjects and objects in scenes, and (ii) the classification of effect pigment images, a still unsolved reverse-engineering problem of determining the constituents of metallic paints, building on the studies presented here.
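A minimal sketch of the pipeline shape described above, multiresolution features followed by a generic classifier; subband energies and an SVM are illustrative stand-ins for the model-driven feature selection developed in the thesis:

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def subband_energies(img, wavelet="db2", levels=3):
        # One energy value per wavelet detail subband.
        coeffs = pywt.wavedec2(img, wavelet, level=levels)
        return np.array([np.mean(band ** 2)
                         for details in coeffs[1:] for band in details])

    def train_texture_classifier(images, labels):
        X = np.vstack([subband_energies(im) for im in images])
        return SVC(kernel="rbf").fit(X, labels)   # generic final classifier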
APA, Harvard, Vancouver, ISO, and other styles
37

Garcia, Emilien. "Approche non-linéaire du monitoring de forage : un espoir de progrès pour la commande en surface ?" Thesis, Ecole centrale de Marseille, 2019. http://www.theses.fr/2019ECDM0001.

Full text
Abstract:
Drilling is a central activity in modern society, providing energy while needs keep growing. Daily drilling costs are very high, so it is critical to optimize drilling durations, which can be achieved by preventing drilling malfunctions. Controlling the efforts applied on the drill bit is one way to check that optimal drilling conditions are met. These efforts, called Weight On Bit (WOB) and Torque On Bit (TOB), also allow the Mechanical Specific Energy (MSE) to be estimated, i.e. the energy required by the bit to drill one unit volume of rock. The objective is to minimize this energy all along the drilling, so it is important to estimate the MSE in real time in order to detect a malfunction when the MSE becomes too high. However, the bit is usually thousands of meters away from the drilling floor, which makes it difficult to measure the efforts endured by the bit directly at the bit, and even more difficult to monitor them in real time at the surface. To control them, it is necessary to estimate them by other means, using only the available surface measurements. Excellence Logging has already developed a method to estimate WOB and TOB in real time, but the validity of this method is not ensured, in particular when the wellbore trajectory is complex. Throughout this thesis, a nonlinear transfer function has been developed to estimate the downhole efforts transmitted to the bit from the surface efforts provided to the drillstring, taking into account the singularities inherent to the trajectory. Chapter 1 introduces the equations of the friction model used and developed in this thesis, after a bibliographical and critical study of the existing methods to estimate WOB and TOB. This study includes the references method, used by Excellence Logging, as well as the friction models also called "Torque and Drag" (or T&D). Chapter 1 also contains a critical study of the existing reconstruction methods to estimate the wellbore trajectory; a comparison of those methods is carried out to determine which one yields the best estimates of the trajectory derivatives, which are essential to apply the friction model properly. Chapter 2 introduces a mathematically analyzed trajectory smoothing method, which provides better spatial derivative estimates. As the smoothing must account for the singularities of the trajectory, the method must be nonlinear. After an introduction to Multiresolution Analysis, the smoothing method is presented and analyzed; it ensures the simultaneous convergence of a polygon and its divided differences towards a regular function and its derivatives. The smoothing process is then validated on a theoretical curve and applied to a real wellbore trajectory with ideal surface measurements. Chapter 3 is devoted to improving the estimation of the string tension at the surface. This tension is computed from the dead-line tension of the hoisting system, which is affected by friction at the sheaves as the drillstring is raised and lowered. After a bibliographical study of the existing models for performing this conversion correctly, a method for converting the surface tension measurement in real time is presented and applied to a data set for validation. In Chapter 4, the methods developed in Chapters 2 and 3 are confronted with several applications on real well data sets: the efforts at the bit are estimated over selected drilling phases and compared with the estimates provided by the references method, in the absence of direct downhole measurements. Finally, Chapter 5 concludes the thesis, reviewing the developments and advances achieved, as well as the limits of the proposed transfer function and avenues for improvement.
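For context, the standard minimum-curvature method, a classical trajectory reconstruction scheme of the kind compared in Chapter 1, can be sketched as follows (survey stations given as measured depth, inclination and azimuth, angles in radians; this is the textbook formula, not necessarily the variant retained in the thesis):

    import numpy as np

    def minimum_curvature(md, inc, azi):
        # Returns (North, East, TVD) coordinates, one row per survey station.
        pts = [np.zeros(3)]
        for k in range(1, len(md)):
            dmd = md[k] - md[k - 1]
            i1, i2, a1, a2 = inc[k - 1], inc[k], azi[k - 1], azi[k]
            cos_dl = np.cos(i2 - i1) - np.sin(i1) * np.sin(i2) * (1 - np.cos(a2 - a1))
            dl = np.arccos(np.clip(cos_dl, -1.0, 1.0))          # dogleg angle
            rf = 1.0 if dl < 1e-12 else (2.0 / dl) * np.tan(dl / 2.0)
            dn = 0.5 * dmd * (np.sin(i1) * np.cos(a1) + np.sin(i2) * np.cos(a2)) * rf
            de = 0.5 * dmd * (np.sin(i1) * np.sin(a1) + np.sin(i2) * np.sin(a2)) * rf
            dv = 0.5 * dmd * (np.cos(i1) + np.cos(i2)) * rf
            pts.append(pts[-1] + np.array([dn, de, dv]))
        return np.vstack(pts)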
APA, Harvard, Vancouver, ISO, and other styles
38

Long, Ge. "Wavelets, frequency volume rendering and multiresolution analysis yield fast displays of acquired data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq21047.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chai, Bing-Bing. "Significance-linked connected component analysis and morpho-subband decomposition for multiresolution image compression /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pérez, Patrick. "Champs markoviens et analyse multiresolution de l'image : application a l'analyse du mouvement." Rennes 1, 1993. http://www.theses.fr/1993REN10086.

Full text
Abstract:
The introduction of statistical modelling by Markov random fields has recently enabled significant advances in a number of classical problems of image analysis. These models are generally associated with global optimization algorithms based on relaxation, which remain computationally expensive in some applications. Multigrid techniques, otherwise classical in numerical analysis, can lead to substantial gains on this point. For the time being, however, there is no real theoretical support for combining multigrid strategies with Markovian modelling in a simple and efficient way. This is the problem addressed here. Since a number of questions raised by the construction of multiresolution Markov models in image analysis (what is the nature of the field obtained by decimation or stochastic transformation of a Markov field? what are the statistical properties of a global multiresolution model at a given resolution level?) can be expressed in terms of the restriction of a Markov field to a subset of its initial support, we first study this theoretical problem in a very general setting: that of discrete or continuous Markov fields indexed by the sites of finite simple undirected graphs. Two main results are established: the first gives sufficient conditions for the restricted field to be Markovian, and the second establishes a hypothesis under which these conditions become necessary. Various consequences and applications of these results are then presented. In the second part, we propose a multiscale approach to Markovian modelling based on the exploration of nested constrained subsets of configurations. Mathematically consistent and easy to implement, it has been validated on three examples of motion analysis in image sequences: motion detection, segmentation into regions of homogeneous motion, and motion measurement. This highlighted the contributions of the approach: faster convergence and improved estimates compared with classical multiresolution Markovian techniques.
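A minimal sketch of the relaxation-based optimization whose cost motivates the multigrid strategies studied here: iterated conditional modes (ICM) on a binary Potts model; the data term and neighbourhood are illustrative:

    import numpy as np

    def icm_potts(obs, beta=1.5, sweeps=10):
        # obs: noisy image with labels in {0, 1}; 4-neighbour smoothness prior.
        labels = obs.copy()
        H, W = labels.shape
        for _ in range(sweeps):
            for y in range(H):
                for x in range(W):
                    costs = []
                    for lab in (0, 1):
                        data = (obs[y, x] - lab) ** 2
                        neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                        disagree = sum(labels[yy, xx] != lab for yy, xx in neigh
                                       if 0 <= yy < H and 0 <= xx < W)
                        costs.append(data + beta * disagree)
                    labels[y, x] = int(np.argmin(costs))
        return labels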
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Zhan. "Developing a Unified Perspective on the Role of Multiresolution in Machine Intelligence Tasks." Case Western Reserve University School of Graduate Studies / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=case1102085255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

LI, Lingqi. "Studies on Advanced Signal Processing in the Field of Nondestructive Testing Based on Multiresolution Analysis." Kyoto University, 2004. http://hdl.handle.net/2433/148304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Souza, Eniuce Menezes de. "Análise de wavelets para detecção e correção do multicaminho no posicionamento relativo GNSS estático e cinemático /." Presidente Prudente, 2008. http://hdl.handle.net/11449/86803.

Full text
Abstract:
GNSS multipath is a phenomenon that occurs when the signal from the Global Navigation Satellite System (GNSS) reflects on objects surrounding the survey environment and reaches the receiver antenna through multiple paths. Usually, the GNSS receiver also collects the reflected signal, which is delayed in relation to the direct one. Consequently, the pseudorange (code) and carrier phase measurements are tracked for a composite signal, and not for the direct signal, causing a multipath error. This effect is a significant error source that still remains a challenge for research, especially for static and kinematic relative positioning in high-precision applications. Differently from other error sources, multipath is not attenuated when the double differences (DD) are formed in a short baseline, because this error is highly dependent upon the surrounding environment. On the contrary, multipath errors can even increase in the double differencing process. In this research, a feasible and economical methodology was proposed, capable of detecting and correcting the multipath effect in the L1 and/or L2 carrier phase and pseudorange measurements for static and kinematic applications, whether post-processed or in real time. This approach is based on Multiresolution Analysis (MRA) using the Wavelet Transform (WT). The WT is applied to decompose the time series of DD residuals from the adjustment into low- and high-frequency components... (Complete abstract: click electronic access below)
Advisor: João Francisco Galera Monico
Co-advisor: Aylton Pagamisse
Committee: Hélio Magalhães de Oliveira
Committee: Silvio Rogério Correia Freitas
Committee: Messias Meneguette Junior
Committee: Paulo de Oliveira Camargo
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
44

Busch, Andrew W. "Wavelet transform for texture analysis with application to document analysis." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15908/1/Andrew_Busch_Thesis.pdf.

Full text
Abstract:
Texture analysis is an important problem in machine vision, with applications in many fields including medical imaging, remote sensing (SAR), automated flaw detection in various products, and document analysis, to name but a few. Over the last four decades many techniques for the analysis of textured images have been proposed in the literature for the purposes of classification, segmentation, synthesis and compression. Such approaches include analysing the properties of individual texture elements, using statistical features obtained from the grey-level values of the image itself, random field models, and multichannel filtering. The wavelet transform, a unified framework for the multiresolution decomposition of signals, falls into this final category, and allows a texture to be examined at a number of resolutions whilst maintaining spatial resolution. This thesis explores the use of the wavelet transform for the specific task of texture classification and proposes a number of improvements to existing techniques, both in the area of feature extraction and in classifier design. By applying a nonlinear transform to the wavelet coefficients, a better characterisation can be obtained for many natural textures, leading to increased classification performance when using first- and second-order statistics of these coefficients as features. In the area of classifier design, a combination of an optimal discriminant function and a non-parametric Gaussian mixture model classifier is shown experimentally to outperform other classifier configurations. By modelling the relationships between neighbouring bands of the wavelet transform, more information regarding a texture can be obtained. Using such a representation, an efficient algorithm for the searching and retrieval of textured images from a database is proposed, as well as a novel set of features for texture classification. These features are shown experimentally to outperform features proposed in the literature, as well as to provide increased robustness to small changes in scale. Determining the script and language of a printed document is an important task in the field of document processing. In the final part of this thesis, the use of texture analysis techniques to accomplish these tasks is investigated. Using maximum a posteriori (MAP) adaptation, prior information regarding the nature of script images can be used to increase the accuracy of these methods. Novel techniques for estimating the skew of such documents, normalising text blocks prior to the extraction of texture features, and accurately classifying multiple fonts are also presented.
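A minimal sketch of the first feature idea described above: apply a nonlinear map to the wavelet detail coefficients, then keep first- and second-order statistics per subband (the log-magnitude nonlinearity is an assumption, not necessarily the transform used in the thesis):

    import numpy as np
    import pywt

    def texture_features(img, wavelet="db4", levels=3):
        coeffs = pywt.wavedec2(img, wavelet, level=levels)
        feats = []
        for details in coeffs[1:]:            # (cH, cV, cD) per level
            for band in details:
                nl = np.log1p(np.abs(band))   # assumed nonlinear transform
                feats.extend([nl.mean(), nl.std()])   # 1st/2nd order stats
        return np.asarray(feats)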
APA, Harvard, Vancouver, ISO, and other styles
45

Busch, Andrew W. "Wavelet Transform For Texture Analysis With Application To Document Analysis." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15908/.

Full text
Abstract:
Texture analysis is an important problem in machine vision, with applications in many fields including medical imaging, remote sensing (SAR), automated flaw detection in various products, and document analysis, to name but a few. Over the last four decades many techniques for the analysis of textured images have been proposed in the literature for the purposes of classification, segmentation, synthesis and compression. Such approaches include analysing the properties of individual texture elements, using statistical features obtained from the grey-level values of the image itself, random field models, and multichannel filtering. The wavelet transform, a unified framework for the multiresolution decomposition of signals, falls into this final category, and allows a texture to be examined at a number of resolutions whilst maintaining spatial resolution. This thesis explores the use of the wavelet transform for the specific task of texture classification and proposes a number of improvements to existing techniques, both in the area of feature extraction and in classifier design. By applying a nonlinear transform to the wavelet coefficients, a better characterisation can be obtained for many natural textures, leading to increased classification performance when using first- and second-order statistics of these coefficients as features. In the area of classifier design, a combination of an optimal discriminant function and a non-parametric Gaussian mixture model classifier is shown experimentally to outperform other classifier configurations. By modelling the relationships between neighbouring bands of the wavelet transform, more information regarding a texture can be obtained. Using such a representation, an efficient algorithm for the searching and retrieval of textured images from a database is proposed, as well as a novel set of features for texture classification. These features are shown experimentally to outperform features proposed in the literature, as well as to provide increased robustness to small changes in scale. Determining the script and language of a printed document is an important task in the field of document processing. In the final part of this thesis, the use of texture analysis techniques to accomplish these tasks is investigated. Using maximum a posteriori (MAP) adaptation, prior information regarding the nature of script images can be used to increase the accuracy of these methods. Novel techniques for estimating the skew of such documents, normalising text blocks prior to the extraction of texture features, and accurately classifying multiple fonts are also presented.
APA, Harvard, Vancouver, ISO, and other styles
46

Uys, Ernst Wilhelm. "Image compression using the one-dimensional discrete pulse transform." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6466.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: The nonlinear LULU smoothers excel at removing impulsive noise from sequences and possess a variety of theoretical properties that make it possible to perform a so-called Discrete Pulse Transform, which is a novel multiresolution analysis technique that decomposes a sequence into resolution levels with a large amount of structure, analogous to a Discrete Wavelet Transform. We explore the use of a one-dimensional Discrete Pulse Transform as the central element in a digital image compressor. We depend crucially on the ability of space-filling scanning orders to map the two-dimensional image data to one dimension, sacrificing as little image structure as possible. Both lossless and lossy image compression are considered, leading to five new image compression schemes that give promising results when compared to state-of-the-art image compressors.
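A minimal sketch of the one-dimensional LULU operators and the Discrete Pulse Transform built from them; this direct sliding-window version clamps windows at the sequence boundaries, a simplification of the theory developed in the thesis:

    import numpy as np

    def lulu_L(x, n):
        # (L_n x)_i: max, over the windows of length n+1 covering i,
        # of the window minimum (removes upward pulses of width <= n).
        x = np.asarray(x, dtype=float)
        out = np.empty_like(x)
        for i in range(len(x)):
            out[i] = max(x[max(0, j):min(len(x), j + n + 1)].min()
                         for j in range(i - n, i + 1))
        return out

    def lulu_U(x, n):
        # Dual operator, by min-max duality.
        return -lulu_L(-np.asarray(x, dtype=float), n)

    def discrete_pulse_transform(x, max_scale):
        # Peel off pulses of width 1, 2, ...; one common ordering of L and U.
        levels, current = [], np.asarray(x, dtype=float)
        for n in range(1, max_scale + 1):
            smoothed = lulu_U(lulu_L(current, n), n)
            levels.append(current - smoothed)   # resolution level n
            current = smoothed
        return levels, current   # sum(levels) + current reconstructs x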
APA, Harvard, Vancouver, ISO, and other styles
47

Anton, Wirén. "The Discrete Wavelet Transform." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55063.

Full text
Abstract:
In this thesis we will explore the theory behind wavelets. The main focus is on the discrete wavelet transform, although to reach this goal we will also introduce the discrete Fourier transform, as it allows us to derive important properties related to wavelet theory, such as the multiresolution analysis. Based on the multiresolution analysis, it will be shown how the discrete wavelet transform can be formulated and how it can be expressed in terms of a matrix. In later chapters we will see how the discrete wavelet transform can be generalized to two dimensions, and discover how it can be used in image processing.
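The matrix formulation mentioned above is easy to make concrete for the Haar case: one level of the transform on an even-length signal is multiplication by an orthogonal matrix whose rows hold the scaled averaging and differencing filters. A minimal sketch:

    import numpy as np

    def haar_matrix(N):
        # One-level Haar DWT as an orthogonal N x N matrix (N even).
        W = np.zeros((N, N))
        s = 1 / np.sqrt(2)
        for k in range(N // 2):
            W[k, 2 * k:2 * k + 2] = (s, s)            # averaging row
            W[N // 2 + k, 2 * k:2 * k + 2] = (s, -s)  # differencing row
        return W

    x = np.array([4.0, 6.0, 10.0, 12.0])
    W = haar_matrix(4)
    y = W @ x                                # [approx a0, a1, detail d0, d1]
    assert np.allclose(W.T @ W, np.eye(4))   # orthogonality: W.T inverts W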
APA, Harvard, Vancouver, ISO, and other styles
48

Buch, Alok K. "An online strategy for wavelet based analysis of multiscale sensor data." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hsiao, Chiaolong. "Computational bioinformatics on three-dimensional structures of ribosomes using multiresolutional analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26634.

Full text
Abstract:
Thesis (Ph.D)--Chemistry and Biochemistry, Georgia Institute of Technology, 2009.
Committee Chair: Williams, Loren; Committee Member: Doyle, Donald; Committee Member: Harvey, Stephen; Committee Member: Hud, Nicholas; Committee Member: Wartell, Roger. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
50

Chainais, Pierre. "Cascades log-infiniment divisibles et analyse multiresolution. Application à l'étude des intermittences en turbulence." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2001. http://tel.archives-ouvertes.fr/tel-00001584.

Full text
Abstract:
Log-infinitely divisible cascades provide a general framework for the study of the scale-invariance property. We introduce these objects by recounting the historical evolution of the various models proposed to describe the phenomenon of statistical intermittency in turbulence. We then set out a formal definition of log-infinitely divisible cascades. We also replace the increments, customary in turbulence, with the coefficients of a wavelet transform associated with a multiresolution analysis, a tool dedicated to time-scale analysis. A careful examination of the meaning of the formalism leads us to demonstrate its flexibility for modelling, as well as its richness in connection with multiplicative cascades, Markov processes, the Langevin equation, the Fokker-Planck equation, and more. Through the study of compound log-Poisson cascades, we propose an original view of the phenomenon of statistical intermittency. Estimators of the exponents of (possibly relative) scaling laws are then studied, with emphasis on bias correction and the determination of confidence intervals; we apply them to computer network traffic data. We explain why a usual multifractal-spectrum estimation procedure, applied to linear fractional stable motions, may lead to a misinterpretation. Finally, the link between statistical intermittency and spatio-temporal intermittency (coherent structures) in turbulence is studied from velocity and pressure signals recorded jointly in space and time in a turbulent flow. Strong pressure drops associated with filamentary vortices are detected. A statistical analysis of the wavelet coefficients of the velocity, conditioned on these events, allows us to describe the influence of these coherent structures at different Reynolds numbers.
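A minimal sketch of the recipe behind such scaling-exponent estimators: regress the log of the q-th order moment of the wavelet coefficients against the log of the scale, one slope per order q. The thesis studies bias correction and confidence intervals for these estimators; this naive version omits both, and the wavelet and moment orders are illustrative:

    import numpy as np
    import pywt

    def scaling_exponents(signal, qs=(1, 2, 3), wavelet="db3", level=8):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        details = coeffs[1:][::-1]            # finest (j=1) to coarsest (j=level)
        js = np.arange(1, len(details) + 1)   # log2 of the scale
        zetas = {}
        for q in qs:
            log_m = [np.log2(np.mean(np.abs(d) ** q)) for d in details]
            zetas[q] = float(np.polyfit(js, log_m, 1)[0])   # slope = zeta(q)
        return zetas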
APA, Harvard, Vancouver, ISO, and other styles