
Dissertations / Theses on the topic 'WAVELET DOMAIN'



Consult the top 50 dissertations / theses for your research on the topic 'WAVELET DOMAIN.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Souare, Moussa. "Sar Image Analysis In Wavelets Domain." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405014006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Azimifar, Seyedeh-Zohreh. "Image Models for Wavelet Domain Statistics." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/938.

Abstract:
Statistical models for the joint statistics of image pixels are of central importance in many image processing applications. However, the high dimensionality stemming from large problem size and the long-range spatial interactions make statistical image modeling particularly challenging. Commonly this modeling is simplified by a change of basis, most often using a wavelet transform. Indeed, the wavelet transform has widely been used as an approximate whitener of statistical time series. It has, however, long been recognized that wavelet coefficients are neither Gaussian, in terms of the marginal statistics, nor white, in terms of the joint statistics. The question of wavelet joint models is complicated and admits many possibilities, with statistical structures within subbands, across orientations, and across scales. Although a variety of joint models have been proposed and tested, few appear to be directly based on empirical studies of wavelet coefficient cross-statistics. Rather, they are based on intuitive or heuristic notions of wavelet neighborhood structures. Without an examination of the underlying statistics, such heuristic approaches necessarily leave unanswered questions of neighborhood sufficiency and necessity. This thesis presents an empirical study of joint wavelet statistics for textures and other imagery, including dependencies across scale, space, and orientation. There is a growing realization that modeling wavelet coefficients as independent, or at best correlated only across scales, may be a poor assumption. While recent developments in wavelet-domain hidden Markov models (notably HMT-3S) account for within-scale dependencies, we find that wavelet spatial statistics are strongly orientation dependent, a structure which is surprisingly not considered by state-of-the-art wavelet modeling techniques.
To demonstrate the effectiveness of the studied wavelet correlation models, a novel non-linear correlated empirical Bayesian shrinkage algorithm based on the joint wavelet statistics is proposed. In comparison with popular nonlinear shrinkage algorithms, it improves the denoising results.
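As a rough illustration of the baseline that such correlated models improve upon, the sketch below implements classical coefficient-wise soft-threshold wavelet shrinkage with a one-level Haar transform in NumPy. This is the generic (uncorrelated) shrinkage the abstract contrasts against, not the thesis's correlated Bayesian estimator; function names and the threshold choice are illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Shrink each coefficient toward zero by t (classical soft thresholding)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    """Threshold only the detail band; the approximation carries the signal."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

With a zero threshold the orthonormal transform reconstructs the input exactly; increasing the threshold suppresses small, noise-dominated detail coefficients independently of their neighbours, which is precisely the independence assumption the thesis's correlated shrinkage relaxes.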
3

Temizel, Alptekin. "Wavelet domain image resolution enhancement methods." Thesis, University of Surrey, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425928.

4

Zhang, Xudong. "Wavelet-domain hyperspectral soil texture classification." Master's thesis, Mississippi State : Mississippi State University, 2004. http://library.msstate.edu/etd/show.asp?etd=etd-04012004-142420.

5

Chanerley, Andrew A. "Seismic correction in the wavelet domain." Thesis, University of East London, 2014. http://roar.uel.ac.uk/4395/.

Abstract:
This thesis summarises novel approaches and methods in the wavelet domain, employed and published in the literature by the author, for the correction and processing of time-series data from recorded seismic events obtained from strong-motion accelerographs. Historically, the research first developed methods to de-convolve the instrument response from legacy analogue strong-motion instruments, of which there are a large number. This was to make available better estimates of the acceleration ground motion before the more problematic part of the research, that of obtaining ground velocities and displacements. The characteristics of legacy analogue strong-motion instruments are unfortunately in most cases not available, making it difficult to de-couple the instrument response. Essentially this is a system identification problem, presented and summarised herein with solutions that are transparent to this lack of instrument data. This was followed by the more fundamental and problematic part of the research, that of recovering the velocity and displacement from the recorded data. In all cases the instruments are tri-axial, i.e. they record translation only. This is a limiting factor and leads to distortions manifest as dc shifts in the recorded data, a consequence of the instrument pitching, rolling and yawing during seismic events. These distortions are embedded in the translational acceleration time-series, their contributions having been recorded by the same tri-axial sensors. In the literature this is termed 'baseline error', and it effectively prevents meaningful integration to velocity and displacement. Sophisticated methods do exist which recover estimates of velocity and displacement, but these require a good measure of expertise and do not recover all the possible information from the recorded data. A novel, automated wavelet transform method developed by the author and published in the earthquake engineering literature is presented.
This surmounts the problem of obtaining the velocity and displacement and in addition recovers a low-frequency pulse called the 'fling', the displacement 'fling-step', and the form of the baseline error, all inferred in the literature but hitherto never recovered. Once the acceleration fling pulse is recovered, meaningful integration becomes a reality. However, the necessity of developing novel algorithms in order to recover important information emphasises a weakness of modern digital instruments: they are all tri-axial rather than six-axial instruments.
6

Lebed, Evgeniy. "Sparse signal recovery in a transform domain." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/4171.

Abstract:
The ability to efficiently and sparsely represent seismic data is becoming an increasingly important problem in geophysics. Over the last thirty years many transforms, such as wavelets, curvelets, contourlets, surfacelets, shearlets, and many other types of 'x-lets', have been developed. Such transforms were designed to provide sparse representations. In this work we compare the properties of four of these commonly used transforms, namely shift-invariant wavelets, complex wavelets, curvelets and surfacelets. We also explore the performance of these transforms for the problem of recovering seismic wavefields from incomplete measurements.
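The notion of a sparse representation in a transform domain can be made concrete with a toy example: a piecewise-constant signal (a crude stand-in for a layered medium) needs every sample in the original domain but only a handful of nonzero Haar wavelet coefficients. This is a generic illustration, not the curvelet/surfacelet machinery the thesis studies.

```python
import numpy as np

def haar_coeffs(x):
    """Full multi-level orthonormal Haar decomposition of a length-2^k signal."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail band at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # coarser approximation
        coeffs.append(d)
    coeffs.append(a)                             # final DC coefficient
    return np.concatenate(coeffs)

# Piecewise-constant signal: dense in the sample domain,
# extremely sparse in the Haar wavelet domain.
x = np.concatenate([np.full(32, 1.0), np.full(32, 3.0)])
c = haar_coeffs(x)
print(np.count_nonzero(np.abs(x) > 1e-9))   # → 64 nonzero samples
print(np.count_nonzero(np.abs(c) > 1e-9))   # → 2 nonzero wavelet coefficients
```

Only the coefficient straddling the single discontinuity and the DC term survive; recovery from incomplete measurements exploits exactly this kind of compressibility.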
7

LOUREIRO, FELIPE PRADO. "ACOUSTIC MODELING IN THE WAVELET TRANSFORM DOMAIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4915@1.

Abstract:
PETRÓLEO BRASILEIRO S. A.
Seismic signal processing is a key step in oil exploration. The path between data acquisition and seismic interpretation is composed of a sequence of interdependent processes, among which are modeling and migration. A 2D acoustic modeling algorithm in the wavelet transform domain, based on custom tools and on tools already available in the literature, is presented. The approximations necessary for the solution in inhomogeneous media and for complete independence between processing subdomains are established. Such independence allows the exploitation of parallel processing techniques. Through examples, performance is evaluated in comparison with a finite-difference solution. These solutions are further processed by a migration technique based on yet another solution method.
8

Goda, Matthew. "Wavelet domain image restoration and super-resolution." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/289808.

Abstract:
Multi-resolution techniques, and especially the wavelet transform, provide unique benefits in image representation and processing not otherwise possible. While wavelet applications in image compression and denoising have become extremely prevalent, their use in image restoration and super-resolution has not been exploited to the same degree. One issue is the extension of 1-D wavelet transforms into 2-D via separable transforms, versus the non-separability of typical circular-aperture imaging systems. This mismatch leads to performance degradations. Image restoration, the inverse problem to image formation, is the first major focus of this research. A new multi-resolution transform is presented to improve performance. The transform, called a Radially Symmetric Discrete Wavelet-like Transform (RS-DWT), is designed around the non-separable blurring of the typical incoherent circular-aperture imaging system. The results using this transform show marked improvement compared to other restoration algorithms, both in mean square error and in visual appearance. Extensions to the general algorithm that further improve results are discussed. The ability to super-resolve imagery using wavelet-domain techniques is the second major focus of this research. Super-resolution, the ability to reconstruct object information lost in the imaging process, has been an active research area for many years. Multiple experiments are presented which demonstrate the possibilities and problems associated with super-resolution in the wavelet domain. Finally, super-resolution in the wavelet domain using Non-Linear Interpolative Vector Quantization is studied, and the results of the algorithm are presented and discussed.
9

Ngadiran, Ruzelita. "Rate scalable image compression in the wavelet domain." Thesis, University of Newcastle Upon Tyne, 2012. http://hdl.handle.net/10443/1437.

Abstract:
This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit-plane coding. The first part of the thesis investigates the scalar quantisation technique for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of partitioning processes, and such lists involve a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. The listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient. The state is updated during coding based on tests of significance. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes. In this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission. The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images.
A different approach to directional filters in the wavelet transform is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with directional filters. Further investigation with colour images shows that the transform is able to preserve texture and curves.
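The bit-plane significance test at the heart of embedded coders like SPIHT and SPECK can be sketched as follows. Scanning from the most significant bit plane down, a coefficient becomes "significant" at the first plane n with |c| >= 2^n; a listless coder records this state in a small fixed-size marker per coefficient (4 bits in the thesis) instead of in dynamic lists. This is a generic illustration of the significance pass, not the thesis's coder, and it assumes integer coefficients with at least one nonzero value.

```python
import numpy as np

def significance_passes(coeffs):
    """Scan bit plane by bit plane, recording the plane at which each
    coefficient first becomes significant (|c| >= 2^plane); -1 = never."""
    mag = np.abs(np.asarray(coeffs, dtype=int))
    n = int(np.floor(np.log2(mag.max())))       # top bit plane (assumes max > 0)
    state = np.full(mag.shape, -1)              # per-coefficient state marker
    for plane in range(n, -1, -1):
        newly = (state == -1) & (mag >= (1 << plane))
        state[newly] = plane                    # fits in a few bits per pixel
    return state

c = np.array([37, -5, 0, 12])
print(significance_passes(c))   # first-significant plane: 5, 2, none (-1), 3
```

A decoder that stops after plane k has, for every coefficient, either nothing or its top bits down to 2^k, which is what makes the bit stream rate-scalable.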
10

Avaritsioti, Eleni. "Financial time series prediction in the wavelet domain." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502386.

11

Dietze, Martin. "Second generation image watermarking in the wavelet domain." Thesis, University of Buckingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418639.

12

Kandadai, Srivatsan. "OBJECT DETECTION AND LOCALIZATION IN THE WAVELET DOMAIN." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/605573.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
We are interested in the problem of detecting and localizing objects in the compressed domain. The practical uses of this research are video surveillance, queries over digital library archives, and teleconferencing. Most image operations, such as object recognition, are formulated as sequences of operations in the image domain. Such methods need direct access to pixel information as a starting point, but pixel information is not directly available in a compressed image stream. The standards that have emerged for still-image and video compression each contain steps that are commonly found in compression algorithms, like linear transformations, coefficient quantization, run-length coding and entropy coding. Coders like JPEG 2000 and SPIHT are built around the wavelet transform. Thus, as a step toward detection and localization of objects embedded in the compressed bit stream, we consider here the problem of detection and localization in the wavelet domain.
13

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

14

Voulgaris, Georgios. "Techniques for content-based image characterization in wavelets domain." Thesis, University of South Wales, 2008. https://pure.southwales.ac.uk/en/studentthesis/techniques-for-contentbased-image-characterization-in-wavelets-domain(14c72275-a91e-4ba7-ada8-bdaee55de194).html.

Abstract:
This thesis documents the research which has led to the design of a number of techniques aiming to improve the performance of content-based image retrieval (CBIR) systems in the wavelets domain using texture analysis. Attention was drawn to CBIR in the transform domain, and in particular wavelets, because of their excellent characteristics for compression and texture extraction applications and their wide adoption in both the research community and the industry. The issue of performance is addressed in terms of accuracy and speed. The rationale for this research builds upon the conclusion that CBIR has not yet reached a good balance of accuracy, efficiency and speed for wide adoption in practical applications. The issue of bridging the sensory gap, defined as "[the difference] between the object in the real world and the information in a (computational) description derived from a recording of that scene", has yet to be resolved. Furthermore, speed improvement remains an uncharted territory, as is feature extraction directly from the bitstream of compressed images. To address the above requirements, the first part of this work introduces three techniques designed to jointly address the issues of accuracy and processing cost of texture characterization in the wavelets domain. The second part introduces a new model for mapping the wavelet coefficients of an orthogonal wavelet transformation to a circular locus. The model is applied in order to design a novel rotation-invariant texture descriptor. All of the aforementioned techniques are also designed to bridge the gap between texture-based image retrieval and image compression by using appropriate compatible design parameters. The final part introduces three techniques for improving the speed of a CBIR query through more efficient calculation of the L1-distance when it is used as an image similarity metric.
The contributions conclude with a novel technique which, in conjunction with a widely adopted wavelet-based compression algorithm, extracts texture information directly from the compressed bit-stream for speed and storage requirements savings. The experimental findings indicate that the proposed techniques form a solid groundwork which can be extended to practical applications.
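One common way to speed up L1-distance queries, shown below as a minimal sketch, is early abandoning: accumulate the per-bin differences and stop as soon as the partial sum exceeds the best match found so far. This is a generic illustration of the idea of cheapening the L1 computation, not necessarily the specific techniques proposed in the thesis.

```python
import numpy as np

def l1(a, b):
    """Plain L1 (city-block) distance between two feature vectors."""
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

def l1_with_cutoff(a, b, best_so_far):
    """Accumulate |a_i - b_i| and abandon early once the partial sum
    already exceeds the best distance seen so far in the query."""
    total = 0.0
    for ai, bi in zip(a, b):
        total += abs(ai - bi)
        if total > best_so_far:
            return None                 # cannot beat the current best match
    return total
```

In a nearest-neighbour query, `best_so_far` shrinks as better matches are found, so most database entries are rejected after examining only a prefix of their descriptors. (The Python loop is for clarity; a production version would vectorise or chunk the accumulation.)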
15

Khelifi, Fouad. "Image compression and watermarking in the wavelet transform domain." Thesis, Queen's University Belfast, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.486229.

Abstract:
This thesis is concerned with an investigation into novel image compression and watermarking techniques in the wavelet transform domain. Compression refers to the reduction in the representative bit-budget of data with an acceptable distortion. Such an application is widely encountered in our daily life due to the rapid growth of digital multimedia technology, which overwhelms the capacity of the available communication channels. Watermarking is the process of embedding a hidden pattern, called a watermark, that represents the ownership of an authorized user into host data. Watermarking systems are expected to play an important role in meeting at least two major challenges: imperceptibility of the watermark and robustness to intentional and unintentional data manipulations. Driven by the urgent need to protect digital media content that is being broadly distributed and shared through the Internet by an ever-increasing number of users, the field of digital watermarking has witnessed extremely fast-growing development over the last decade. The first part of the thesis addresses the compression of digital images with state-of-the-art wavelet-based coders. Image compression is basically achieved by reducing the statistical redundancy existing within the data. The wavelet transform efficiently reduces such redundancy and exhibits different attractive features exploitable by the most powerful coders available in the literature. In this thesis, we propose efficient extensions of the much-celebrated scalable SPIHT coder to color and multispectral image compression. Also, an efficient SPECK-based lossless coder is proposed for multispectral data. The idea is to adjust the algorithms in order to join each group of two consecutive bands, as they show high similarities and strong correlation. In the second part of this thesis, watermarking in the transform domain is addressed.
Different transforms are considered and a number of the most efficient multiplicative watermark detectors are assessed. We show the advantages and disadvantages of each transform domain for watermarking in a comparative study. Also, an essential enhancement is introduced to the optimum detection rule, which is shown to be more accurate and efficient. An adaptive watermarking technique is also proposed in the wavelet transform domain that exploits the characteristics of the human visual system; it is shown to outperform the conventional wavelet-based technique. Finally, watermarking in the compressed domain is discussed and a new robust watermarking scheme in the SPIHT-compressed bit-stream is presented.
16

Cui, Suxia. "Motion estimation and compensation in the redundant wavelet domain." Diss., Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-07242003-170057.

17

Morand, Claire. "Segmentation spatio-temporelle et indexation vidéo dans le domaine des représentations hiérarchiques." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13888/document.

Abstract:
This thesis aims at proposing a solution for scalable, object-based indexing of HD video streams compressed with Motion JPEG2000. In this context, on the one hand, we work in the hierarchical transform domain of the Daubechies 9/7 wavelets and, on the other hand, the scalable representation calls for multiresolution methods, from low to high resolution. The first part of this manuscript is dedicated to the definition of a method for automatic extraction of objects having their own motion. It is based on a combination of robust global motion estimation with morphological color segmentation at low resolution. The obtained result is then refined following the data order of the scalable stream. The second part defines a descriptor for the previously extracted objects, based on the multiresolution histograms of the wavelet coefficients. Finally, the performance of the proposed indexing method is evaluated in the context of scalable content-based video retrieval queries.
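The idea of a multiresolution histogram descriptor can be sketched as follows: histogram the wavelet detail coefficients at each decomposition level and concatenate the per-level histograms into one feature vector. This toy version uses a 1-D Haar transform and illustrative bin settings; the thesis's descriptor operates on the Daubechies 9/7 subbands of extracted video objects.

```python
import numpy as np

def wavelet_histogram_descriptor(x, bins=8):
    """Concatenate per-level, normalised histograms of Haar detail
    coefficients into a single multiresolution descriptor vector."""
    a = np.asarray(x, dtype=float)
    desc = []
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # details at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # coarser approximation
        h, _ = np.histogram(d, bins=bins, range=(-2.0, 2.0))
        desc.append(h / max(d.size, 1))          # normalise by level size
    return np.concatenate(desc)
```

Because each level is normalised separately, the descriptor can be compared between objects of different sizes, and truncating it to the first few levels matches a low-resolution (scalable) query against a partially decoded stream.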
18

Fujii, Masafumi. "A time-domain Haar-wavelet-based multiresolution technique for electromagnetic field analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/NQ40454.pdf.

19

Kane, Jonathan A. (Jonathan Andrew) 1973. "Wavelet domain inversion and joint deconvolution/interpolation of geophysical data." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/59651.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2003.
Includes bibliographical references (leaves 168-174).
This thesis presents two innovations to geophysical inversion. The first provides a framework and an algorithm for combining linear deconvolution methods with geostatistical interpolation techniques. This allows sparsely sampled data to aid in image deblurring problems, or, conversely, noisy and blurred data to aid in sample interpolation. In order to overcome difficulties arising from high dimensionality, the solution must be derived in the correct framework and the structure of the problem must be exploited by an iterative solution algorithm. The effectiveness of the method is demonstrated first on a synthetic problem involving satellite remotely sensed data, and then on a real 3-D seismic data set combined with well logs. The second innovation addresses how to use wavelets in a linear geophysical inverse problem. Wavelets have led to great successes in image compression and denoising, so it is interesting to see what, if anything, they can do for a general linear inverse problem. It is shown that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inverse methods in terms of mean-square error across a broad range of noise magnitudes in the data. Wavelets allow for an adaptively smoothed solution: smoothed more in uninteresting regions, less at geologically important transitions.
A third issue is also addressed, somewhat separate from the first two: the correct manipulation of discrete geophysical data. The theory of fractional splines is introduced, which allows for optimal approximation of real signals on a digital computer. Using splines, it can be shown that a linear operation on the spline can be equivalently represented by a matrix operating on the coefficients of a certain spline basis function. The form of the matrix, however, depends completely on the spline basis, and incorrect discretization of the operator into a matrix can lead to large errors in the resulting matrix/vector product.
20

Werner, Manuel. "Adaptive wavelet frame domain decomposition methods for elliptic operator equations /." Berlin : Logos-Verl, 2009. http://d-nb.info/99721984X/04.

21

Angelillo, Andrea <1983>. "Three-Component Seismic Array Analysis in the Discrete Wavelet Domain." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/8053/1/Angelillo_Andrea_tesi.pdf.

Abstract:
The purpose of this PhD thesis was the development of an innovative methodology for seismic array data analysis, named DWT-MuSiC (Discrete Wavelet Transform - Multiple Signals Classification). DWT-MuSiC is a new method intended to perform near-real-time analysis for the detection of different seismic wavefields and their characterization, starting from raw seismic array data. The innovative point of DWT-MuSiC is that it combines the resolution of the MuSiC (Multiple Signals Classification) algorithm, a methodology for frequency estimation and source location proposed by Schmidt (1986), with the potential of analysis in the discrete wavelet domain. In addition to detecting the presence of different wavefronts, DWT-MuSiC provides both their direction of arrival and their apparent speed of advancement, returning information about the polarization of each identified phase while preserving the spatial and frequency information of the original signals. Synthetic tests have shown the ability of the algorithm to detect weak signals even in the presence of a low signal-to-noise ratio. The method returned correct output also in those cases where the analysis is performed on multiple seismic phases that overlap in impinging time and/or in the frequency content of the signals themselves. Comparison with other methodologies, such as beamforming and MuSiC applied in the Fourier domain, showed that the information returned by DWT-MuSiC is much more complete. The use of discrete wavelets and the optimization algorithm implemented in the analysis moreover saves computational time by analyzing only the important data, allowing the recovery of hidden information when more than one wavefront overlaps.
The application to real cases, with data collected at Mount Vesuvius (Italy) and at the Krafla caldera (Iceland), was important to test the applicability of the methodology in different contexts.
22

Noutsos, Apostolos. "Long memory in volatility, modelling and forecasting in the wavelet domain." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/8800.

Abstract:
This thesis examines the long memory property of financial time series, specifically returns and volatility. In our analysis we use wavelets and their decorrelating property to model long memory processes. For returns, it has been observed that the long memory parameter is very close to 0. Based on improvements we have proposed for a wavelet estimation method, and on Monte Carlo tests of the power of several semi-parametric methods in distinguishing long memory, we re-examine the long memory property of returns of several financial assets. For volatility, a Bayesian estimation method in the wavelet domain is proposed for long memory stochastic volatility models with shorter-memory dynamics as well, in the form of autoregressive or moving average parameters. Although estimation of these types of models can be unstable, we apply them to data describing various financial assets, where we find that the dynamics involved might be more complicated than expected, since both long and short memory parameters are found to be statistically significant. Finally, we compare, on the basis of density forecasts and for the first time, the long memory stochastic volatility model with GARCH-type models for data describing a range of financial assets.
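The connection between wavelets and long memory estimation can be sketched with the classical log-variance regression: for a long-memory process with parameter d, the Haar detail variance grows roughly like 2^(2jd) across scales j, so half the fitted slope of log2-variance against scale estimates d, and white noise should give d near 0. This is a generic semi-parametric estimator under that assumed scaling law, not the Bayesian wavelet-domain method the thesis proposes.

```python
import numpy as np

def wavelet_log_variance(x, levels=5):
    """log2 of the sample variance of Haar detail coefficients per scale."""
    a = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        out.append(np.log2(np.var(d)))
    return np.array(out)

def memory_parameter(x, levels=5):
    """Estimate d from the slope of log2-variance against scale j,
    using the approximate scaling var_j ~ 2^(2*j*d)."""
    j = np.arange(1, levels + 1)
    slope = np.polyfit(j, wavelet_log_variance(x, levels), 1)[0]
    return slope / 2.0
```

Because the orthonormal Haar transform approximately decorrelates long-memory series, the per-scale variances can be treated as nearly independent, which is exactly the decorrelating property the abstract appeals to.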
23

Gale, Timothy Edward. "Improved detection and quantisation of keypoints in the complex wavelet domain." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277713.

Abstract:
An algorithm which is able to consistently identify features in an image is a basic building block of many object recognition systems. Attaining sufficient consistency is challenging, because factors such as pose and lighting can dramatically change a feature’s appearance. Effective feature identification therefore requires both a reliable and accurate keypoint detector and a discriminative categoriser (or quantiser). The Dual Tree Complex Wavelet Transform (DTCWT) decomposes an image into oriented subbands at a range of scales. The resulting domain is arguably well suited for further image analysis tasks such as feature identification. This thesis develops feature identification in the complex wavelet domain, building on previous keypoint detection work and exploring the use of random forests for descriptor quantisation. Firstly, we extended earlier work on keypoint detection energy functions. Existing complex wavelet based detectors were observed to suffer from two defects: a tendency to produce keypoints on straight edges at particular orientations and sensitivity to small translations of the image. We introduced a new corner energy function based on the Same Level Product (SLP) transform. This function performed well compared to previous ones, combining competitive edge rejection and positional stability properties. Secondly, we investigated the effect of changing the resolution at which the energy function is sampled. We used the undecimated DTCWT to calculate energy maps at the same resolution as the original images. This revealed the presence of fine details which could not be accurately interpolated from an energy map at the standard resolution. As a result, doubling the resolution of the map along each axis significantly improved both the reliability and posi-tional accuracy of detections. However, calculating the map using interpolated coefficients resulted in artefacts introduced by inaccuracies in the interpolation. 
We therefore proposed a modification to the standard DTCWT structure which doubles its output resolution for a modest computational cost. Thirdly, we developed a random forest based quantiser which operates on complex wavelet polar matching descriptors, with optional rotational invariance. Trees were evaluated on the basis of how consistently they quantised features into the same bins, and several examples of each feature were obtained by means of tracking. We found that the trees produced the most consistent quantisations when they were trained with a second set of tracked keypoints. Detecting keypoints using the higher resolution energy maps also resulted in more consistent quantiser outputs, indicating the importance of the choice of detector on quantiser performance. Finally, we introduced a fast implementation of the DTCWT, keypoint detection and descriptor extraction algorithms for OpenCL-capable GPUs. Several aspects were optimised to enable it to run more efficiently on modern hardware, allowing it to process HD footage faster than real time. This particularly aided the development of the detector algorithms by permitting interactive exploration of their failure modes using a live camera feed.
APA, Harvard, Vancouver, ISO, and other styles
24

Bajic, Vladan. "DESIGN AND IMPLEMENTATION OF AN ADAPTIVE NOISE CANCELING SYSTEM IN WAVELET TRANSFORM DOMAIN." University of Akron / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=akron1132784671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. https://etd.iisc.ac.in/handle/2005/3914.

Full text
Abstract:
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing of MPEG compressed audio. The main contributions of this thesis are a) Extracting wavelet coefficients in the MPEG compressed domain. b) Wavelet based pitch extraction in the MPEG compressed domain. c) Time scale modifications of MPEG audio. d) Watermarking of MPEG audio. The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexities for extracting several levels of wavelet coefficients after decoding the compressed signal and directly from the output of the MPEG analysis filter bank are compared. The proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed. For example, one may be interested in listening to a particular speaker or to male/female audio segments in a multimedia document. For this application, pitch information is one of the most basic and important features required. Pitch is basically the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients. 
Pitch can be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed domain processing. An algorithm for extracting pitch in the compressed domain is proposed. The results of this algorithm for synthetic signals and for words uttered by male/female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than its original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time scale modifications are obtained in the subband domain such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done by making use of sinusoidal modeling [1]. Here, each of the subband signals is modeled in terms of parameters such as amplitude, phase and frequency, and is subsequently synthesised by using these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor and La is the length of the analysis window. As the PCM version of the time scaled signal is not available, psychoacoustic model based bit allocation cannot be used. Hence a new bit allocation is done by using a subband coding algorithm. This method has been satisfactorily tested for time scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media. Digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. 
However, the idea behind lossy MPEG encoders is to remove or make insignificant those portions of the signal which do not affect human hearing. This renders the watermark insignificant, and hence proving ownership of the signal becomes difficult when an audio signal is compressed. The existing compressed domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks. Further, these methods require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on a spread spectrum technique which does not require the original signal during the verification process. It is also shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Here two factors need to be considered: a) the watermark is to be embedded only in those subbands which will make the addition of the noise inaudible; b) the watermark should be added to those subbands which have sufficient bit allocation, so that the watermark does not become insignificant due to lack of bit allocation. Embedding the watermark in the lower subbands would cause distortion, and in the higher subbands it would prove futile as the bit allocation in these subbands is practically zero. Considering all these factors, one can introduce noise to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. This method has been satisfactorily tested for robustness to scale factor change, LSB change and MPEG decoding and re-encoding.
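The pitch idea above — glottal closures produce sharp transients, transients produce local maxima in the wavelet coefficients, and the spacing between maxima gives the pitch period — can be illustrated with a minimal numpy sketch. This is a plain one-level Haar detail on a synthetic pulse train, not the thesis's MPEG-subband derivation; the sample rate, pitch value and peak threshold are all assumptions for illustration.

```python
import numpy as np

fs = 8000                      # assumed sample rate (Hz)
f0 = 100                       # assumed true pitch (Hz)
period = fs // f0

# Synthetic glottal-pulse train: one impulse every pitch period.
x = np.zeros(fs)
x[::period] = 1.0

# One level of Haar detail coefficients (a stand-in for the thesis's
# compressed-domain wavelet extraction): sharp transients appear as
# isolated local maxima.
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)

# Peaks in |d| mark glottal closures; pitch = fs / (2 * peak spacing),
# the factor 2 because one decimation level halves the sampling rate.
peaks = np.where(np.abs(d) > 0.5)[0]
spacing = np.median(np.diff(peaks))
pitch_hz = fs / (2.0 * spacing)
```

On this idealised pulse train the maxima spacing recovers the 100 Hz pitch exactly; real speech would need the band-limited wavelet levels and more careful peak-picking described in the thesis.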
APA, Harvard, Vancouver, ISO, and other styles
26

Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. http://hdl.handle.net/2005/68.

Full text
Abstract:
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing of MPEG compressed audio. The main contributions of this thesis are a) Extracting wavelet coefficients in the MPEG compressed domain. b) Wavelet based pitch extraction in the MPEG compressed domain. c) Time scale modifications of MPEG audio. d) Watermarking of MPEG audio. The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexities for extracting several levels of wavelet coefficients after decoding the compressed signal and directly from the output of the MPEG analysis filter bank are compared. The proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed. For example, one may be interested in listening to a particular speaker or to male/female audio segments in a multimedia document. For this application, pitch information is one of the most basic and important features required. Pitch is basically the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients. 
Pitch can be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed domain processing. An algorithm for extracting pitch in the compressed domain is proposed. The results of this algorithm for synthetic signals and for words uttered by male/female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than its original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time scale modifications are obtained in the subband domain such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done by making use of sinusoidal modeling [1]. Here, each of the subband signals is modeled in terms of parameters such as amplitude, phase and frequency, and is subsequently synthesised by using these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor and La is the length of the analysis window. As the PCM version of the time scaled signal is not available, psychoacoustic model based bit allocation cannot be used. Hence a new bit allocation is done by using a subband coding algorithm. This method has been satisfactorily tested for time scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media. Digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. 
However, the idea behind lossy MPEG encoders is to remove or make insignificant those portions of the signal which do not affect human hearing. This renders the watermark insignificant, and hence proving ownership of the signal becomes difficult when an audio signal is compressed. The existing compressed domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks. Further, these methods require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on a spread spectrum technique which does not require the original signal during the verification process. It is also shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Here two factors need to be considered: a) the watermark is to be embedded only in those subbands which will make the addition of the noise inaudible; b) the watermark should be added to those subbands which have sufficient bit allocation, so that the watermark does not become insignificant due to lack of bit allocation. Embedding the watermark in the lower subbands would cause distortion, and in the higher subbands it would prove futile as the bit allocation in these subbands is practically zero. Considering all these factors, one can introduce noise to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. This method has been satisfactorily tested for robustness to scale factor change, LSB change and MPEG decoding and re-encoding.
APA, Harvard, Vancouver, ISO, and other styles
27

Hammarqvist, Ulf. "Audio editing in the time-frequency domain using the Gabor Wavelet Transform." Thesis, Uppsala universitet, Centrum för bildanalys, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-153634.

Full text
Abstract:
Visualization, processing and editing of audio, directly on a time-frequency surface, is the scope of this thesis. More precisely, the scalogram produced by a Gabor wavelet transform is used, which is a powerful alternative to traditional techniques where the waveform is the main visual aid and editing is performed by parametric filters. Reconstruction properties, scalogram design and enhancements, as well as audio manipulation algorithms, are investigated for this audio representation. The scalogram is designed to allow a flexible choice of time-frequency ratio while maintaining high-quality reconstruction. For this purpose, the Loglet is used, which is observed to be the most suitable filter choice. Re-assignment is tested, and a novel weighting function using partial derivatives of phase is proposed. An audio interpolation procedure is developed and shown to perform well in listening tests. The feasibility of using the transform coefficients directly for various purposes is investigated. It is concluded that pitch shifts are hard to describe in the framework, while noise thresholding works well. A downsampling scheme is suggested that saves on operations and memory consumption and speeds up real-world implementations significantly. Finally, a scalogram 'compression' procedure is developed, allowing the caching of an approximate scalogram.
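A scalogram of the kind described — magnitudes of a bank of Gabor atoms at several centre frequencies — can be sketched in a few lines of numpy. This is a generic Gabor filter bank, not the Loglet design the thesis settles on; the sample rate, test signal and six-cycle envelope width are assumptions for illustration.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
# Toy audio: a 50 Hz tone switching to 200 Hz halfway through.
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

def gabor_row(x, f0, fs, sigma_cycles=6.0):
    """One scalogram row: |x * gabor atom| at centre frequency f0."""
    sigma = sigma_cycles / (2 * np.pi * f0)           # envelope width (s)
    tt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    atom = np.exp(-tt**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * tt)
    atom /= np.abs(atom).sum()                        # unit l1 gain
    return np.abs(np.convolve(x, atom, mode="same"))

freqs = [50, 100, 200]
scalogram = np.array([gabor_row(x, f, fs) for f in freqs])

# The 50 Hz row should dominate early, the 200 Hz row late.
early = scalogram[:, 100:400].mean(axis=1)
late = scalogram[:, 600:900].mean(axis=1)
```

The energy concentrates in the 50 Hz row before the switch and in the 200 Hz row after it, which is exactly the time-frequency localisation that makes editing directly on such a surface attractive.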
APA, Harvard, Vancouver, ISO, and other styles
28

Benhmad, François. "La WVaR (Wavelet Value at Risk) : une analyse temps-fréquence de la VaR du CAC40." Thesis, Montpellier 1, 2010. http://www.theses.fr/2010MON10024.

Full text
Abstract:
Malgré la multiplicité des méthodes d'estimation de la VaR, elles souffrent d'une faiblesse fondamentale. En effet, elles ne font aucune distinction entre l'information captée à basse fréquence et celle captée à haute fréquence. Ce qui revient à supposer de façon implicite que l'information contenue dans les données historiques a la même importance quel que soit l'horizon temporel de l'investisseur c'est-à-dire sa fréquence de trading (intra-journalière, journalière, hebdomadaire, mensuelle, ...). Mais, accepter une telle hypothèse revient à supposer que les marchés financiers sont homogènes. Ce qui est contraire à la réalité empirique. En effet, les marchés financiers sont caractérisés par une grande hétérogénéité d'acteurs. L'objet de notre thèse est d'apporter une contribution à l'estimation de la VaR basée sur la décomposition de la volatilité dans le domaine des fréquences. Ce qui nous permet de mettre en évidence l'influence de l'hétérogénéité des horizons temporels des acteurs des marchés financiers sur l'estimation de la Value at Risk. Pour cela, nous faisons appel à un outil statistique susceptible de nous procurer de l'information temporelle sur la volatilité et de l'information fréquentielle sur la fréquence de trading des différents acteurs des marchés financiers : l'approche temps-fréquence de la transformée en ondelettes
Despite the multiplicity of VaR estimation approaches, they suffer from a fundamental weakness: they make no distinction between information captured at high frequency and at low frequency. This amounts to an implicit assumption that financial markets are homogeneous, in contrast to empirical facts. In our thesis, we try to construct a VaR model based on volatility decomposition in the frequency domain. It enables us to show how the heterogeneity of the time horizons of financial market participants can influence Value at Risk estimates. We use a statistical tool able to give us temporal information about volatility and frequency information about the trading frequencies of market participants: the time-frequency approach of the wavelet transform
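The mechanism this thesis builds on — decomposing return volatility across frequencies so that each trading horizon gets its own risk contribution — rests on the variance (energy) preservation of an orthonormal wavelet transform. A toy numpy sketch with a Haar cascade on simulated returns (all figures are hypothetical, not CAC40 data, and the Gaussian quantile VaR is only the simplest possible formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 1024)     # toy daily returns, ~1% volatility

def haar_step(x):
    """One orthonormal Haar level: (approximation, detail)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

# Three levels: details carry high-frequency (short-horizon) variation,
# the final approximation the low-frequency (long-horizon) component.
approx, details = returns, []
for _ in range(3):
    approx, d = haar_step(approx)
    details.append(d)

# Orthonormality => total energy (hence variance) splits exactly across
# scales, so a per-scale volatility and a per-scale Gaussian VaR follow.
energy = sum(float(np.sum(d**2)) for d in details) + float(np.sum(approx**2))
var_99 = 2.326 * np.std(returns)        # single-scale 99% Gaussian VaR
```

Because the decomposition is exact, the scale-wise variances can be recombined or used separately, which is what allows a VaR tailored to intraday, daily or weekly horizons.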
APA, Harvard, Vancouver, ISO, and other styles
29

Kim, Il-Ryeol. "Wavelet domain partition-based signal processing with applications to image denoising and compression." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 2.98 Mb., 119 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3221054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ng, Edmund. "Speckle Noise Reduction via Homomorphic Elliptical Threshold Rotations in the Complex Wavelet Domain." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/812.

Full text
Abstract:
Many clinicians regard speckle noise as an undesirable artifact in ultrasound images masking the underlying pathology within a patient. Speckle noise is a random interference pattern formed by coherent radiation in a medium containing many sub-resolution scatterers. Speckle has a negative impact on ultrasound images as the texture does not reflect the local echogenicity of the underlying scatterers. Studies have shown that the presence of speckle noise can reduce a physician's ability to detect lesions by a factor of eight. Without speckle, small high-contrast targets, low contrast objects, and image texture can be deduced quite readily.

Speckle filtering of medical ultrasound images represents a critical pre-processing step, providing clinicians with enhanced diagnostic ability. Efficient speckle noise removal algorithms may also find applications in real time surgical guidance assemblies. However, it is vital that regions of interests are not compromised during speckle removal. This research pertains to the reduction of speckle noise in ultrasound images while attempting to retain clinical regions of interest.

Recently, the advance of wavelet theory has led to many applications in noise reduction and compression. Upon investigation of these two divergent fields, it was found that speckle noise tends to rotate an image's homomorphic complex-wavelet coefficients. This work proposes a new speckle reduction filter involving a counter-rotation of these complex-wavelet coefficients to mitigate the presence of speckle noise. Simulations suggest the proposed denoising technique offers superior visual quality, though its signal-to-mean-square-error ratio (S/MSE) is numerically comparable to adaptive Frost and Kuan filtering.

This research improves the quality of ultrasound medical images, leading to improved diagnosis for one of the most popular and cost effective imaging modalities used in clinical medicine.
APA, Harvard, Vancouver, ISO, and other styles
31

Cai, Weiting. "High performance shift invariant motion estimation and compensation in wavelet domain video compression." FIU Digital Commons, 2003. http://digitalcommons.fiu.edu/etd/1965.

Full text
Abstract:
The contributions of this dissertation are in the development of two new interrelated approaches to video data compression: 1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; 2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform. The enormous data generated by digital videos create an intense need for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reasonably, hierarchical motion estimation by coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalable nature.
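The coarse-to-fine hierarchical search mentioned above can be sketched with numpy block matching: estimate motion on a half-resolution pair, then refine the doubled estimate with a small search at full resolution. Plain 2x2 averaging stands in for the DWT approximation band here, and the synthetic global shift is an assumption for illustration, not the dissertation's level-refined scheme.

```python
import numpy as np

def downsample(img):
    """2x2 averaging, a crude stand-in for the DWT approximation band."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def best_shift(ref, cur, center, radius):
    """Exhaustive SAD search for the shift mapping ref onto cur."""
    best, best_sad = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            sad = np.abs(np.roll(ref, (dy, dx), axis=(0, 1)) - cur).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
true = (6, 4)                                  # assumed global motion
cur = np.roll(ref, true, axis=(0, 1))

# Coarse search at half resolution, then a +/-1 refinement at full
# resolution around the doubled coarse estimate.
coarse = best_shift(downsample(ref), downsample(cur), (0, 0), 4)
motion = best_shift(ref, cur, (2 * coarse[0], 2 * coarse[1]), 1)
```

The coarse stage evaluates 81 candidates on quarter-size images and the refinement only 9 at full resolution, instead of an exhaustive full-resolution search over the whole displacement range.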
APA, Harvard, Vancouver, ISO, and other styles
32

Bushyager, Nathan Adam. "Novel adaptive time-domain techniques for the modeling and design of complex RF and wireless structures." Diss., Available online, Georgia Institute of Technology, 2005, 2004. http://etd.gatech.edu/theses/available/etd-11182004-145238/.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005.
Tentzeris, Manos, Committee Chair ; Laskar, Joy, Committee Member ; Peterson, Andrew, Committee Member ; Papapolymerou, Ioannis, Committee Member ; Sotiropoulos, Fotis, Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
33

Long, Zhiling. "Statistical image modeling in the contourlet domain with application to texture segmentation." Diss., Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-11082007-161335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Boileau, Luc Carleton University Dissertation Engineering Systems and Computer. "Image compression using multilevel block truncation coding and pattern matching in the wavelet domain." Ottawa, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
35

Fu, Ming Fai. "Motion estimation and compensation in wavelet domain and fast global motion estimation for video coding /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20FU.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 98-102). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Xiuyun. "Optimization of the assessment of cerebral autoregulation in neurocritical care unit." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/265164.

Full text
Abstract:
Introduction Cerebral autoregulation (CA) refers to the physiological mechanisms in the brain that maintain constant blood flow despite changes in cerebral perfusion pressure (CPP). It plays an important protective role against the danger of ischaemia or oedema of the brain. Over the years, various methods for CA assessment have been proposed; the most commonly used parameters include the autoregulation index (ARI), which grades CA into ten levels; transfer function (TF) analysis, describing CA as a high pass filter; the mean flow index (Mx), which estimates CA through the correlation coefficient between slow waves of mean cerebral blood flow velocity (CBFV) and CPP; and the pressure reactivity index (PRx), calculated as a moving correlation coefficient between mean arterial blood pressure (ABP) and intracranial pressure (ICP). However, until now, how these parameters are related to each other has not been clear. A comprehensive investigation of the relationship between all these parameters is therefore needed. In addition, the methods mentioned above mostly assume that the system being analysed is linear and the signals are stationary; given the non-stationary character of CA, a more robust method, suitable in particular for non-stationary signal analysis, needs to be explored. Objectives and Methods This thesis addresses three primary questions: 1. What are the relationships between currently widely used CA parameters, i.e. Mx, ARI and TF parameters, from a theoretical and practical point of view? 2. Is there an effective method that can be introduced to assess CA which is suitable for analyses of non-stationary signals? 3. How can bedside monitoring of cerebral autoregulation be improved in traumatic brain injury patients? These general aims have been translated into a series of experiments, retrospective analyses and background studies that are presented in different chapters of this thesis. 
Results and Conclusions This PhD project carefully scrutinised currently used CA assessment methodologies in TBI patients, demonstrating significant relationships between ARI, Mx and TF phase. A newly introduced wavelet-transform-based method, wPRx, was validated and showed more stable results for CA assessment than the well-established parameter PRx. A multi-window approach with a weighting system for optimal CPP estimation was described. The results showed a significant improvement in the continuity and stability of CPPopt estimation, making it possible to apply it in the future clinical management of TBI patients.
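PRx, as described above, is simply a moving Pearson correlation between slow waves of ABP and ICP. A minimal numpy sketch on simulated data (the window length, units and the passive ABP-to-ICP coupling are assumptions for illustration, not the thesis's patient data or its wPRx method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated slow-wave (e.g. 10-second averaged) ABP and ICP samples.
n = 300
abp = 80 + rng.normal(0, 5, n)                 # mmHg, hypothetical
# Impaired autoregulation: ICP passively follows ABP (positive PRx).
icp = 10 + 0.4 * (abp - 80) + rng.normal(0, 1, n)

def moving_corr(x, y, win=30):
    """PRx-style moving Pearson correlation over win-sample windows."""
    out = np.full(len(x), np.nan)
    for i in range(win, len(x) + 1):
        out[i - 1] = np.corrcoef(x[i - win:i], y[i - win:i])[0, 1]
    return out

prx = moving_corr(abp, icp)
mean_prx = np.nanmean(prx)
```

A mean PRx near +1 flags pressure-passive (impaired) autoregulation, while values near zero or negative suggest intact autoregulation; the thesis's wPRx replaces this time-domain correlation with a wavelet-transform-based measure.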
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Jing. "Digital Signal Characterization for Seizure Detection Using Frequency Domain Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296861.

Full text
Abstract:
Nowadays, a significant proportion of the population in the world is affected by cerebral diseases like epilepsy. In this study, frequency domain features of electroencephalography (EEG) signals were studied and analyzed, with a view to detecting epileptic seizures more easily. The power spectrum and spectrogram were determined by using the fast Fourier transform (FFT), and the scalogram was found by performing a continuous wavelet transform (CWT) on the test EEG signal. In addition, two schemes, i.e. method 1 and method 2, were implemented for detecting epileptic seizures, and the applicability of the two methods to electrocardiogram (ECG) signals was tested. A third method for anomaly detection in ECG signals was tested.
En signifikant del av population påverkas idag av neurala sjukdomar som epilepsi. I denna studie studerades och analyserades egenskaper inom frekvensdomänen av elektroencefalografi (EEG), med sikte på att lättare kunna upptäcka epileptiska anfall. Effektspektrumet och spektrogramet bestämdes med hjälp av en snabb fouriertransform och skalogrammet hittades genom att genomföra en kontinuerlig wavelet transform (CWT) på testsignalen från EEGsignalen. I addition till detta skapades två system, metod 1 och metod 2, som implementerades för att upptäcka epileptiska anfall. Användbarheten av dessa två metoder inom elektrokardiogramsignaler (ECG) testades. En tredje metod för anomalidetektering i ECGsignaler testades.
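The frequency-domain features mentioned above (power spectrum via FFT) can be sketched with numpy on a toy signal. The sampling rate and the strong 3 Hz component standing in for spike-wave activity are assumptions for illustration, not the thesis's EEG data or its actual detection methods 1 and 2.

```python
import numpy as np

fs = 256                                 # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)

# Toy EEG: 10 Hz alpha background plus a stronger 3 Hz component,
# a crude stand-in for ictal spike-wave activity.
eeg = np.sin(2 * np.pi * 10 * t) + 3 * np.sin(2 * np.pi * 3 * t)

# One-sided power spectrum via the FFT.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

dominant_hz = freqs[np.argmax(spectrum)]
```

Here the dominant spectral peak lands on the 3 Hz component; a detector can threshold such band power over sliding windows to obtain a spectrogram-based decision.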
APA, Harvard, Vancouver, ISO, and other styles
38

Van, Huyssteen Rudolph Hendrik. "Comparative evaluation of video watermarking techniques in the uncompressed domain." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71842.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Electronic watermarking is a method whereby information can be imperceptibly embedded into electronic media, while ideally being robust against common signal manipulations and intentional attacks to remove the embedded watermark. This study evaluates the characteristics of uncompressed video watermarking techniques in terms of visual characteristics, computational complexity and robustness against attacks and signal manipulations. The foundations of video watermarking are reviewed, followed by a survey of existing video watermarking techniques. Representative techniques from different watermarking categories are identified, implemented and evaluated. Existing image quality metrics are reviewed and extended to improve their performance when comparing these video watermarking techniques. A new metric for the evaluation of inter frame flicker in video sequences is then developed. A technique for possibly improving the robustness of the implemented discrete Fourier transform technique against rotation is then proposed. It is also shown that it is possible to reduce the computational complexity of watermarking techniques without affecting the quality of the original content, through a modified watermark embedding method. Possible future studies are then recommended with regard to further improving watermarking techniques against rotation.
AFRIKAANSE OPSOMMING: ’n Elektroniese watermerk is ’n metode waardeur inligting onmerkbaar in elektroniese media vasgelê kan word, met die doel dat dit bestand is teen algemene manipulasies en doelbewuste pogings om die watermerk te verwyder. In hierdie navorsing word die eienskappe van onsaamgeperste video watermerktegnieke ondersoek in terme van visuele eienskappe, berekeningskompleksiteit en weerstandigheid teen aanslae en seinmanipulasies. Die onderbou van video watermerktegnieke word bestudeer, gevolg deur ’n oorsig van reedsbestaande watermerktegnieke. Verteenwoordigende tegnieke vanuit verskillende watermerkkategorieë word geïdentifiseer, geïmplementeer en geëvalueer. Bestaande metodes vir die evaluering van beeldkwaliteite word bestudeer en uitgebrei om die werkverrigting van die tegnieke te verbeter, spesifiek vir die vergelyking van watermerktegnieke. ’n Nuwe stelsel vir die evaluering van tussenraampie flikkering in video’s word ook ontwikkel. ’n Tegniek vir die moontlike verbetering van die geïmplementeerde diskrete Fourier transform tegniek word voorgestel om die tegniek se bestandheid teen rotasie te verbeter. Daar word ook aangetoon dat dit moontlik is om die berekeningskompleksiteit van watermerktegnieke te verminder, sonder om die kwaliteit van die oorspronklike inhoud te beïnvloed, deur die gebruik van ’n verbeterde watermerkvasleggingsmetode. Laastens word aanbevelings vir verdere navorsing aangaande die verbetering van watermerktegnieke teen rotasie gemaak.
APA, Harvard, Vancouver, ISO, and other styles
39

Deighan, Andrew J. "Applications of wavelet transforms to the suppression of coherent noise from seismic data in the pre-stack domain." Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rahmani, Maryam. "On the calculation of time-domain impulse-response of systems from band-limited scattering-parameters using wavelet transform." Thesis, Mississippi State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10270053.

Full text
Abstract:

In the context of electric-ship grounding, the time-domain behavior of the ship hull is needed. The grounding scheme impacts the nature of voltage transients during switching events and faults, the identifiability and locatability of ground faults, fault current levels, and power quality. Due to the large size of ships compared with the wavelengths of the desired signals, time-domain measurement or simulation is a time-consuming process. Therefore, it is preferred that the behavior be studied in the frequency domain. In the frequency domain one can break down the whole ship hull into small blocks, find the frequency behavior of each block (scattering parameters) in a short time, and then connect these blocks to find the scattering parameters of the whole ship hull. These scattering parameters should then be transformed to the time domain. The problem with this process is that the measured frequency-domain data (or the simulated data) is band-limited, so, while calculating time-domain solutions, the missing DC and low-frequency content causes the time-domain response to encounter causality, passivity and time-delay problems. Despite the availability of several software and simulation packages that convert frequency-domain information to the time domain, all are known to suffer from the above-mentioned problems. This dissertation provides a solution for computing the time-domain impulse response of a system by using its measured or simulated scattering parameters. In this regard, a novel wavelet computational approach is introduced.
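The conversion the abstract describes — band-limited scattering parameters to a time-domain impulse response — reduces, in the ideal case, to an inverse real FFT of the one-sided spectrum. A numpy sketch with a hypothetical single-reflection S11 (the frequency grid, delay and gain are assumptions; note the DC point is included here, whereas its absence in measured data is precisely what causes the causality and passivity problems the dissertation addresses):

```python
import numpy as np

# Hypothetical band-limited S11: an ideal reflection with 0.5 gain and
# 2 ns delay, sampled on a 0..5 GHz grid that includes the DC point.
freq = np.linspace(0, 5e9, 501)
delay, gain = 2e-9, 0.5
s11 = gain * np.exp(-2j * np.pi * freq * delay)

# Real impulse response from the one-sided spectrum; irfft enforces the
# Hermitian symmetry S(-f) = conj(S(f)) required for a real h(t).
h = np.fft.irfft(s11)
dt = 1.0 / (len(h) * (freq[1] - freq[0]))      # time step = 1 / (N * df)
t_peak = np.argmax(np.abs(h)) * dt             # should recover the delay
```

With the DC and low-frequency bins present, the impulse response is a clean pulse at the true delay; truncating them smears the pulse and can break causality, which is what motivates the wavelet-based computation.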

APA, Harvard, Vancouver, ISO, and other styles
41

Žitkevičius, Evaras. "Analysis of medical images in frequency and space-frequency domains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20071204_102356-91944.

Full text
Abstract:
CT and MRI images of the human head are grayscale images, usually analyzed by trained diagnostic specialists (radiologists) either on a display or on transparency film. Objects in the images are recognized by properties such as the average luminosity of a region, its localization, shape, dimensions, and other features. The process of visual recognition is affected by many factors, and their totality determines the uncertainties of diagnostics. To lower these uncertainties and speed up analysis, supporting computer software is increasingly used; it also provides many additional 3-D visualization and computation capabilities. The aim of the work is to investigate medical image processing and analysis in the frequency and joint space-frequency domains, and to develop algorithms suitable for segmenting regions of disease or other specific image regions.
APA, Harvard, Vancouver, ISO, and other styles
42

Slezák, Pavel. "Filtrace signálů EKG pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218722.

Full text
Abstract:
The thesis deals with the possibilities of using the wavelet transform in noise-reduction applications, primarily in the field of ECG signal denoising. We assess the impact of various filtering parameters, such as the method of thresholding the wavelet coefficients, the threshold level, and the choice of decomposition and reconstruction filter banks. Our results are compared with those of linear filtering. The results of wavelet Wiener filtering with a pilot estimate are also described; here we mainly tested combinations of decomposition and reconstruction filter banks. All the filtering methods described are tested on real ECG records with additive noise of myopotential character and are implemented in the Matlab environment.
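The wavelet denoising pipeline the abstract refers to (decompose, shrink the detail coefficients, reconstruct) can be sketched as follows. This is an illustrative toy version in plain Python with a Haar filter bank, not code from the thesis; the thesis works in Matlab and compares several filter banks and thresholding rules.

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar DWT level."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, thr):
    """Shrink each coefficient toward zero by thr (soft thresholding)."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def denoise(signal, levels=2, thr=0.5):
    """Multi-level Haar decomposition, soft-threshold the details, reconstruct."""
    details = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(soft_threshold(d, thr))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

With `thr=0` the round trip is an exact reconstruction (the Haar basis is orthonormal); with a positive threshold, small detail coefficients, which mostly carry noise, are suppressed.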
APA, Harvard, Vancouver, ISO, and other styles
43

Scharf, Benjamin [Verfasser], Hans-Jürgen [Akademischer Betreuer] Schmeißer, Hans [Akademischer Betreuer] Triebel, and Leszek [Akademischer Betreuer] Skrzypczak. "Wavelets in function spaces on cellular domains / Benjamin Scharf. Gutachter: Hans-Jürgen Schmeißer ; Hans Triebel ; Leszek Skrzypczak." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2013. http://d-nb.info/1033669970/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Scharf, Benjamin [Verfasser], Hans-Jürgen [Akademischer Betreuer] Schmeißer, Hans [Akademischer Betreuer] Triebel, and Leszek [Akademischer Betreuer] Skrzypczak. "Wavelets in function spaces on cellular domains / Benjamin Scharf. Gutachter: Hans-Jürgen Schmeißer ; Hans Triebel ; Leszek Skrzypczak." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2013. http://nbn-resolving.de/urn:nbn:de:gbv:27-20130411-112946-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chakravarthy, Chinna Narayana Swamy Thrilok. "Combinational Watermarking for Medical Images." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5833.

Full text
Abstract:
Digitization of medical data has become a very important part of the modern healthcare system. Data can be transmitted easily at any time, anywhere in the world, over the Internet to get the best possible diagnosis for a patient. This digitized medical data must be protected at all times to preserve doctor-patient confidentiality, and watermarking can be used as an effective tool to achieve this. In this research project, image watermarking is performed both in the spatial domain and in the frequency domain to embed a shared image together with the medical image data and the patient data, which includes the patient identification number. For the proposed system, the Structural Similarity index (SSIM) is used to measure the quality of the watermarking process instead of the Peak Signal-to-Noise Ratio (PSNR), since SSIM accounts for the visual perception of the images, whereas PSNR relies only on intensity levels. The system response under ideal conditions, as well as under the influence of noise, was measured and the results were analyzed.
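SSIM, the quality index the abstract prefers over PSNR, combines luminance, contrast, and structure comparisons of the original and watermarked images. A minimal single-window (global) sketch is shown below; it is illustrative only, since the standard definition, and presumably the thesis, computes SSIM over local windows and averages the resulting map.

```python
def ssim_global(x, y, L=255):
    """Single-window SSIM between two equal-size grayscale images given as
    flat pixel lists. L is the dynamic range (255 for 8-bit images)."""
    n = len(x)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizing constants
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((p - mx) ** 2 for p in x) / (n - 1)
    vy = sum((p - my) ** 2 for p in y) / (n - 1)
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / (n - 1)
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx*mx + my*my + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; any distortion introduced by embedding the watermark pushes the score below 1.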
APA, Harvard, Vancouver, ISO, and other styles
46

Hu, Zhihong. "Multimodal 3-D segmentation of optic nerve head structures from spectral domain Oct volumes and color fundus photographs." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/3470.

Full text
Abstract:
Currently available methods for managing glaucoma, e.g. planimetry on stereo disc photographs, involve a subjective component from either the patient or the examiner. In addition, several structures may overlap on the essentially 2-D images, which can decrease reproducibility. Spectral-domain optical coherence tomography (SD-OCT) provides a 3-D, cross-sectional, microscale depiction of biological tissues. Given the wealth of volumetric information at microscale resolution available with SD-OCT, it is likely that better parameters can be obtained for measuring glaucomatous change than is possible with fundus photography. The neural canal opening (NCO) is a single 3-D anatomic structure in SD-OCT volumes. It is proposed as the basis for a stable reference plane from which various optic nerve morphometric parameters can be derived. The overall aim of this Ph.D. project is to develop a framework to segment the 3-D NCO and the related retinal vessels, using information from SD-OCT volumes and/or fundus photographs, to aid the management of glaucomatous change. Based on the mutual positional relationship of the NCO and the vessels, a multimodal 3-D scale-learning-based framework is developed to identify them iteratively in SD-OCT volumes by incorporating each other's pre-identified positional information. The algorithm first applies a 3-D wavelet-transform-learning-based layer segmentation and pre-segments the NCO using graph search. To aid better NCO detection, the vessels are identified either by an SD-OCT segmentation approach that incorporates the pre-segmented NCO positional information into the vessel classification, or by a multimodal approach combining complementary features from SD-OCT volumes and fundus photographs (or a registered-fundus approach based on the original fundus vessel segmentation).
The obtained vessel positional information is then used to enhance the NCO segmentation by incorporating it into the cost function of the graph search. Note that the 3-D wavelet transform via the lifting scheme has been used to remove high-frequency noise and to extract texture properties in SD-OCT volumes. The graph search has been used for finding the optimal solution of 3-D multiple surfaces using edge and, additionally, regional information. In this work, the use of a 3-D wavelet-transform-learning-based cost function for the graph search is a further extension of the 3-D wavelet transform and graph search. The major contributions of this work include: 1) extending the 3-D graph-theoretic segmentation to the use of a 3-D scale-learning-based cost function, 2) developing a graph-theoretic approach for segmenting the NCO in SD-OCT volumes, 3) developing a 3-D wavelet-transform-learning-based graph-theoretic approach for segmenting the NCO in SD-OCT volumes by iteratively utilizing the pre-identified NCO and vessel positional information (from 4 or 5), 4) developing a vessel classification approach in SD-OCT volumes that incorporates the pre-segmented NCO positional information into the vessel classification to suppress NCO false positives, and 5) developing a multimodal concurrent classification and a registered-fundus approach for better identifying vessels in SD-OCT volumes using additional fundus information.
APA, Harvard, Vancouver, ISO, and other styles
47

Pabel, Roland [Verfasser], Angela [Akademischer Betreuer] Kunoth, Gregor [Akademischer Betreuer] Gassner, and Helmut [Akademischer Betreuer] Harbrecht. "Adaptive Wavelet Methods for Variational Formulations of Nonlinear Elliptic PDES on Tensor-Product Domains / Roland Pabel. Gutachter: Angela Kunoth ; Gregor Gassner ; Helmut Harbrecht." Köln : Universitäts- und Stadtbibliothek Köln, 2015. http://d-nb.info/1072500485/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tikkanen, P. (Pauli). "Characterization and application of analysis methods for ECG and time interval variability data." Doctoral thesis, University of Oulu, 1999. http://urn.fi/urn:isbn:9514252144.

Full text
Abstract:
The quantitation of the variability in cardiovascular signals provides information about the autonomic neural regulation of the heart and the circulatory system. Several factors have an indirect effect on these signals, and the recorded signal also contains artifacts and several types of noise. The dynamics of RR and QT interval time series have also been analyzed in order to predict the risk of adverse cardiac events and to diagnose them. Ambulatory measurement is an important and demanding setting for the recording and analysis of these signals, so sophisticated and robust signal analysis schemes are increasingly needed. In this thesis, essential points related to ambulatory data acquisition and the analysis of cardiovascular signals are discussed, including the accuracy and reproducibility of the variability measurement. The origin of artifacts in RR interval time series is discussed, and their effects and possible correction procedures are considered. Time series including intervals that differ from normal sinus rhythm sometimes carry important information, but may not as such be suitable for analysis by all approaches. A significant variation in the results of either intra- or intersubject analysis is unavoidable and should be kept in mind when interpreting the results. In addition to heart rate variability (HRV) measurement using RR intervals, the dynamics of ventricular repolarization duration (VRD) is considered using the invasively obtained action potential duration (APD) and different estimates of the QT interval taken from the surface electrocardiogram (ECG). Estimating the small VRD variability obviously involves potential errors and stricter requirements. In this study, the accuracy of VRD measurement was improved by a better time resolution obtained through interpolating the ECG.
Furthermore, the RTmax interval was chosen as the best QT interval estimate using simulated noise tests. A computer program was developed for time interval measurement from ambulatory ECGs. This thesis reviews the most commonly used analysis methods for cardiovascular variability signals, including time- and frequency-domain approaches. The estimation of the power spectrum is presented using an autoregressive (AR) model of the time series, and a method for estimating the powers and spectra of its components is also presented. Time-frequency and time-variant spectral analysis schemes with applications to HRV analysis are presented. As a novel approach, wavelet and wavelet packet transforms and the theory of signal denoising with several principles for threshold selection are examined. The wavelet-packet-based noise removal approach makes use of an optimized signal decomposition scheme called the best tree structure. Wavelet and wavelet packet transforms are further used to test their efficiency in removing simulated noise from the ECG. Power spectrum analysis is examined by means of wavelet transforms, which are then applied to estimate the nonstationary RR interval variability. Chaotic modelling is discussed with important questions related to HRV analysis.
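One of the best-known threshold-selection principles of the kind the thesis surveys is the universal (VisuShrink) threshold, where the noise level is estimated from the median absolute deviation of the finest-scale detail coefficients. The sketch below is illustrative only, not code from the thesis, which compares several such rules.

```python
import math

def universal_threshold(detail_coeffs, n):
    """VisuShrink universal threshold sigma * sqrt(2 ln n), with the noise
    level sigma estimated robustly from the median absolute value of the
    finest detail coefficients (MAD / 0.6745 for Gaussian noise).
    n is the length of the original signal."""
    absd = sorted(abs(c) for c in detail_coeffs)
    m = len(absd)
    median = absd[m // 2] if m % 2 else 0.5 * (absd[m//2 - 1] + absd[m//2])
    sigma = median / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))
```

The MAD-based estimate is preferred over the sample standard deviation because large signal-carrying coefficients would otherwise inflate the noise estimate.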
APA, Harvard, Vancouver, ISO, and other styles
49

Settipalli, Praveen. "AUTOMATED CLASSIFICATION OF POWER QUALITY DISTURBANCES USING SIGNAL PROCESSING TECHNIQUES AND NEURAL NETWORKS." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/430.

Full text
Abstract:
This thesis focuses on simulating, detecting, localizing, and classifying power quality disturbances using advanced signal processing techniques and neural networks. Discrete wavelet and Fourier transforms are used primarily for feature extraction, and classification is achieved with neural network algorithms. The proposed feature vector combines features computed by multiresolution analysis and the discrete Fourier transform, exploiting the benefit of having both time- and frequency-domain information simultaneously. Two classification algorithms are proposed, based on a feed-forward neural network and on adaptive resonance theory neural networks. The thesis demonstrates that the proposed methodology achieves good computational efficiency and a low classification error rate.
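A subband-energy feature vector of the kind described (multiresolution-analysis features feeding a disturbance classifier) can be illustrated with a toy Haar decomposition. The function names are hypothetical; the thesis pairs such features with discrete Fourier transform features and neural-network classifiers.

```python
import math

def haar_level(x):
    """One Haar DWT level: (approximation, detail) coefficient lists."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def mra_energy_features(signal, levels=3):
    """Energy of the detail subband at each decomposition level, plus the
    final approximation energy: a compact feature vector for a classifier."""
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(sum(c*c for c in d))
    feats.append(sum(c*c for c in a))
    return feats
```

Because the Haar basis is orthonormal, the features sum to the total signal energy, so they describe how a disturbance distributes its energy across scales, which is exactly what distinguishes, say, a transient from a harmonic distortion.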
APA, Harvard, Vancouver, ISO, and other styles
50

Tolosa, Thiago Antonio Grandi de. "Uma proposta para análise otimizada de correntes transitórias em materiais condutores." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-16082010-143424/.

Full text
Abstract:
In this work, a methodology based on the method of moments combined with a finite-difference approximation is developed for time-domain analysis of transient current distributions in conducting media. The computational procedure allows systems of long conductors with arbitrary cross section to be considered, discretized into filamentary elements; the integral equation that describes the problem is thus replaced by a matrix representation. In the case of non-homogeneous media, the finite element method is used to obtain the matrix that yields the solution of the problem. As the proposed method works in a time-stepping scheme, it is desirable, for greater computational efficiency, to reduce the size of the matrices involved. This can be done by applying the Discrete Wavelet Transform, as investigated in this work. An analysis of the numerical stability of the computational procedure is also carried out to establish its conditions of applicability. The procedure is validated by comparing its results with those of problems whose solutions are already known from measurements or from other methods. Results are presented for several cases of practical interest in Electromagnetic Compatibility, particularly the analysis of shielding effects and a crosstalk study in a multiconductor system. The procedure can still be improved by employing elements with more general shapes than the filamentary ones, allowing its extension to three-dimensional problems. Preconditioning of the matrix to be reduced by the Wavelet Transform can also be studied, for better performance of the computational process.
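The matrix-size reduction by the Discrete Wavelet Transform mentioned in the abstract rests on the observation that a wavelet transform concentrates smooth matrix rows into a few large coefficients, so the remainder can be zeroed and the system treated as sparse. The following Haar sketch is illustrative only and is not the thesis implementation.

```python
import math

def haar_step(v):
    """One Haar DWT step on a vector: averages followed by differences."""
    a = [(v[2*i] + v[2*i+1]) / math.sqrt(2) for i in range(len(v)//2)]
    d = [(v[2*i] - v[2*i+1]) / math.sqrt(2) for i in range(len(v)//2)]
    return a + d

def compress_matrix(m, thr):
    """Transform every row with one Haar step, then zero coefficients
    below thr. Smooth rows (typical of interaction matrices between
    distant elements) lose most of their entries."""
    out = []
    for row in m:
        t = haar_step(row)
        out.append([c if abs(c) >= thr else 0.0 for c in t])
    return out

def sparsity(m):
    """Fraction of exactly-zero entries after compression."""
    total = sum(len(r) for r in m)
    zeros = sum(1 for r in m for c in r if c == 0.0)
    return zeros / total
```

A constant row [1, 1, 1, 1] transforms to [√2, √2, 0, 0], so half of its entries vanish even before thresholding; rows that vary slowly behave similarly, which is what makes the reduced matrices cheaper to use in a step-by-step time solution.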
APA, Harvard, Vancouver, ISO, and other styles